# Placeholders

So far we have used `Variables` to manage our data, but there is a more basic structure, the `placeholder`.
A `placeholder` is simply a variable that we will assign data to at a later date.
It allows us to create our operations and build our computation graph without needing the data.
In `TensorFlow` terminology, we then *feed* data into the graph through these placeholders.
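A minimal sketch of such an example (not the original listing from this lesson; written with the 1.x-style API through `tf.compat.v1` so it also runs on TensorFlow 2, and assuming a fixed-length `x` of three values):

```python
import tensorflow.compat.v1 as tf  # 1.x-style graph API

tf.disable_eager_execution()  # placeholders require graph mode

x = tf.placeholder("float", [3])  # a place for data we will supply later
y = x * 2                         # the operation: double every element of x

with tf.Session() as session:
    result = session.run(y, feed_dict={x: [1, 2, 3]})
    print(result)  # [2. 4. 6.]
```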

This example works a little differently from our previous ones, so let’s break it down.

First, we import `tensorflow` as normal. Then we create a `placeholder` called `x`, i.e. a place in memory where we will store a value later on.

Then we create a Tensor called `y`, which is the operation of multiplying `x` by 2.
Note that we haven’t defined any initial values for `x` yet.

We now have an operation (`y`) defined, and can run it in a session.
We create a session object, and then run just the `y` variable.
Note that this means that if we defined a much larger graph of operations, we can run just a small segment of the graph.
This subgraph evaluation is actually a big selling point of TensorFlow, and one that isn’t present in many other libraries that do similar things.

Running `y` requires knowledge about the values of `x`. We define these inside the `feed_dict` argument to `run`.
We state here that the values of `x` are `[1, 2, 3]`.
We run `y`, giving us the result of `[2, 4, 6]`.

Placeholders do not need to be statically sized. Let’s update our program to allow `x` to take on any length.
Change the definition of `x` to be:

Now, when we define the values of `x` in the `feed_dict`, we can have any number of values.
The code should still work and give the same answer, but now it will also work with any number of values in `feed_dict`.

Placeholders can also have multiple dimensions, allowing us to store arrays. In the following example, we create a 2 by 3 matrix and store some numbers in it. We then use the same operation as before to double the numbers element-wise.

The first dimension of the placeholder is `None`, meaning we can have any number of rows.
The second dimension is fixed at 3, meaning each row needs to have three columns of data.
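A sketch of this example, assuming the 1.x-style API via `tf.compat.v1` and illustrative data values:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Any number of rows (None), exactly 3 columns per row
x = tf.placeholder("float", [None, 3])
y = x * 2

with tf.Session() as session:
    x_data = [[1, 2, 3],
              [4, 5, 6]]
    result = session.run(y, feed_dict={x: x_data})
    print(result)
```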

We can extend this to take an arbitrary number of `None` dimensions.
In this example, we load up the image from our last lesson, then create a placeholder that stores a slice of that image.
The slice is a 2D segment of the image, but each “pixel” has three components (red, green, blue).
Therefore, we need `None` for the first two dimensions, but need `3` (or `None`, which would also work) for the last dimension.
We then use TensorFlow’s `slice` method to take a subsegment out of the image to operate on.
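A sketch of this example; since the image from the last lesson isn’t reproduced here, a random array stands in for it, and the slice coordinates are illustrative:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Stand-in for the image from the previous lesson: 128x192 pixels, 3 channels
raw_image_data = np.random.randint(0, 256, size=(128, 192, 3)).astype(np.uint8)

# Any height and width, but exactly 3 colour channels
image = tf.placeholder("uint8", [None, None, 3])
# tf.slice takes a begin index and a size for each dimension
slice_op = tf.slice(image, [16, 0, 0], [64, 192, 3])

with tf.Session() as session:
    result = session.run(slice_op, feed_dict={image: raw_image_data})
    print(result.shape)  # (64, 192, 3)
```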

The result is a subsegment of the image.

## Exercises

1) Take a look at the other functions for arrays in TensorFlow at the official documentation.

2) Break the image apart into four “corners”, then stitch it back together again.

3) Convert the image into grayscale. One way to do this would be to take just a single colour channel and show that. Another way would be to take the average of the three channels as the gray colour.
