## TensorFlow Cheatsheet

### General cheatsheet

Initialize a `tf.Variable` from constants or generated values (`tf.zeros`, `tf.zeros_like`, `tf.linspace`, etc.).
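A minimal sketch of the generators mentioned above (runs eagerly under TF 2.x; under TF 1.x you would additionally run an init op in a session — the variable names here are just for illustration):

```python
import tensorflow as tf

# Variables initialized from generated values
v_zeros = tf.Variable(tf.zeros([2, 3]), name='v_zeros')      # all-zero 2x3 matrix
v_like = tf.Variable(tf.zeros_like(v_zeros), name='v_like')  # same shape/dtype as v_zeros
v_lin = tf.Variable(tf.linspace(0.0, 1.0, 5), name='v_lin')  # 5 evenly spaced values in [0, 1]

print(v_zeros.shape, v_like.shape, v_lin.shape)  # (2, 3) (2, 3) (5,)
```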

### Graph vs. Session

(following the great explanation by Danijar Hafner)

```
graph = tf.Graph()
with graph.as_default():
    v = tf.Variable(tf.random_normal(shape=[1]), name='foo')
    print(v.shape, v.shape.ndims, v.shape.num_elements())  # (1,) 1 1

    if 1:  # Don't do this!
        # tf.global_variables_initializer() defines an op that initializes all
        # variables in the graph *so far*; call it AFTER they were all defined,
        # otherwise you'll get something like
        # "FailedPreconditionError: Attempting to use uninitialized value foo_scalar"
        init_op_notgood = tf.global_variables_initializer()

    # shape=[] or shape=() defines a 0-dimensional tensor, i.e. a scalar
    v_scalar = tf.Variable(tf.random_normal(shape=[]), name='foo_scalar')
    print(v_scalar.shape, v_scalar.shape.ndims, v_scalar.shape.num_elements())  # () 0 1

    # Add an op to initialize all variables in the graph (actually probably best
    # to define this even further down after the entire graph was constructed,
    # but defining it here is already okay for our example)
    init_op = tf.global_variables_initializer()

    assign_v = v.assign([101.])
    assign_v_scalar = v_scalar.assign(102)

c = tf.constant(4.0)  # Defined outside the 'with', so attached to the default graph (!)

# Sanity check
print(c.graph == tf.get_default_graph(), v.graph == graph, graph == tf.get_default_graph())
# True True False
```

Then instantiate a Session to run our graph:

```
with tf.Session(graph=graph) as sess:
    sess.run(init_op)
    print(sess.run(v))         # e.g. [-0.40790001]
    print(sess.run(v_scalar))  # e.g. 1.30248

    if 1:  # Don't do this part
        sess.run(init_op_notgood)
        print(sess.run(v))         # e.g. [0.33414543], a different value than above
        print(sess.run(v_scalar))  # e.g. 1.30248, same as above

    print(sess.run([assign_v, assign_v_scalar]))
    # [array([ 101.], dtype=float32), 102.0] -- return values probably not really interesting here
    print(sess.run(v))         # [ 101.]
    print(sess.run(v_scalar))  # 102.0

    if 0:  # Error, as 'c' is not an element of the current graph...
        print(sess.run(c))
        # ValueError: Fetch argument <tf.Tensor 'Const_50:0' shape=() dtype=float32>
        # cannot be interpreted as a Tensor. (Tensor Tensor("Const_50:0", shape=(),
        # dtype=float32) is not an element of this graph.)
```
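For comparison, a sketch of fetching the orphaned `c` from its own (default) graph. It is written with `tf.compat.v1` so it also runs under TF 2.x; in a real TF 1.x install these are plain `tf.` calls:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # needed under TF 2.x only; graph mode is the default in TF 1.x

c = tf1.constant(4.0)                      # attaches to the default graph
assert c.graph is tf1.get_default_graph()

with tf1.Session() as sess:                # no graph= argument -> uses the default graph
    result = sess.run(c)
print(result)  # 4.0
```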

## Docker in Windows, allocate more memory

### Some installation / cheatsheet

Using Docker for Windows on Windows 10; version 17.03.0-ce-win1 (10296), channel: stable, build 94675c5.

(Tip: Also install Kitematic, then you can right-click the running Docker icon and from there launch Kitematic for a nice GUI to see your containers.)

Basic commands:

```# Create a tensorflow container:
docker run -it -p 8888:8888 -p 6006:6006 --name my_tensor_flow -v C:/Data/Docker:/data -v C:/Dev/python:/devpython tensorflow/tensorflow
# Run it in the future:
docker start -ai my_tensor_flow
# To connect to it with bash: (note: /notebooks is where the notebooks are kept, and we can already access the host's C:/Data/Docker in /data)
docker exec -it my_tensor_flow bash
# Then from the bash can run: tensorboard --logdir=/data/tensorboard/5/
```

Apart from the nice GUI, there are some useful commands from the command line:

```
docker stats [--all]
docker ps [--all]
docker container list [--all]
```

### Allocating more memory

For my running container, `docker stats` showed (in “MEM USAGE / LIMIT”) that it was bounded by 1.934 GiB. Indeed, “Hyper-V Manager” showed “MobyLinuxVM” machine has only 2GB.

To allocate more memory: Docker’s settings, Advanced, set Docker’s memory e.g. to 8448MB. Docker will restart. Verify that “Hyper-V Manager” now shows “MobyLinuxVM” has assigned memory of 8448 MB. Run your previous container `docker start -ai container-name`, and now `docker stats` will show a memory bound of 8.003 GiB. Success.
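To double-check from inside the container (not just via `docker stats`), you can look at the memory the kernel reports to the container; a sketch, assuming a Linux container with `free` available:

```shell
# Run inside the container, e.g. after: docker exec -it my_tensor_flow bash
free -h                        # human-readable totals; the "Mem:" row should reflect the new limit
grep MemTotal /proc/meminfo    # the same total in kB
```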

Note: running something like

`docker run -it -p ... --name ... --memory-swap -1 --memory 8g -v ...`

will not work if your “MobyLinuxVM” doesn’t have enough assigned memory.