Cheatsheets: Part-I

TensorFlow is an end-to-end open-source platform for high-performance numerical computation, designed
specifically for machine learning and deep learning.

Its flexible architecture lets you easily deploy computation across a variety of platforms (CPUs,
GPUs, and TPUs) and environments, from mobile and edge devices to desktops and clusters of servers. It
also provides tools that help developers build and deploy machine-learning solutions.

The Google Brain team developed TensorFlow for internal use and released it publicly on November 9, 2015, under the Apache License 2.0. The first stable version of TensorFlow was released in 2017. Its open-source nature allows you to use, modify, and redistribute modified versions without paying any fee to Google.

Google uses TensorFlow in its search engine, translation, image captioning, and recommendation systems.

A tensor is a data structure used to represent data: a typed, multi-dimensional array. Tensors can be zero-dimensional, one-dimensional, two-dimensional, three-dimensional, or, in general, n-dimensional:

Zero-dimensional - Scalar

One-dimensional - Vector

Two-dimensional - Matrix

Three-dimensional - 3-D tensor

N-dimensional - n-D tensor

TensorFlow represents tensors as n-dimensional arrays of base datatypes. Each element in the tensor
has the same data type and the datatype is always known.

The rank of a tf.Tensor object is its number of dimensions, also called its order, degree, or
n-dimension.

To get the shape of a tensor, use:

```
>> tensor.shape
```
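For example, shape, rank, and data type can all be inspected on a constant tensor (a small sketch; the printed representations assume TensorFlow 2.x-style tensors, but `.shape` and `.dtype` are available in 1.x as well):

```python
import tensorflow as tf

# A 2-D tensor (matrix); every element shares the same dtype, tf.int32
t = tf.constant([[1, 2, 3], [4, 5, 6]])

print(t.shape)       # (2, 3)
print(len(t.shape))  # rank: 2
print(t.dtype)       # <dtype: 'int32'>
```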

- Values inside tf.Tensor can’t be changed, but tf.Variable represents a tensor whose value can be changed by running ops on it.
- tf.Variable exists outside the context of a single session.run call

A constant can be created using:
```
a = tf.constant([[1, 2], [3, 4]])
```

Variables are used to store the state of a graph in TensorFlow. They are mutable and can change
during execution. A variable must be initialized when it is declared, and its shape should be
specified when the graph is constructed.

Variables can be created using:

```
my_variable = tf.Variable([.5], dtype=tf.float32)
my_variable = tf.get_variable("my_variable", [1, 2, 3])
```

Its value can later be changed using tf.assign().
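A minimal sketch of mutating a variable, using the variable's own `assign` methods (under TF 2.x eager execution the assignment takes effect immediately; in TF 1.x graph mode the returned op must be run in a session):

```python
import tensorflow as tf

v = tf.Variable([0.5], dtype=tf.float32)

# Replace the variable's value in place
v.assign([1.5])
# Add to the current value in place
v.assign_add([0.5])

print(v.numpy())  # [2.]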
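A minimal sketch of mutating a variable, using the variable's own `assign` methods (under TF 2.x eager execution the assignment takes effect immediately; in TF 1.x graph mode the returned op must be run in a session):

```python
import tensorflow as tf

v = tf.Variable([0.5], dtype=tf.float32)

# Replace the variable's value in place
v.assign([1.5])
# Add to the current value in place
v.assign_add([0.5])

print(v.numpy())  # [2.]
```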

A placeholder is a variable that holds no value initially; a value is assigned to it later.
It is a place in memory where values will be stored later on.

- Placeholders are used to feed external data into a Graph
- Placeholder allows values to be assigned later
- The data type of placeholder must be specified during the creation of the placeholder
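The points above can be sketched as follows. This example builds a graph explicitly and uses `tf.compat.v1` so it also runs under TF 2.x; in TF 1.x you can call `tf.placeholder` and `tf.Session` directly:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # The dtype must be given up front; shape=[None] accepts vectors of any length
    x = tf.compat.v1.placeholder(tf.float32, shape=[None])
    y = x * 2.0

with tf.compat.v1.Session(graph=g) as sess:
    # feed_dict supplies the external data at run time
    out = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})
    print(out)  # [2. 4. 6.]
```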

A graph is a flowchart of the operations you want to perform on your input.
It defines computations: the nodes represent units of computation, and the
edges represent the data consumed or produced by a computation.
Note that a graph itself doesn't hold any values.

Also, you can work with multiple graphs in Tensorflow. You just need to create multiple graphs and each
graph will have its own session.
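As a sketch of working with multiple graphs (written via `tf.compat.v1.Session` so it runs under both TF 1.x and 2.x; in TF 1.x plain `tf.Session` works too):

```python
import tensorflow as tf

# Two independent graphs, each holding its own operations
g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(10)

g2 = tf.Graph()
with g2.as_default():
    b = tf.constant(20)

# Each graph is executed in its own session
with tf.compat.v1.Session(graph=g1) as sess1:
    a_val = sess1.run(a)  # 10
with tf.compat.v1.Session(graph=g2) as sess2:
    b_val = sess2.run(b)  # 20
```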

Representing computations as dataflow graphs gives TensorFlow several advantages:

- Parallelism
- Distributed execution
- Compilation
- Portability

A session allows us to execute the operations specified in a dataflow graph. We can execute the whole
graph or just a subgraph.

A session does these two things:

- It allocates resources
- It stores the actual values of intermediate results

A default in-process session can be created like this:
```
with tf.Session() as sess:
    # Perform operations here
```

A remote session can be created by:

```
with tf.Session("grpc://example.org:2222") as sess:
    # Perform operations here
```


```
Add two tensors of the same type, x + y
>> tf.add(x, y)
Subtract tensors of the same type, x - y
>> tf.subtract(x, y)
Multiply two tensors element-wise
>> tf.multiply(x, y)
Take the element-wise power of x to y
>> tf.pow(x, y)
Equivalent to pow(e, x), where e is Euler’s number (2.718…)
>> tf.exp(x)
```

```
Equivalent to pow(x, 0.5)
>> tf.sqrt(x)
Take the element-wise division of x and y
>> tf.div(x, y)
Same as tf.div, except casts the arguments as a float
>> tf.truediv(x, y)
Same as truediv, except rounds down the final answer into an integer
>> tf.floordiv(x, y)
Takes the element-wise remainder from division
>> tf.mod(x, y)
```

```
# Create a constant
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
# Create a variable w which will be mutable
w = tf.Variable(tf.random_uniform([2, 2]))
# Create computation graph
y = tf.matmul(x, w)
output = tf.nn.softmax(y)
init_op = w.initializer
with tf.Session() as sess:
    # Run the initializer on `w`
    sess.run(init_op)
    # Evaluate `output`. `sess.run(output)` will return a NumPy array containing
    # the result of the computation.
    print(sess.run(output))
    # Evaluate `y` and `output`. Note that `y` will only be computed once, and its
    # result used both to return `y_val` and as an input to the `tf.nn.softmax()`
    # op. Both `y_val` and `output_val` will be NumPy arrays.
    y_val, output_val = sess.run([y, output])
```

The graph visualizer is a component of TensorBoard that renders the structure of your graph visually in
a browser.

The graph can be saved for visualization using:

```
with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/log/...", sess.graph)
```

To see the graph, start the TensorBoard server and navigate to the Graphs tab in your browser.

"TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later."

In TensorFlow 1.x, eager execution must be enabled explicitly:

```
tf.enable_eager_execution()
```

In version 2.0 and above, eager execution is enabled by default.
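Under TF 2.x, for example, operations return concrete values immediately with no session involved (a small sketch):

```python
import tensorflow as tf  # 2.x

# No graph or session needed under eager execution
print(tf.executing_eagerly())  # True

# Operations evaluate immediately and return concrete values
result = tf.add(2, 3)
print(int(result))  # 5
```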