Consider the pattern “direct creation from data”. If you have a Python list, how can you create a tensor from it using PyTorch?
Write the code in Python, including imports, and explain the output.
Use the following code to implement:
import torch

# This is a 2D list.
python_list = [[1,2,3],[4,5,6]]
my_tensor = torch.tensor(python_list)
print(my_tensor)
Output:
tensor([[1, 2, 3],
        [4, 5, 6]])
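Not stated in the notes above, but worth knowing: torch.tensor infers the dtype from the list contents. A minimal sketch:

```python
import torch

# Integer literals produce an int64 tensor.
from_ints = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(from_ints.dtype)    # torch.int64

# Any float literal in the list promotes the whole tensor to float32.
from_floats = torch.tensor([[1.0, 2, 3], [4, 5, 6]])
print(from_floats.dtype)  # torch.float32
```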
Consider the pattern “Creation from a desired shape”.
Write the code in Python, considering imports and typing, and explain the output.
Create a tensor of random numbers, a tensor of zeros, and a tensor of ones in PyTorch, based on the shape given below:
Example shape:
shape = (2,3)
import torch

shape: tuple = (2,3)
# tensor of ones with the shape=(2,3)
ones = torch.ones(shape)
# tensor of zeros with the shape=(2,3)
zeros = torch.zeros(shape)
# tensor of random numbers with the shape=(2,3)
random = torch.randn(shape)
Outputs:
print(f"Ones Tensor: \n{ones}")
tensor([[1., 1., 1.],
        [1., 1., 1.]])
print(f"Zeros Tensor: \n{zeros}")
tensor([[0., 0., 0.],
        [0., 0., 0.]])
print(f"Random Tensor: \n{random}")
tensor([[ 0.8521,  1.1576,  0.0318],
        [-0.7269, -0.6659, -1.7225]])
(the random values will differ on every run)
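Since torch.randn gives different values on every run, a reproducibility sketch (not in the original notes, but a common companion to random creation):

```python
import torch

# Fixing the seed makes randn reproducible across runs.
torch.manual_seed(0)
first = torch.randn(2, 3)

# Re-seeding with the same value replays the same random stream.
torch.manual_seed(0)
second = torch.randn(2, 3)

print(torch.equal(first, second))  # True
```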
Consider the pattern “Creation by mimicking another tensor”.
Write the code in Python, considering imports and typing, and explain the output.
Q: Let's say you need to create a tensor with the EXACT same SHAPE and TYPE as one that already exists. How would you do this in PyTorch?
Q: Additionally, how would you change the datatype? How can you get random ints?
Use the example tensor below to implement the code:
current_tensor = torch.tensor([[1,2],[3,4]])

# This is how we create a tensor with the same SHAPE and TYPE.
current_tensor = torch.tensor([[1,2],[3,4]])
same_tensor_rand_num = torch.randn_like(current_tensor, dtype=torch.float)
# (the dtype=torch.float override is needed here because current_tensor
# holds ints, and randn can only produce floats)
# If you want random ints:
same_tensor_rand_ints = torch.randint_like(current_tensor, high=10)
Outputs:
print(current_tensor)
tensor([[1, 2],
        [3, 4]])
print(same_tensor_rand_num)
tensor([[0.8521, 1.1576],
        [0.9521, 1.3576]])
(the random values will differ on every run)
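The *_like family is bigger than randn_like and randint_like; zeros_like and ones_like mimic shape and dtype the same way. A minimal sketch:

```python
import torch

current_tensor = torch.tensor([[1, 2], [3, 4]])

# Mimic both the SHAPE (2, 2) and the TYPE (int64) of current_tensor.
zeros_copy = torch.zeros_like(current_tensor)
ones_copy = torch.ones_like(current_tensor)

print(zeros_copy.shape, zeros_copy.dtype)  # torch.Size([2, 2]) torch.int64
print(ones_copy)
```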
In PyTorch, what are the three critical attributes of a tensor?
Consider showing this through code and printing them out.
Explain their purpose and additional details related to the attributes.
Consider this tensor:
tensor = torch.randn(2,3)
tensor = torch.randn(2,3)
print(f"Shape: {tensor.shape}")
print(f"Datatype: {tensor.dtype}")
print(f"Device: {tensor.device}")
Outputs:
Shape: torch.Size([2, 3])
Purpose of Shape: describes the shape of the tensor; it also becomes your number-one debugging tool, because most errors come from shape mismatches.
Datatype: torch.float32
Details of Datatype: the data type of the numbers; defaults to float32.
Device: cpu
Details related to Device: tells where the tensor lives. The options are cpu and cuda (GPU).
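The device attribute pairs naturally with .to(), which moves a tensor between devices. A minimal sketch, assuming a CUDA GPU may or may not be available:

```python
import torch

tensor = torch.randn(2, 3)
print(tensor.device)  # cpu (the default)

# Pick cuda when available, otherwise stay on cpu.
device = "cuda" if torch.cuda.is_available() else "cpu"
moved = tensor.to(device)
print(moved.device)
```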
In PyTorch, what is the default dtype, and why is it that dtype?
The default is float32, and this is not an accident. It is on purpose because of gradients: the entire engine of deep learning is making tiny, continuous adjustments to a model's weights, and those continuous adjustments need a floating-point type. float32 is the standard.
What must be a float, and what is okay not to be a float, when feeding data into a deep learning framework like PyTorch?
Model parameters (weights, biases) MUST be a float type. float32 is the standard.
Data that represents categories or counts can be integers.
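A minimal sketch of that split, using default dtypes (the variable names here are just illustrative):

```python
import torch

# Model parameters must be floats so gradients can flow.
weights = torch.randn(3, 2)       # float32 by default
print(weights.dtype)              # torch.float32

# Class labels (categories) are fine as integers.
labels = torch.tensor([0, 2, 1])  # int64 by default
print(labels.dtype)               # torch.int64
```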
How can you tell PyTorch a tensor is a learnable parameter (i.e. weights, biases) — what must you set? Then what does setting that do?
requires_grad=True
This basically says to PyTorch's AUTOGRAD: "this is a learnable parameter; from now on, track every single operation that happens to it."
In Python code, show the difference between a DATA tensor and a PARAMETER tensor.
Output how to differentiate one from the other.
# This is plain data, so no gradients are needed.
data = torch.tensor([[1,2],[3,4]])
# This is a parameter, therefore we need gradients.
weights = torch.tensor([[1.0],[2.0]], requires_grad=True)
Outputs:
print(data.requires_grad)     # False
print(weights.requires_grad)  # True
Write out the following function in PyTorch:
z = x*y
y = a+b
When writing this function out, you must do so in a way that creates a graph in PyTorch — what creates the graph?
Then show proof that this graph was created.
What does that proof say? What doesn't have an operation?
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)
x = torch.tensor(4.0, requires_grad=True)
# What creates the graph is requires_grad=True.

# First operation: this creates a new node in the PyTorch graph,
# which connects a and b through an add operation.
y = a + b
# This connects x and y through a multiplication operation.
z = x * y
.grad_fn is an attribute you can check on any tensor to see if it was CREATED BY AN OPERATION.
print(z.grad_fn)
# <MulBackward0 ...>, which means the z tensor was created by a multiplication operation
print(y.grad_fn)
# <AddBackward0 ...>, which means y was created by an addition operation
print(a.grad_fn)  # returns None, because a was created directly and not by an operation
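Once the graph exists, calling .backward() is what walks it in reverse and fills in .grad on the leaf tensors. A minimal sketch continuing the same function (z = x*(a+b), so dz/da = dz/db = x and dz/dx = y):

```python
import torch

a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)
x = torch.tensor(4.0, requires_grad=True)

y = a + b   # y = 5
z = x * y   # z = 20

# Walk the graph backwards and accumulate gradients into .grad
z.backward()

print(a.grad)  # dz/da = x = 4
print(b.grad)  # dz/db = x = 4
print(x.grad)  # dz/dx = y = 5
```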
Explain how element-wise multiplication works in PyTorch.
Q1: What is the symbol for element-wise multiplication?
Q2: Explain when element-wise multiplication will not work.
Write out the code and explain the operation that happens.
Use the following tensors to answer the questions above:
a = torch.tensor([[1,2],[3,4]])
b = torch.tensor([[10,20],[30,40]])
The symbol is *.
# Element-wise multiplication will not work with tensors of
# incompatible shapes; for this example, a and b must be
# the same shape.
# Basically you are putting one tensor on top of the other and
# multiplying straight down, position by position:
# 1*10, 2*20, 3*30, 4*40.
element_wise_product = a*b
Outputs:
print(element_wise_product)
tensor([[ 10,  40],
        [ 90, 160]])
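One nuance the "same size" rule glosses over: PyTorch broadcasts compatible shapes, so only truly incompatible shapes fail. A sketch (the row/c tensors are illustrative additions, not from the notes):

```python
import torch

a = torch.tensor([[1, 2], [3, 4]])
b = torch.tensor([[10, 20], [30, 40]])
print(a * b)  # same shape: multiplies position by position

# A (2,) row broadcasts across each row of the (2, 2) tensor.
row = torch.tensor([10, 100])
print(a * row)  # tensor([[ 10, 200], [ 30, 400]])

# Truly incompatible shapes raise a RuntimeError.
c = torch.tensor([[1, 2, 3]])  # shape (1, 3)
try:
    a * c
except RuntimeError:
    print("shape mismatch: (2, 2) and (1, 3) cannot broadcast")
```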
Explain how matrix multiplication works in PyTorch.
Q1: What is the symbol for matrix multiplication?
Q2: What is the rule of matrix multiplication that must be followed?
Write out an example that showcases matrix multiplication in Python using PyTorch. Explain the shapes.
The symbol for matrix multiplication in PyTorch is ‘@’.
The rule for matrix multiplication is that the number of columns of matrix 1 must equal the number of rows of matrix 2. The result then has shape (rows of matrix 1, columns of matrix 2).
# The shape is (2 rows, 3 columns).
m1 = torch.tensor([[1,2,3],[4,5,6]])
# The shape is (3 rows, 2 columns).
m2 = torch.tensor([[7,8],[9,10],[11,12]])
Output:
print(m1 @ m2)
tensor([[ 58,  64],
        [139, 154]])
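For 2D tensors, @ is equivalent to torch.matmul, and the shape rule can be checked directly; a minimal sketch of both points:

```python
import torch

m1 = torch.tensor([[1, 2, 3], [4, 5, 6]])       # shape (2, 3)
m2 = torch.tensor([[7, 8], [9, 10], [11, 12]])  # shape (3, 2)

# (2, 3) @ (3, 2): inner dims match (3 == 3), result is (2, 2).
result = m1 @ m2
print(result.shape)

# The @ operator and torch.matmul agree.
print(torch.equal(result, torch.matmul(m1, m2)))  # True
```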
Do matrix multiplication for the following example:
M1 = [[a,b,c],[d,e,f]]
M2 = [[g,h],[i,j],[k,l]]
M1 @ M2 = ?
=
[
[ (ag)+(bi)+(ck) , (ah)+(bj)+(cl) ],
[ (dg)+(ei)+(fk) , (dh)+(ej)+(fl) ]
]
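The symbolic result above can be sanity-checked by plugging in concrete numbers for a..f and g..l; a quick sketch:

```python
import torch

M1 = torch.tensor([[1., 2., 3.],
                   [4., 5., 6.]])   # [[a, b, c], [d, e, f]]
M2 = torch.tensor([[7., 8.],
                   [9., 10.],
                   [11., 12.]])     # [[g, h], [i, j], [k, l]]

# Top-left entry should be (a*g) + (b*i) + (c*k).
manual = 1*7 + 2*9 + 3*11
result = M1 @ M2
print(manual, result[0, 0].item())  # 58 58.0
```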