First, set up CUDA:
In my case, I downloaded it from
Select the right version for your machine. In my case, I selected the options shown below:
After selecting the options that match your computer, at the bottom of the page we get the commands that we need to run from the terminal. In my case, I executed the following in the terminal:
wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda_12.1.0_530.30.02_linux.run
sudo sh cuda_12.1.0_530.30.02_linux.run
You should see the following sequence of screens.
Note that I did not choose to install the Driver!
If the installation was successful, you will see something similar to
Go to https://pytorch.org/get-started/locally/
You will be presented with this screen (at the time of writing this post, this is what we see):
Make sure that you select the CUDA version that you installed in the previous step. In my case I installed CUDA 12.1 in the previous step, so I select CUDA 12.1 when installing PyTorch. Run the command that appears at the bottom. In my case, you see that I need to run the following:
pip3 install torch torchvision torchaudio
I executed this command in the terminal.
While the installation was in progress, I got this error:
ERROR: Package 'networkx' requires a different Python: 3.8.10 not in '>=3.9'
So I installed a different networkx package that is supported by my current system setup. To install the different (slightly older) networkx package, I executed the following in the terminal:
pip install networkx==3.1
Then I tried the installation again with
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
This time everything worked!
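The networkx conflict above happens because the version pip tried to install requires Python 3.9 or newer, while my system runs Python 3.8.10. A quick sketch for checking your interpreter version before deciding whether to pin an older networkx (the `(3, 9)` cutoff is taken from the error message above):

```python
import sys

# Print the version of the Python interpreter pip will install into
print(sys.version_info[:3])

# Pin an older networkx (e.g. 3.1) only when the interpreter is older than 3.9
needs_pin = sys.version_info < (3, 9)
print("pin networkx==3.1:", needs_pin)
```

On my setup this prints `(3, 8, 10)` and `pin networkx==3.1: True`; on Python 3.9+ no pin is needed.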
To verify the PyTorch installation, we can run the following in the terminal. First, start a Python session by typing "python" in the terminal. You should see something similar to
Python 3.8.10 (default, Nov 22 2023, 10:22:35)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Then execute the following:
import torch
x = torch.rand(5, 3)
print(x)
The output should be similar to
tensor([[0.5924, 0.3590, 0.2785],
[0.9726, 0.3256, 0.2957],
[0.4339, 0.9546, 0.7809],
[0.9702, 0.9863, 0.8537],
[0.1784, 0.2245, 0.5970]])
Now, to check whether CUDA is available, run
import torch
torch.cuda.is_available()
If CUDA is available, you should see "True" in the terminal.
Altogether, the verification steps in my terminal look like the following:
At this point you should be able to run a machine learning application locally.
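Before the full example, one common pattern worth knowing: select the GPU when it is available and fall back to the CPU otherwise, then move tensors (and models) to that device with `.to()`. A minimal sketch:

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("using device:", device)

# Move a tensor to the chosen device; models are moved the same way
x = torch.rand(5, 3).to(device)
print(x.device)
```

The example below runs on the CPU for simplicity, but adding `.to(device)` to the model and tensors is all it takes to train on the GPU.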
# Importing necessary libraries
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns

# Setting random seed for reproducibility
torch.manual_seed(42)

# Loading the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardizing the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert NumPy arrays to PyTorch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.long)
y_test = torch.tensor(y_test, dtype=torch.long)

# Define a simple neural network model
class SimpleNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Parameters
input_size = X_train.shape[1]
hidden_size = 128
num_classes = len(np.unique(y_train))
num_epochs = 100
learning_rate = 0.001

# Initialize the model
model = SimpleNN(input_size, hidden_size, num_classes)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# List to store the training loss over epochs
train_losses = []

# Training the model
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(X_train)
    loss = criterion(outputs, y_train)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    train_losses.append(loss.item())
    if (epoch + 1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

# Plotting the training loss over epochs
plt.plot(train_losses, label='Training Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss Over Epochs')
plt.legend()
plt.show()

# Evaluating the model
with torch.no_grad():
    outputs = model(X_test)
    _, predicted = torch.max(outputs, 1)
    accuracy = accuracy_score(y_test.numpy(), predicted.numpy())
    print('Accuracy:', accuracy)

# Creating a confusion matrix
cm = confusion_matrix(y_test.numpy(), predicted.numpy())
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=iris.target_names, yticklabels=iris.target_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()
The expected result of this algorithm is a trained neural network model that can accurately classify instances of the Iris dataset into one of the three classes (Setosa, Versicolor, or Virginica).
After training, the model is evaluated on the testing set to assess its performance on unseen data. The evaluation metric used in this example is accuracy, which measures the proportion of correctly classified instances in the testing set.
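To make the accuracy metric concrete, here is a tiny sketch computing it by hand on hypothetical labels (the same quantity `accuracy_score` returns in the example above):

```python
# Hypothetical true labels and predictions for five test instances
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

# Accuracy = correctly classified instances / total instances
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.8 (4 of 5 predictions match)
```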
After running this example, you will see the following plots:
This confusion matrix shows that for the code above, the network predicts the classes accurately, except for one example (the cell with the number 1). This 1 in cell (2, 3) means that for one of the examples, the neural network predicted that the class was Virginica, but the correct class was Versicolor.
Overall, the neural network only made one mistake.
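A confusion matrix is simply a count of (true class, predicted class) pairs: rows are the true classes, columns the predicted ones, so off-diagonal cells are mistakes. A minimal by-hand sketch with hypothetical labels (this is what `confusion_matrix` computes in the example above):

```python
# Hypothetical true labels and predictions for three classes
num_classes = 3
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]

# cm[t][p] counts instances of true class t predicted as class p
cm = [[0] * num_classes for _ in range(num_classes)]
for t, p in zip(y_true, y_pred):
    cm[t][p] += 1

print(cm)  # [[2, 0, 0], [0, 1, 1], [0, 0, 2]]
```

Here the single off-diagonal 1 in row 1, column 2 plays the same role as the lone mistake in the plot above: one instance of class 1 was predicted as class 2.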
This last plot simply shows that as the network goes over the examples, the training gradually produces better predictions (lower cost, better network responses; not always, but for the purposes of this article we can settle for that).
Good luck!