Brewing Neural Networks with TensorFlow: A Coffee Example for Beginners
2025-12-23
admin
Machine learning can feel intimidating if you’re starting from zero. But let’s make it fun: imagine you’re a barista predicting what coffee a customer wants. We’ll use TensorFlow to build a simple neural network that learns these patterns.

## 🛠 What is TensorFlow?

TensorFlow is an open‑source library created by Google. Think of it as a toolbox that helps us build and train neural networks. Instead of writing rules manually, we give TensorFlow examples, and it figures out the rules itself.

## 🧠 What is a Neural Network?

A neural network is inspired by how our brain works. It has:

- Inputs → information we feed in (like sleepiness, time of day, stress level).
- Hidden layers → where the “thinking” happens.
- Outputs → the prediction (espresso, latte, or black coffee).

## ☕ The Coffee Example

We’ll predict coffee choice based on multiple inputs:

- Sleepiness level (0–10)
- Time of day (0–10)
- Stress level (0–10)
- Weather (0 = cold, 1 = hot)

The model predicts one of three coffees:

- Espresso = 0
- Latte = 1
- Black Coffee = 2

## Step 1: Install TensorFlow
```shell
pip install tensorflow
```

## Step 2: Import Libraries
```python
import tensorflow as tf
from tensorflow import keras
import numpy as np
```

## Step 3: Prepare Data
```python
# Inputs: [sleepiness, time_of_day, stress, weather]
X = np.array([
    [9, 2, 7, 0],  # sleepy, morning, stressed, cold → espresso
    [3, 8, 2, 1],  # relaxed, night, low stress, hot → latte
    [6, 5, 5, 0],  # medium sleepy, afternoon, medium stress, cold → black coffee
])

# Outputs: espresso=0, latte=1, black=2
y = np.array([0, 1, 2])
```
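As a quick sanity check, you can print the shapes of `X` and `y` — three samples, four features, and one label per sample:

```python
import numpy as np

# Same toy dataset as above: 3 customers, 4 features each
X = np.array([
    [9, 2, 7, 0],
    [3, 8, 2, 1],
    [6, 5, 5, 0],
])
y = np.array([0, 1, 2])

print(X.shape)  # (3, 4): 3 samples, 4 input features
print(y.shape)  # (3,): one label per sample
```

Keras expects exactly this pairing: a 2‑D feature matrix and a 1‑D label vector of the same length.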
## Step 4: Normalizing Data

Neural networks work best when inputs are scaled to a similar range. For example, sleepiness (0–10) and weather (0/1) are on very different scales, so we normalize the values between 0 and 1:

```python
X = X / np.max(X, axis=0)
```
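To see what that division actually does, here it is spelled out on a standalone copy of the data — the per‑feature maxima for our dataset are [9, 8, 7, 1]:

```python
import numpy as np

X = np.array([
    [9, 2, 7, 0],
    [3, 8, 2, 1],
    [6, 5, 5, 0],
], dtype=float)

col_max = np.max(X, axis=0)  # per-feature maxima: [9, 8, 7, 1]
X_norm = X / col_max         # broadcasting divides each column by its own maximum

print(col_max)    # [9. 8. 7. 1.]
print(X_norm[0])  # [1.   0.25 1.   0.  ] — the "espresso" customer, rescaled
```

Every feature now lives in the 0–1 range, so no single input dominates the others just because of its units.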
## Step 5: Build the Neural Network

```python
model = keras.Sequential([
    keras.layers.Dense(8, activation='relu'),    # hidden layer
    keras.layers.Dense(8, activation='relu'),    # another hidden layer
    keras.layers.Dense(3, activation='softmax')  # output layer
])
```

## Step 6: Compile the Model
```python
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
```
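The loss `sparse_categorical_crossentropy` is what lets us use plain integer labels like `[0, 1, 2]`. If you used `categorical_crossentropy` instead, you would need one‑hot encoded labels — a quick sketch of the difference:

```python
import numpy as np

# Integer labels, as used with loss='sparse_categorical_crossentropy'
y_sparse = np.array([0, 1, 2])

# Equivalent one-hot labels, required by loss='categorical_crossentropy':
# row i has a 1 in column y_sparse[i] and 0 elsewhere
y_onehot = np.eye(3)[y_sparse]
print(y_onehot)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```

Both losses compute the same cross‑entropy; "sparse" just saves you the encoding step.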
## Step 7: Train the Model

```python
model.fit(X, y, epochs=100, batch_size=2)
```

## What happens during training?

- The model starts with random weights and biases.
- Weights are numbers that decide how strongly each input affects a neuron.
- Biases shift the output up or down.
- During each epoch, TensorFlow adjusts these weights and biases to reduce errors.
- Over time, the network learns the right “recipe” for predicting coffee choices.

You can even inspect them:
```python
for layer in model.layers:
    weights, biases = layer.get_weights()
    print("Weights:", weights)
    print("Biases:", biases)
```

This shows the actual numbers the network has learned.

## What are epochs?

- An epoch = one full pass through the training data.
- If you have 100 samples and train for 10 epochs, the model sees all 100 samples 10 times.

## What are batches?

- Instead of feeding all data at once, we split it into batches.
- Example: batch size = 2 → the model sees 2 samples at a time before updating weights.
- This makes training faster and more memory‑efficient.
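Epochs and batches boil down to simple arithmetic: with 3 samples and a batch size of 2, each epoch takes ⌈3/2⌉ = 2 weight updates, so 100 epochs means 200 updates in total:

```python
import math

samples, batch_size, epochs = 3, 2, 100

# The last batch may be smaller, so round the batch count up
steps_per_epoch = math.ceil(samples / batch_size)
total_updates = steps_per_epoch * epochs

print(steps_per_epoch)  # 2
print(total_updates)    # 200
```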
## Step 8: Test Predictions

```python
# X was normalized in place, so its column maxima are now all 1 —
# scale the new input with the raw data's per-feature maxima instead
raw_max = np.array([9, 8, 7, 1])
test = np.array([[8, 1, 6, 0]]) / raw_max  # normalize test input

prediction = model.predict(test)
coffee_type = np.argmax(prediction)
coffee_names = ["Espresso", "Latte", "Black Coffee"]
print("Suggested coffee:", coffee_names[coffee_type])
```

## 🔍 Converting Probabilities to Decisions

The model outputs probabilities, e.g.:
```python
prediction = [[0.7, 0.2, 0.1]]
```

This means:

- Espresso: 70%
- Latte: 20%
- Black Coffee: 10%

We use `np.argmax` to pick the index of the highest probability → Espresso:
```python
np.argmax(prediction)  # → 0, i.e. Espresso
```
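Where do those probabilities come from? The softmax output layer turns the last layer's raw scores into positive numbers that sum to 1. A small standalone sketch with made‑up scores:

```python
import numpy as np

logits = np.array([2.0, 0.8, 0.1])  # made-up raw scores from the output layer
probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax by hand

print(np.round(probs, 2))  # roughly [0.69 0.21 0.1]
print(np.argmax(probs))    # 0 → Espresso
```

Because the values always sum to 1 (up to float rounding), they can be read as a probability distribution over the three coffees.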
## 📊 Text‑Based Diagram

```
Inputs: [Sleepiness, Time of Day, Stress, Weather]
                      ↓
          [Hidden Layer 1: 8 neurons]
                      ↓
          [Hidden Layer 2: 8 neurons]
                      ↓
Outputs: [Espresso, Latte, Black Coffee]
```

## 📝 Viewing the Model Architecture

TensorFlow can print the model’s structure with:
```python
model.summary()
```

Example output:
```
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense (Dense)               (None, 8)                 40
 dense_1 (Dense)             (None, 8)                 72
 dense_2 (Dense)             (None, 3)                 27
=================================================================
Total params: 139
Trainable params: 139
Non-trainable params: 0
_________________________________________________________________
```

This shows each layer, its size, and how many parameters (weights + biases) it has.
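Those parameter counts are easy to verify by hand: a Dense layer has `inputs × units` weights plus one bias per unit (4×8+8 = 40, 8×8+8 = 72, 8×3+3 = 27):

```python
# (inputs, units) for each Dense layer in our model
layer_sizes = [(4, 8), (8, 8), (8, 3)]

# params per layer = weights (inputs * units) + biases (one per unit)
params = [n_in * n_out + n_out for n_in, n_out in layer_sizes]

print(params)       # [40, 72, 27]
print(sum(params))  # 139
```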
## 🎯 Wrapping Up

You just built your first neural network with TensorFlow!

- Inputs = customer mood, time, stress, weather
- Hidden layers = brain thinking
- Output = coffee choice
- Normalization = scaling inputs for better learning
- Epochs & batches = how training is structured
- Weights & biases = what the model learns
- `model.summary()` = quick view of architecture

## 🚀 Next Steps

- Add more inputs (like age, budget, or favorite flavors).
- Try different activation functions (sigmoid, tanh).
- Experiment with optimizers (SGD, RMSprop).
- Collect larger datasets for better accuracy.

Tags: how-to, tutorial, guide, ai, machine learning, neural network, tensorflow