Wednesday, 3 July 2024

Underfitting, Overfitting, and Fitting in Detail

Underfitting Model

 

Underfit model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Define a more complex neural network model (it still underfits the circular data)
model = Sequential([
    Dense(64, activation='relu', input_shape=(2,)),
    Dropout(0.5),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')
])
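For context, a minimal training sketch under assumed settings: the data pipeline is not shown in these notes, so the points are assumed here to come from sklearn's make_circles, and Adam with binary cross-entropy are typical defaults. Only the epoch count (100) is stated in the notes; sample counts and split sizes are illustrative.

from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

# Assumed data: two concentric circles with a little noise (illustrative sample count)
X, y = make_circles(n_samples=600, noise=0.1, factor=0.5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Assumed compile/fit settings; only epochs=100 comes from the notes
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train,
                    epochs=100,
                    validation_data=(X_test, y_test),
                    verbose=1)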

 

Learning:

Epoch 98/100

15/15 [==============================] - 0s 5ms/step - loss: 0.7819 - accuracy: 0.5833 - val_loss: 0.6862 - val_accuracy: 0.5000

Epoch 99/100

15/15 [==============================] - 0s 4ms/step - loss: 0.7416 - accuracy: 0.6111 - val_loss: 0.6839 - val_accuracy: 0.5000

Epoch 100/100

15/15 [==============================] - 0s 5ms/step - loss: 0.8744 - accuracy: 0.5278 - val_loss: 0.6917 - val_accuracy: 0.5000

2/2 [==============================] - 0s 0s/step

 

Precision: 0.00

 

Output: circlOUT00.png, circleOUT0.png


Notes

Changes made to increase precision:

Increased the number of layers to three hidden layers.

Increased the number of neurons in each layer.

Added dropout layers to prevent overfitting.

Increased the number of epochs to 100.

You can adjust the parameters further if needed, such as the number of layers, neurons, and dropout rates, to see if you can achieve even better performance.
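As a purely hypothetical illustration of such a tweak, the layer widths and dropout rates could be varied and the model retrained; the values below are chosen for illustration only, not taken from the notes.

# Hypothetical variant: wider first layer, lighter dropout (illustrative values)
variant = Sequential([
    Dense(128, activation='relu', input_shape=(2,)),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(1, activation='sigmoid')
])
variant.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])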

 

Overfitting Model

Overfit Model:

# Define a neural network model with more layers and neurons to overfit
model = Sequential([
    Dense(512, activation='relu', input_shape=(2,)),
    Dense(512, activation='relu'),
    Dense(512, activation='relu'),
    Dense(512, activation='relu'),
    Dense(1, activation='sigmoid')
])

 

Notes

To demonstrate overfitting, we can create a model that is too complex relative to the amount of data and the problem at hand. This can be done by significantly increasing the number of neurons and layers, removing dropout and batch normalization, and reducing the training data size. Overfitting occurs when the model performs very well on the training data but poorly on the validation/test data.
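A minimal sketch of the data-reduction step, assuming scikit-learn's train_test_split is used to keep only 10% of the samples for training (variable names are illustrative):

from sklearn.model_selection import train_test_split

# Keep only 10% of the data for training to encourage memorization;
# the rest is held out for validation/testing
X_train_small, X_holdout, y_train_small, y_holdout = train_test_split(
    X, y, train_size=0.1, random_state=42)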

 

Learning

Epoch 199/200

3/3 [==============================] - 0s 48ms/step - loss: 0.4401 - accuracy: 0.7889 - val_loss: 2.1862 - val_accuracy: 0.6000

Epoch 200/200

3/3 [==============================] - 0s 40ms/step - loss: 0.4413 - accuracy: 0.8000 - val_loss: 2.2906 - val_accuracy: 0.6000

29/29 [==============================] - 0s 4ms/step

 

Precision: 0.70

Accuracy: 0.67

4889/4889 [==============================] - 20s 4ms/step

 

Output: circleOUTfinal13overfit.png, circleOUTfinal4overfit.png


Explanation

Model Complexity:

Increased the number of layers and neurons per layer significantly.

Removed dropout and batch normalization to make the model more likely to overfit.

 

Training Data Reduction:

Reduced the training data to 10% to further induce overfitting.

 

Training and Evaluation:

Trained the model for 200 epochs.

Evaluated on the test data and calculated precision and accuracy.
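A sketch of how the precision and accuracy figures could be computed, assuming scikit-learn metrics and a 0.5 threshold on the sigmoid output (the exact evaluation code is not shown in these notes):

from sklearn.metrics import precision_score, accuracy_score

# Threshold the sigmoid outputs at 0.5 to get hard class labels
y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()

print(f"Precision: {precision_score(y_test, y_pred):.2f}")
print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}")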

 

Visualization:

Plotted training and validation accuracy/loss to show the overfitting behavior.

Plotted the decision boundary and classifications to visually demonstrate overfitting.
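A sketch of the decision-boundary plot, assuming matplotlib and a dense prediction grid (the grid is likely why the prediction step counts in the logs are so large; the resolution and styling below are illustrative):

import numpy as np
import matplotlib.pyplot as plt

# Predict the class for every point on a dense grid over the feature space
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 0.5, X[:, 0].max() + 0.5, 400),
                     np.linspace(X[:, 1].min() - 0.5, X[:, 1].max() + 0.5, 400))
grid = np.c_[xx.ravel(), yy.ravel()]
zz = (model.predict(grid) > 0.5).astype(int).reshape(xx.shape)

# Shade the predicted regions and overlay the true labels
plt.contourf(xx, yy, zz, alpha=0.3, cmap='coolwarm')
plt.scatter(X[:, 0], X[:, 1], c=y, s=10, cmap='coolwarm', edgecolors='k', linewidths=0.2)
plt.title('Decision boundary')
plt.show()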

 

This setup should result in a model that performs very well on the training data but poorly on the validation and test data, illustrating overfitting.

 

 

Fitting Model

Well-fit model:

from tensorflow.keras.layers import BatchNormalization

# Define a neural network model with batch normalization
model = Sequential([
    Dense(128, activation='relu', input_shape=(2,)),
    BatchNormalization(),
    Dropout(0.5),
    Dense(128, activation='relu'),
    BatchNormalization(),
    Dropout(0.5),
    Dense(128, activation='relu'),
    BatchNormalization(),
    Dropout(0.5),
    Dense(128, activation='relu'),
    BatchNormalization(),
    Dropout(0.5),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid')
])

 

Notes

The results indicate some improvement but also highlight that capturing the circular decision boundary with the current model might still be challenging. Let's try a few more strategies to improve the model's performance:

 

Increase the number of epochs: Train the model for more epochs.

Adjust the learning rate: Sometimes a different learning rate can help the model converge better.

Use a different optimizer: Experiment with different optimizers like RMSprop or Nadam (see the compile sketch after this list).

Batch normalization: Add batch normalization layers to help stabilize training.
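A sketch of the optimizer swap mentioned above; the explicit learning rate is an assumption (the notes only state that RMSprop was used), while the epoch count of 400 is stated below.

from tensorflow.keras.optimizers import RMSprop

# RMSprop with an explicit (illustrative) learning rate instead of the Adam default
model.compile(optimizer=RMSprop(learning_rate=1e-3),
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(X_train, y_train,
                    epochs=400,                      # stated in the notes
                    validation_data=(X_test, y_test),
                    verbose=1)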

 

Learning:

Epoch 399/400

23/23 [==============================] - 0s 10ms/step - loss: 0.4219 - accuracy: 0.8361 - val_loss: 0.4337 - val_accuracy: 0.8500

Epoch 400/400

23/23 [==============================] - 0s 10ms/step - loss: 0.4398 - accuracy: 0.8306 - val_loss: 0.4113 - val_accuracy: 0.8500

7/7 [==============================] - 0s 3ms/step

 

Precision: 0.93

Accuracy: 0.86

4864/4864 [==============================] - 15s 3ms/step

 

Output: circleOUTfinal1.png, circleOUTfinal2.png


Explanation

Batch Normalization:

Added after each dense layer to help stabilize and speed up the training process.

 

Optimizer:

Switched to RMSprop, which can sometimes lead to better convergence for certain problems.

 

Increased Epochs:

Increased to 400 epochs to give the model more time to learn the patterns in the data.

 

Precision and Accuracy:

Precision and accuracy are calculated to evaluate the model’s performance.

 

Visualization:

The decision boundary and true/predicted classifications are plotted to visually assess the model’s performance.

 

This approach should improve the model's ability to capture the circular decision boundary and provide better overall classification performance.