Real-Time Systems Lab Group

This summer, I had the opportunity to participate in the University of Houston’s Real-Time Systems Lab Group. This opportunity came about because I took the initiative to email professors at the University of Houston who were conducting research in areas I was interested in. I am extremely grateful to have been chosen by Professor Cheng to be part of this group, and I want to share one of the valuable lessons I learned from the lab experience.

One of the projects we had the option to work on involved analyzing the CIFAR-10 dataset using convolutional neural networks (CNNs). The CIFAR-10 dataset consists of 60,000 32×32 color images that are frequently used for machine learning research. These images are divided into ten classes of objects, including airplanes, automobiles, birds, cats, and trucks.

CNN

CNNs are a type of deep-learning algorithm typically used to analyze visual data. They are composed of a series of layers, with the number of layers varying based on their intended use. Here is an overview of the typical kinds of layers used in these networks:

  • Convolutional layers: These layers are the core of the CNN, detecting features such as edges, textures, and shapes.
  • Pooling layers: These layers reduce the cost of computation by downsampling, shrinking the feature maps produced by the convolutional layers. This improves the model’s efficiency and helps it generalize to new data.
  • Fully connected layers: These layers combine the features extracted by the earlier layers to produce the final classification predictions.
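To make the three layer types concrete, here is a minimal sketch of how they can be stacked in Keras for 32×32 color images like CIFAR-10. This is an illustrative architecture, not the exact model used in the project; the layer counts and filter sizes are assumptions.

```python
# A small CNN for 32x32 RGB images such as CIFAR-10, built with tf.keras.
# Illustrative sketch only -- not the exact model from the project.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    # Convolutional layers: learn local features (edges, textures, shapes)
    layers.Conv2D(32, (3, 3), activation="relu"),
    # Pooling layer: downsample the feature maps to cut computation
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    # Fully connected layers: combine the learned features into predictions
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per CIFAR-10 class
])
```

The final Dense layer has ten units because CIFAR-10 has ten classes; the softmax activation turns the outputs into class probabilities.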

Working on the Project

I started the project by watching a YouTube tutorial suggested by the professor’s teaching assistant, which covered analyzing the CIFAR-10 dataset in TensorFlow using CNNs. The goal of the project was to improve the accuracy of the CNN in classifying the images.

Throughout the project, I learned how to preprocess the CIFAR-10 data, build and train a CNN model, and evaluate its performance. Preprocessing involved normalizing the pixel values from the original 0–255 range to a range of zero to one, because the smaller scale can improve the stability and speed of training. Building the CNN model required selecting appropriate hyperparameters, such as the number of layers, the filter sizes, and the learning rate. Training the model was an iterative process of fine-tuning and optimization to achieve better accuracy, for example by adding layers or changing the model architecture.
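The normalization step can be sketched in a few lines of NumPy. The array below is a random stand-in with the same shape and dtype as a batch of CIFAR-10 images; it is a placeholder, not real dataset values.

```python
import numpy as np

# Synthetic stand-in for a batch of CIFAR-10 images: uint8 pixels in [0, 255].
# (The shape mirrors the real data; the values are random placeholders.)
x_train = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)

# Normalize pixel values to the [0, 1] range, as described above.
x_train = x_train.astype("float32") / 255.0
```

After this step every pixel value lies between 0.0 and 1.0, which keeps the inputs on a scale that gradient-based training handles well.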

An important thing I learned from this project was the necessity of trial and error and attention to detail. Trial and error let me learn from my mistakes and improve my model, increasing its accuracy so it could better classify images. Attention to detail was crucial when adjusting hyperparameters and tweaking the model. I also learned to ask for help when needed: if it weren’t for the professor’s teaching assistant, I wouldn’t have known where to start.


Conclusion

Participating in the University of Houston’s Real-Time Systems Lab was an incredible learning experience. I gained hands-on experience with CNNs and deepened my understanding of machine learning. I am thankful to Professor Cheng and his teaching assistant for their support and guidance.
