this is CDI.

Machine Learning | Embedded Systems

The Congenital Disorder Identifier employs computer vision to detect congenital disorders in resource-constrained areas, showcasing self-contained operation and local model training.


Overview

The project centers on an embedded ML system, the "Congenital Disorder Identifier, Embedded Camera." Using a computer vision model (FOMO MobileNetV2 0.1), the system performs complex visual tasks to identify external congenital disorders. It operates as a self-contained, portable device that runs machine learning at the edge, with no need for server connectivity. This design addresses the diagnostic gap in medical care in areas lacking network infrastructure, providing a full-fledged ML solution for resource-constrained environments: the device carries out continuous data collection, model training, and deployment entirely locally, underscoring its autonomy and suitability for regions with limited network access. The project aims to bridge healthcare disparities by enabling on-site, advanced diagnostic capabilities in underserved areas.


Description

The "Congenital Disorder Identifier, Embedded Camera" is a self-contained, portable ML device for computer vision, enabling the identification of external congenital diseases without reliance on server connectivity.

  1. Neural Networks

    • Neural networks operate by leveraging interconnected layers that mimic the human brain's neural connections. These layers, organized hierarchically, analyze input data through a series of transformations to extract intricate patterns and features. For object classification specifically, the network processes visual data and learns to recognize distinctive features associated with different objects.
    • The process involves an initial input layer receiving raw pixel data, which then passes through multiple hidden layers. Each hidden layer contains neurons that apply mathematical operations to the input, gradually transforming it into a representation that highlights relevant features. These features become increasingly abstract as they move through the layers.
    • During training, the neural network learns to adjust the weights and biases of its connections by comparing its output with the ground truth (correct classification). This iterative learning process, often facilitated by optimization algorithms, refines the network's internal parameters to improve its ability to accurately classify objects.
    • In the case of object classification, the final output layer produces a probability distribution across different classes, indicating the likelihood of the input image belonging to each class. The class with the highest probability is then assigned as the predicted label for the object.
    • The neural network's capability to learn and adapt from data allows it to excel at object recognition and classification tasks, making it a key technology for powerful visual ML diagnostic systems, especially when deployed at the edge in resource-constrained environments using tiny neural network architectures. A minimal classifier sketch illustrating this flow appears at the end of this section.
  2. Models

    • Because the project targets deployment on an edge device, Tiny Machine Learning (TinyML) is central to the design. TensorFlow Lite, tailored for edge computing, is key to running machine learning on the embedded camera system thanks to its efficient model execution and reduced memory footprint.
    • To achieve efficient, real-time object detection, we integrated the FOMO MobileNetV2 0.1 architecture, designed for fast inference and accurate identification on constrained hardware.
    • The software implementation concludes by embedding the trained TensorFlow Lite model into the edge device's firmware, giving it real-time object detection and classification capabilities (a conversion-and-inference sketch appears at the end of this section).
  3. Case Studies

    • The purpose of these case studies is to validate the functionality of the embedded ML system. Together they demonstrate its proficiency in real-time object identification, trait-based classification, and the classification of congenital disorders from external features, while addressing the variability of real-world medical data and the cultural considerations required for accurate, adaptable diagnosis in diverse contexts.

    • Case Study I: Foundational Object Classification
      • Objective: Validate system flow and edge device capability in real-time object identification.
        Case Study: Identifying varieties of fruits (Apples, Bananas, Grapes).
        Significance: Demonstrates the edge device's proficiency in executing an ML model for basic object classification, laying the groundwork for subsequent studies.


    • Case Study II: Trait-Based Banana Classification
      • Objective: Transition from object identification to trait-based classification, simulating diagnostic processes.
        Case Study: Classifying Bananas based on external features indicative of developmental stage.
        Significance: Illustrates how a TinyML model can detect variations in a common entity, setting the stage for addressing more complex diagnostic challenges in the Congenital Disorder Classification case study.


    • Case Study III: Congenital Disorder Classification
      • Objective: Classify congenital disorders (Syndactyly, Cleft Lip) based on external features.
        Case Study: Examining complexities in anomaly detection in human anatomy using diverse medical data.
        Significance: Highlights challenges in real-world medical data (variations in lighting, quality) and introduces the importance of cultural considerations. Emphasizes the system's adaptability and sensitivity to both medical and cultural nuances for accurate diagnosis in diverse contexts.
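
To make the classification flow described under "Neural Networks" concrete, below is a minimal sketch in Python (TensorFlow/Keras) of an image classifier whose final softmax layer produces a probability distribution over classes, with the highest-probability class taken as the prediction. The 96x96 input size, the layer sizes, and the three fruit classes (mirroring Case Study I) are illustrative assumptions rather than the project's exact architecture.

```python
# Minimal image-classification sketch: raw pixels in, class probabilities out.
# Assumes TensorFlow/Keras; sizes and class count are placeholders.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 3  # e.g. apple, banana, grapes (as in Case Study I)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),                  # input layer: raw pixel data
    tf.keras.layers.Conv2D(8, 3, activation="relu"),           # hidden layers extract
    tf.keras.layers.MaxPooling2D(),                             # increasingly abstract features
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # probability distribution over classes
])

# Training compares outputs with ground-truth labels and adjusts weights/biases.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # dataset loading omitted

# Inference: the class with the highest probability becomes the predicted label.
frame = np.random.rand(1, 96, 96, 3).astype("float32")         # stand-in for a camera frame
probabilities = model.predict(frame)[0]
predicted_class = int(np.argmax(probabilities))
print(predicted_class, probabilities)
```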
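
The deployment step described under "Models" can be sketched as follows, again as an illustration under stated assumptions rather than the project's actual pipeline (which uses FOMO MobileNetV2 0.1 rather than this toy model). It reuses `model` from the sketch above, converts it to a compact TensorFlow Lite flatbuffer, and runs it with the TFLite interpreter; the file name and quantization setting are placeholders.

```python
# Convert the trained Keras model to TensorFlow Lite and run it with the
# TFLite interpreter, as an edge deployment would. Reuses `model` from the
# previous sketch; "cdi_model.tflite" is a hypothetical file name.
import numpy as np
import tensorflow as tf

# 1. Convert to a compact .tflite flatbuffer with default (dynamic-range) quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("cdi_model.tflite", "wb") as f:
    f.write(tflite_model)

# 2. Load the flatbuffer and run inference on a (stand-in) camera frame.
interpreter = tf.lite.Interpreter(model_path="cdi_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

frame = np.random.rand(1, 96, 96, 3).astype("float32")
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("class scores:", scores)
```

On an actual microcontroller, the same flatbuffer would typically be compiled into the firmware as a C byte array and executed with TensorFlow Lite for Microcontrollers rather than the Python interpreter.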