
🌟 NeuroFlux - Revolutionary Microscopic AI


🌍 NeuroFlux Manifesto

NeuroFlux is a technological revolution pushing the boundaries of embedded AI. Our vision: an AI that self-optimizes, self-repairs, and adapts to any environment, from the Raspberry Pi to ESP32 satellites.

🚀 Phase 1: Guerrilla Optimization

Quantum Nano-Models

🌌 Phase 2: Interstellar Conquest

Hardware Hacking

🔥 Phase 3: Open Source Manifesto

GitHub Repo

The NeuroFlux project includes an interactive web application to showcase some of the AI models and concepts explored. This gallery provides a user-friendly interface to explore, understand, and interact with these models.

Setup and Running the Web Application

  1. Ensure you have Python installed (Python 3.8+ recommended).
  2. Clone this repository (if you haven’t already).
  3. Navigate to the project root directory.
  4. Install the required Python packages from the main requirements.txt file. It’s recommended to do this within a virtual environment:
    python -m venv venv
    source venv/bin/activate  # On Linux/macOS
    # venv\Scripts\activate    # On Windows
    pip install -r requirements.txt
    

    Note: This step requires sufficient disk space, as libraries like PyTorch, Transformers, and TensorFlow can be large. The requirements now include tensorflow-lite for the EfficientNet demo, and pyncnn together with opencv-python-headless for the NCNN object detection demo.

  5. (For LLM Demo Only) Set up Local LLM Server & Configure Endpoint:
    • The “Interactive LLM Demo” requires you to run a separate local LLM server (such as oobabooga’s text-generation-webui, or Ollama with its OpenAI-compatible API).
    • Ensure your chosen LLM server is running and exposes an OpenAI-compatible API endpoint (e.g., http://localhost:5000/v1 or http://localhost:11434/v1).
    • Open the webapp/llm_config.py file.
    • Modify the LLM_API_ENDPOINT variable to match the URL of your local LLM server. A smoke-test sketch for this endpoint follows this list.
  6. (For EfficientNet-Lite4 Demo) Download TFLite Model:
    • The EfficientNet-Lite4 demo requires a TFLite model file. Because direct-link downloads can fail, you may need to manually download efficientnet_lite4_classification_2.tflite and place it in the webapp/models/tflite/ directory. If the automatic download fails, a placeholder file (efficientnet_lite4_classification_2.tflite.PLEASE_DOWNLOAD_MANUALLY) is created by default. You can search for “EfficientNet-Lite4 classification tflite” on TensorFlow Hub to find a download source. An inference sketch using this model follows this list.
  7. (For NanoDet-Plus NCNN Demo) Set up NCNN and Model Files:
    • The NanoDet-Plus demo uses pyncnn, a Python wrapper for the NCNN library. If pip install pyncnn fails, you may need to build and install the NCNN C++ library on your system first; refer to the official NCNN and pyncnn documentation for details.
    • The application attempts to download the NCNN model files (nanodet-plus-m_320.param and nanodet-plus-m_320.bin) automatically. If this fails, or if you wish to use a different NCNN model, place your .param and .bin files in the webapp/models/ncnn/ directory. Placeholder files (*.PLEASE_DOWNLOAD_MANUALLY) may be created if the automatic download fails. A minimal pyncnn sketch follows this list.
  8. Navigate to the web application directory:
    cd webapp
    
  9. Run the Flask application:
    python app.py
    

    The application will start, and by default, it should be accessible at http://127.0.0.1:5000 in your web browser.

  10. Open your web browser and go to http://127.0.0.1:5000.
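
Once LLM_API_ENDPOINT is set (step 5), you can smoke-test your local server independently of the webapp. The sketch below assumes a standard OpenAI-compatible chat completions route; the model name is a placeholder that many local servers ignore.

    # Hypothetical smoke test for the endpoint configured in webapp/llm_config.py.
    import requests

    LLM_API_ENDPOINT = "http://localhost:5000/v1"  # must match llm_config.py

    response = requests.post(
        f"{LLM_API_ENDPOINT}/chat/completions",
        json={
            "model": "local-model",  # placeholder; many local servers ignore it
            "messages": [{"role": "user", "content": "Hello from NeuroFlux!"}],
            "max_tokens": 32,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])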
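
To verify that the EfficientNet-Lite4 model file from step 6 loads and runs, a quick standalone check with the TFLite interpreter can help. This sketch uses tf.lite.Interpreter from the tensorflow package (tflite_runtime exposes the same Interpreter API on minimal installs) and a zero-filled tensor in place of a real preprocessed image.

    # Hypothetical TFLite sanity check (run from the project root).
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(
        model_path="webapp/models/tflite/efficientnet_lite4_classification_2.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # A zero tensor with the model's expected shape/dtype stands in for an image.
    dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    scores = interpreter.get_tensor(output_details[0]["index"])
    print("Top class index:", int(np.argmax(scores)))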
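
Similarly, the NanoDet-Plus files from step 7 can be checked with a minimal pyncnn run. The input and output blob names below ("data", "output") are assumptions; check the .param file for the real names. Full NanoDet post-processing (box decoding, NMS) is omitted here.

    # Hypothetical pyncnn sanity check (blob names are assumptions).
    import cv2
    import ncnn

    net = ncnn.Net()
    net.load_param("webapp/models/ncnn/nanodet-plus-m_320.param")
    net.load_model("webapp/models/ncnn/nanodet-plus-m_320.bin")

    img = cv2.imread("test.jpg")  # any BGR test image
    mat = ncnn.Mat.from_pixels_resize(
        img, ncnn.Mat.PixelType.PIXEL_BGR, img.shape[1], img.shape[0], 320, 320)

    ex = net.create_extractor()
    ex.input("data", mat)
    ret, out = ex.extract("output")
    print("raw output dims:", out.w, out.h, out.c)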

Manual Testing for the Web Application

To ensure the web application is functioning correctly, perform the following manual checks:

  • The homepage loads at http://127.0.0.1:5000 and lists the available demos.
  • The Interactive LLM Demo returns a completion when your local LLM server is running and LLM_API_ENDPOINT points to it.
  • The EfficientNet-Lite4 demo returns a classification result (requires the TFLite model file from step 6).
  • The NanoDet-Plus NCNN demo returns detection results (requires the NCNN .param and .bin files from step 7).

On-Device Inference (WebAssembly)

The NeuroFlux project explores the possibility of on-device inference by compiling models to WebAssembly (Wasm). A script for this purpose, wasm/tinybert_compile.py, is included to demonstrate the compilation of TinyBERT to Wasm using ONNX and Apache TVM.
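
The pipeline starts by exporting TinyBERT to ONNX. Below is a minimal sketch of that step, assuming the huawei-noah/TinyBERT_General_4L_312D checkpoint from Hugging Face; the actual wasm/tinybert_compile.py may use different names, paths, or export settings.

    # Hypothetical ONNX export step (checkpoint and paths are assumptions).
    import torch
    from transformers import AutoModel, AutoTokenizer

    model_name = "huawei-noah/TinyBERT_General_4L_312D"  # assumed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, return_dict=False)
    model.eval()

    sample = tokenizer("NeuroFlux on-device inference", return_tensors="pt")
    torch.onnx.export(
        model,
        (sample["input_ids"], sample["attention_mask"]),
        "tinybert.onnx",
        input_names=["input_ids", "attention_mask"],
        output_names=["last_hidden_state"],
        dynamic_axes={
            "input_ids": {0: "batch", 1: "sequence"},
            "attention_mask": {0: "batch", 1: "sequence"},
        },
        opset_version=14,
    )
    # The resulting tinybert.onnx can then be loaded into Apache TVM
    # (tvm.relay.frontend.from_onnx) and built for a wasm32 target.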

Current Status:

🛠️ Installation

# 1. Clone the repo
$ git clone https://github.com/neuroflux/neuroflux.git
$ cd neuroflux

# 2. Create a virtual environment
$ python -m venv venv
$ source venv/bin/activate  # Linux/macOS
# venv\Scripts\activate     # Windows

# 3. Install dependencies
$ pip install -r requirements.txt

# 4. Compile TinyBERT
$ python wasm/tinybert_compile.py

# 5. Run the tests
$ pytest tests/

# 6. Run with Docker (optional)
$ docker build -t neuroflux .
$ docker run -it --rm neuroflux:latest

📝 Documentation

🤝 Community

Join us at IoT meetups wearing our “My AI fits in 100 KB” T-shirts!

📄 License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.

🙏 Acknowledgements

Thanks to all contributors and to the open source community for their support!

Maintained by @kabir308