Zero Health - Deliberately Vulnerable Healthcare Portal with AI Assistant

Zero trust. Zero security. Total exposure. A deliberately vulnerable health tech platform with an AI chatbot for learning about application security and ethical hacking, covering vulnerabilities from the OWASP Top 10 for Web, API, and AI/LLM security. Highly vulnerable; never use in production.

โš ๏ธ WARNING: This is a deliberately vulnerable application for educational purposes only. Do not use in production or with real data.


About

Zero Health is a deliberately vulnerable healthcare portal designed to demonstrate critical security vulnerabilities in medical technology. Healthcare systems are prime targets for cyberattacks due to their valuable personal health information (PHI), financial data, and critical infrastructure. A single breach can compromise patient privacy, disrupt life-saving treatments, and violate regulations like HIPAA.

This educational platform demonstrates:

**Why Healthcare Security Matters** - Medical devices, patient portals, and health records systems require the highest security standards. Vulnerabilities can lead to ransomware attacks that shut down hospitals, identity theft from exposed patient data, or even manipulation of medical devices. This application helps developers understand these risks before building real healthcare systems.

Prerequisites

Docker and Docker Compose (every component, including the local LLM, runs in containers).
Quick Setup

I made a Demo Video explaining everything.

1. Clone Repository

```shell
git clone https://github.com/aligorithm/zero-health.git
cd zero-health
```

2. AI Provider Configuration

Zero Health includes a containerized local LLM (Ollama) by default for complete offline operation. You can also use cloud AI providers.

Option A: Local AI (Default) - Ollama

Uses local Ollama container - no API key needed

```shell
docker-compose up --build
```

Option B: Cloud AI (OpenAI/Groq/etc.)

Set provider to use cloud AI instead of local Ollama

```shell
export LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-your-key-here
docker-compose up --build
```

Option C: Custom Ollama Port

Change Ollama port if you have a conflicting service

```shell
export OLLAMA_PORT=11436
docker-compose up --build
```

Option D: Disable Local AI Entirely

To disable the Ollama service completely (if you only want to use cloud AI):

  1. Edit docker-compose.yml
  2. Comment out the entire ollama: service block
  3. Comment out the ollama: dependency in the server: section
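Assuming a compose layout roughly like the following (service and key names are illustrative; match them against the actual docker-compose.yml), the result of steps 2 and 3 looks like:

```yaml
services:
  # ollama:                                  # step 2: whole local-LLM service commented out
  #   image: ollama/ollama
  #   ports:
  #     - "${OLLAMA_PORT:-11435}:11434"
  server:
    build: ./server
    depends_on:
      - db
      # - ollama                             # step 3: dependency on the local LLM removed too
```

With the dependency removed, `docker-compose up` will no longer try to start (or wait for) the Ollama container.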

Note: You may need to run docker-compose with sudo, which can prevent environment variables from being passed through from your shell. If the chatbot is not working, try passing the key inline:

```shell
OPENAI_API_KEY=$OPENAI_API_KEY docker-compose up --build
```
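The root cause can be demonstrated without Docker at all: `sudo` (with its default `env_reset` policy) launches the child process with a scrubbed environment, so plain `export`ed variables are dropped, while an inline `VAR=value` prefix places the variable directly into the child's environment. A minimal sketch (`DEMO_KEY` is a made-up name):

```shell
# Export the key in the parent shell (DEMO_KEY is an illustrative name):
export DEMO_KEY=sk-demo

# `env -i` starts a child with a scrubbed environment, much like sudo's
# default env_reset policy -- the exported variable does not survive:
env -i sh -c 'echo "scrubbed: [$DEMO_KEY]"'                    # prints: scrubbed: []

# An explicit inline assignment re-injects the variable, the same trick as
# OPENAI_API_KEY=$OPENAI_API_KEY docker-compose up --build
env -i DEMO_KEY=$DEMO_KEY sh -c 'echo "inline: [$DEMO_KEY]"'   # prints: inline: [sk-demo]
```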

3. Access Application

Test Accounts

All passwords: password123

Staff Accounts:

Patient Accounts:

Key Features

🤖 AI-Powered Role-Based Chatbot

👥 Role-Based Access Control

๐Ÿฅ Healthcare Portal Features

Major Vulnerabilities (Educational)

Web Security Issues

AI-Specific Vulnerabilities

Healthcare-Specific Risks

Database Reset

To reset the entire database and get fresh sample data, remove the containers along with their volumes (the `-v` flag deletes the named volumes, so PostgreSQL re-initializes on the next start):

```shell
docker-compose down -v
docker-compose up --build
```

Sample data is automatically created on first run, including realistic medical records, prescriptions, lab results, and user accounts.

API Provider Support

Works with any OpenAI-compatible API:

OpenAI (default)

```shell
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-4o-mini"
```

Groq (fast inference)

```shell
export OPENAI_BASE_URL="https://api.groq.com/openai/v1"
export OPENAI_MODEL="llama3-8b-8192"
```

Local LM Studio

```shell
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_MODEL="your-local-model"
```

Local Ollama

```shell
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="llama3"
```

Learning Objectives

By studying this application, you will learn:

  1. Healthcare Security Fundamentals - HIPAA compliance, PHI protection
  2. Web Application Security - OWASP Top 10 vulnerabilities in medical context
  3. AI Security - Prompt injection, LLM security, AI-generated code risks
  4. Database Security - SQL injection, access controls, audit logging
  5. API Security - Authentication bypass, IDOR, mass assignment
  6. File Security - Upload validation, path traversal, malware risks
  7. Incident Response - Identifying and containing healthcare breaches

Coming Soon

📱 Mobile App

React Native application with mobile-specific vulnerabilities (insecure storage, certificate pinning bypass)

🔥 HARD MODE

Advanced multi-step attack chains and modern vulnerability scenarios

🧪 Advanced AI Vulnerabilities

Contributing

Contributions welcome! Please maintain educational vulnerability aspects and document new security issues.

Community & Support

💬 GitHub Discussions

Join our community to share your learnings, discuss exploits, and get help:

📚 Learning Resources

Environment Variables

AI Provider Configuration

Choose AI provider (default: ollama for offline usage)

```shell
LLM_PROVIDER=ollama  # Options: 'openai' or 'ollama'
```

OpenAI/Cloud AI Settings (only needed if LLM_PROVIDER=openai)

```shell
OPENAI_API_KEY=your-api-key-here           # Required for cloud AI
OPENAI_MODEL=gpt-4o-mini                   # Optional: model to use
OPENAI_BASE_URL=https://api.openai.com/v1  # Optional: API endpoint
```

Ollama/Local AI Settings (only needed if LLM_PROVIDER=ollama)

```shell
OLLAMA_PORT=11435         # Optional: external port (default: 11435)
OLLAMA_MODEL=llama3.2:3b  # Optional: model to use
```

Database (Auto-configured in Docker)

```shell
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=zero_health
```

Full Example Configurations

Local AI (Offline) - Default

No environment variables needed - just run:

```shell
docker-compose up --build
```

Cloud AI (OpenAI)

```shell
export LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-your-key-here
docker-compose up --build
```

Cloud AI (Groq)

```shell
export LLM_PROVIDER=openai
export OPENAI_API_KEY=your-groq-key
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama3-8b-8192
docker-compose up --build
```

Custom Ollama Port

```shell
export OLLAMA_PORT=11436
docker-compose up --build
```
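Using a .env File

As an alternative to exporting variables on every run, docker-compose also reads a `.env` file placed next to `docker-compose.yml` and substitutes those values into the compose file. A sketch for the cloud-AI setup, assuming docker-compose.yml references these variables (the key is a placeholder):

```
# .env -- read automatically by docker-compose from the project root
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
```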

License

MIT License

Disclaimer

โš ๏ธ Intentionally vulnerable application for educational purposes only. Contains deliberate security flaws including AI vulnerabilities. Do not use in production or with real data. Authors not responsible for misuse.