
Teaching AI Like Teaching a Child: Biologically-Grounded Medical Image Analysis

  • Writer: Mijail Serruya
  • Oct 11
  • 19 min read

How hybrid biological-artificial intelligence systems could transform medical AI by learning the way humans do


Part I: The Vision (TL;DR)


Current medical AI learns by pattern-matching across millions of labeled images. But human pathologists and radiologists learn differently—they build internal 3D models of tissue architecture and body anatomy, then recognize disease as deviations from those models. This embodied understanding, refined through years of training, gives human experts robustness that current AI lacks.


The breakthrough proposal: Train AI systems alongside living biological neural tissue—either in animals with implanted brain-computer interfaces or in reconstructed "brains" made of thousands of linked organoids and assembloids. The biological substrate provides:

  • Innate priors evolved over millions of years (object recognition, spatial reasoning, anomaly detection)

  • Developmental scaffolding for building robust internal models

  • Natural learning mechanisms that work with limited training examples

  • Interpretable representations that can be interrogated and exported

As medical images flow through this hybrid system, three learning processes occur simultaneously:

  1. Biological tissue uses evolved visual processing to build representations

  2. AI foundation models learn to predict biological responses and expert feedback

  3. Human experts provide mentorship through the training progression

The result: AI systems that don't just recognize patterns but understand anatomy, think about tissue architecture in 3D, and reason about pathology the way human experts do.


The application: Start with normal histology and anatomy (biological systems respond to structure). Progress through common pathologies (learning deviations). Build to rare diseases (where human-like reasoning is essential). Throughout, maintain exportable 3D anatomical models that both AI and humans can interrogate.

This approach bridges two worlds: the Serruya-Friedman vision of biological-silicon hybrid intelligence, and the urgent need for medical AI that reasons rather than merely recognizes.


Part II: Learning Alongside Biology - Two Architectures for Hybrid Intelligence


The Central Insight: Biology as Living Teacher


For decades, AI researchers have debated whether to hand-code knowledge or let data speak. Deep learning's success suggested pure data-driven approaches win. But current AI's brittleness—adversarial examples, poor generalization, lack of common sense—reveals what's missing: the developmental scaffolding and innate priors that biological brains provide.


Rather than reverse-engineering these priors (which we don't fully understand even in fruit flies), or searching vast architecture spaces hoping to stumble upon them, we can use biology directly—as a living teacher and computational scaffold.


Architecture 1: The Implanted Living System

The Setup:

  • Living animal (mouse, rat, monkey) or consenting human with neurological condition

  • High-density brain-computer interface at key cortical hubs

  • Bidirectional communication: recording neural activity, delivering stimulation

  • Real-time linkage to AI foundation models

  • Visual/sensory stimuli presented to both biological and artificial systems


How It Works:

Recording Loop:

Medical Image → Animal/Human Views It
    ↓
Visual Cortex Processes (using evolved priors)
    ↓
BCI Records Neural Activity
    ↓
AI Learns to Predict Neural Responses
    ↓
AI Learns Representations Grounded in Biology

Stimulation Loop:

AI Makes Prediction/Classification
    ↓
If Incorrect: Expert Provides Feedback
    ↓
Feedback Generates Reward/Punishment Stimulation
    ↓
Biological System Adjusts Processing
    ↓
AI Updates to Match New Biological Response
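
A minimal sketch of the recording loop in code, under the simplifying assumption that the recorded neural response is a fixed nonlinear function of the image and the AI component is a simple linear readout. The BCI driver, foundation model, and stimulation side are deliberately left out; nothing here is a real interface.

import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons = 64, 16

# Hypothetical stand-in: the tissue's response to an image is a fixed
# nonlinear map that the AI component cannot see directly.
W_bio = rng.normal(size=(n_neurons, n_pixels))

def biological_response(image):
    return np.tanh(W_bio @ image)            # what the BCI would record

def recording_loop(images, lr=0.01):
    """The AI (a linear readout) learns to predict recorded neural activity."""
    W_ai = np.zeros((n_neurons, n_pixels))
    errors = []
    for image in images:
        activity = biological_response(image)    # record via BCI
        prediction = W_ai @ image                # AI predicts the response
        delta = activity - prediction            # prediction error drives learning
        W_ai += lr * np.outer(delta, image)      # ground the AI in biology
        errors.append(float(np.mean(delta ** 2)))
    return W_ai, errors

_, errs = recording_loop([rng.normal(size=n_pixels) for _ in range(500)])
print(f"prediction error: {errs[0]:.3f} -> {errs[-1]:.3f}")   # should decrease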

The Serruya Living Amplifier Innovation:

Building on the Serruya-Friedman NSF proposal and related blog posts, we can implement this through:

  • Living electrodes implanted at visual processing hubs (V1, V2, V4, inferotemporal cortex)

  • Subgaleal electrode grids picking up amplified signals from cortical surface aggregates

  • Ultrasonic transducers providing high-bandwidth wireless communication

  • Peritoneal neural constructs extending processing capacity beyond the skull

For humans with neurological conditions (quadriplegia, locked-in syndrome), this provides:

  • Therapeutic benefit through restored function

  • Meaningful contribution to AI development

  • Partnership in shaping emerging intelligence

Architecture 2: The Reconstructed Brain

The Setup:

  • Thousands of brain organoids, assembloids, and organotypic slices

  • Each grown on multi-electrode arrays (64-256 channels)

  • Virtual White Matter platform linking them in real-time

  • Some specialized for visual processing, others for memory, decision-making

  • External ultrasonic/optical/electrical interfaces for stimulation

How It Works:

Distributed Processing:

Medical Image → Converted to Stimulation Pattern
    ↓
"Retinal" Organoids Process Initial Features
    ↓
Virtual White Matter Routes Signals
    ↓
"Visual Cortex" Organoids Build Representations
    ↓
"Temporal" Organoids Provide Memory Context
    ↓
"Parietal" Organoids Integrate Spatial Relationships
    ↓
"Frontal" Organoids Make Decisions
    ↓
AI Records All Internal States
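
A toy sketch of how the Virtual White Matter might route one stimulation pattern through specialized modules. Every "organoid" here is simulated by a random projection purely for illustration; the module names mirror the diagram above, and nothing reflects real recordings.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in for one organoid on a multi-electrode array: vector in, rectified
# response out. Real modules would be recorded and stimulated, not simulated.
def make_module(n_in, n_out, seed):
    W = np.random.default_rng(seed).normal(size=(n_out, n_in)) / np.sqrt(n_in)
    return lambda x: np.maximum(0.0, W @ x)

modules = {
    "retinal":  make_module(256, 128, seed=10),
    "visual":   make_module(128, 64, seed=11),
    "temporal": make_module(64, 64, seed=12),
    "parietal": make_module(64, 64, seed=13),
    "frontal":  make_module(128, 8, seed=14),   # integrates temporal + parietal
}

def virtual_white_matter(stimulation_pattern):
    """Route one image-derived pattern through the distributed modules and
    return every intermediate state so the AI can record them all."""
    s = {"retinal": modules["retinal"](stimulation_pattern)}
    s["visual"] = modules["visual"](s["retinal"])
    s["temporal"] = modules["temporal"](s["visual"])
    s["parietal"] = modules["parietal"](s["visual"])
    s["frontal"] = modules["frontal"](np.concatenate([s["temporal"], s["parietal"]]))
    return s

states = virtual_white_matter(rng.normal(size=256))
print({name: vec.shape for name, vec in states.items()})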

Key Advantages:

  • Scalability: Add thousands of specimens without surgical risk

  • Fault tolerance: Individual organoids can fail without system collapse

  • Specialization: Different tissue types optimized for different functions

  • Experimentation: Test configurations impossible in living brains

  • Ethics: No animal subjects; human tissue from consenting donors



The capabilities we have already proposed map directly onto this architecture:

✓ Virtual White Matter linking distributed specimens

✓ Real-time bidirectional communication (<10ms latency)

✓ Closed-loop reinforcement learning

✓ Neuromorphic chips learning biological response functions

✓ Gaming platform for citizen-science parameter optimization


The Biological Priors That Make This Work

What innate capabilities does biological tissue bring?

Visual Processing:

  • Center-surround receptive fields (edge detection)

  • Orientation selectivity (line and contour detection)

  • Motion sensitivity (temporal pattern recognition)

  • Color opponency (contrast enhancement)

  • Hierarchical feature extraction (simple → complex cells)

Spatial Reasoning:

  • Place cells and grid cells (spatial mapping)

  • Object permanence (predictive models)

  • Size/shape constancy (invariant representations)

  • 3D structure from 2D projections (depth inference)

Learning Mechanisms:

  • One-shot learning (rapid acquisition from single examples)

  • Transfer learning (applying knowledge to novel situations)

  • Attention gating (focusing on relevant features)

  • Predictive coding (updating internal models)

  • Curiosity-driven exploration (active learning)

Anomaly Detection:

  • Novelty response (detecting deviations from expectations)

  • Threat assessment (prioritizing unexpected patterns)

  • Pattern completion (filling in missing information)

  • Error signaling (prediction error drives learning)

These aren't programmed—they're evolved, tested across millions of years. By coupling AI to biological substrates, we inherit this optimization for free.
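
As one concrete illustration, the center-surround receptive field can be written down in a few lines as a difference of Gaussians that responds strongly at tissue boundaries and hardly at all over uniform regions. The synthetic image and kernel sizes below are arbitrary choices for the sketch, not measured retinal parameters.

import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def center_surround(size=9, sigma_center=1.0, sigma_surround=3.0):
    """Difference of Gaussians: excitatory center minus inhibitory surround."""
    return gaussian_kernel(size, sigma_center) - gaussian_kernel(size, sigma_surround)

def convolve2d(image, kernel):
    """Minimal 'valid' 2-D convolution, to avoid any SciPy dependency."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "tissue boundary": a dark region next to a bright region.
image = np.zeros((32, 32))
image[:, 16:] = 1.0

response = convolve2d(image, center_surround())
print(np.unravel_index(np.abs(response).argmax(), response.shape))  # strongest near the edge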

Building and Exporting Robust 3D Models

The Critical Innovation: Visible Internal Representations

Unlike deep learning's black boxes, this hybrid approach makes internal models interrogable:

During Training:

Medical Image Presented
    ↓
Biological Tissue Processes
    ↓
Record: Which neurons activate?
    ↓
Map: Spatial patterns of activity
    ↓
Track: How representations evolve
    ↓
Export: 3D visualization of learned model

Mountcastle and Hawkins: Cortical Columns as Universal Processors

Vernon Mountcastle's discovery that neocortex uses repeated cortical columns, and Jeff Hawkins' elaboration that these columns build reference frames and predictive models, provide the theoretical foundation.

Each cortical column:

  • Builds a "map" of its input space

  • Predicts what should appear based on context

  • Signals errors when predictions fail

  • Updates its model through learning


For medical imaging, this means:


Normal Anatomy Learning:

Cortical Column in "Liver Module"
    ↓
Builds map of: hepatocyte cords, sinusoids, portal triads
    ↓
Learns relationships: spatial arrangement, size ratios, texture
    ↓
Creates predictive model: "what should I see here?"

Pathology Detection:

Image shows fibrosis
    ↓
Prediction error: cords disrupted, sinusoids collapsed
    ↓
Error signal propagates
    ↓
System recognizes: deviation from learned model
    ↓
Classification: cirrhosis
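
A minimal sketch of the column-as-predictor idea: store the feature statistics of normal tissue, then report per-feature prediction errors for a new field of view. The feature names and numbers below are hypothetical.

import numpy as np

class CorticalColumnModel:
    """Toy column: learns the statistics of its input features from normal
    examples, then signals prediction error for new inputs."""

    def __init__(self, feature_names):
        self.feature_names = feature_names
        self.mean = None
        self.std = None

    def learn_normal(self, normal_examples):
        X = np.asarray(normal_examples, dtype=float)
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-6

    def prediction_error(self, features):
        """Per-feature z-scores: how far the observation sits from the model."""
        z = (np.asarray(features, dtype=float) - self.mean) / self.std
        return {name: round(float(abs(v)), 1) for name, v in zip(self.feature_names, z)}

# Hypothetical quantitative features of one liver field of view.
column = CorticalColumnModel(["cord_regularity", "sinusoid_openness", "fibrosis_fraction"])
column.learn_normal([[0.90, 0.80, 0.05], [0.85, 0.75, 0.04], [0.92, 0.82, 0.06]])

# Cirrhotic field: cords disrupted, sinusoids collapsed, fibrosis high.
print(column.prediction_error([0.4, 0.3, 0.45]))   # large errors flag the deviation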

Mumford's Hierarchical Bayesian Framework

David Mumford and Tai Sing Lee's work on thalamo-cortical loops provides the computational architecture:

  • Bottom-up: Sensory data flows up cortical hierarchy

  • Top-down: Predictions flow down from higher areas

  • Thalamic loop: Mediates prediction errors

  • Learning: Minimizes prediction error over time

For medical AI, this architecture means:

Level 1 (V1-equivalent):

  • Learns: edges, textures, basic shapes in tissue

  • Predicts: "cell boundaries should look like X"

Level 2 (V2-V4 equivalent):

  • Learns: cellular architecture, tissue organization

  • Predicts: "hepatocytes should arrange in cords"

Level 3 (IT/Parietal equivalent):

  • Learns: anatomical structures, spatial relationships

  • Predicts: "portal triad should contain these three structures"

Level 4 (Frontal equivalent):

  • Learns: organ-level organization, disease patterns

  • Predicts: "cirrhosis typically shows these features"


The Shared Editable Model: A Revolutionary Teaching Interface

Here's where the bio-AI hybrid system provides something genuinely unprecedented: Because biological tissue builds explicit spatial representations (unlike distributed deep learning weights), the system can render its internal models as interactive 3D environments that both AI and humans can navigate and modify together.


How It Works:

Initial Teaching Phase:

Human Instructor Creates Base Model
    ↓
Opens 3D modeling interface
    ↓
Imports: Anatomical atlas, histology reconstructions
    ↓
Annotates: "This is a hepatocyte cord"
    ↓
                "This is a sinusoid"
    ↓
                "This is a portal triad"
    ↓
Bio-AI System Observes
    ↓
Neural tissue: Processes spatial relationships
    ↓
AI component: Records instructor's structural annotations
    ↓
Both: Build corresponding internal representation

The Shared Workspace:


Imagine a collaborative 3D environment where:


Instructor can:

  • Fly through virtual liver tissue at any scale

  • Highlight structures with color coding

  • Draw boundaries around cell types

  • Animate processes (blood flow, bile secretion)

  • Rotate, slice, and reassemble organs

  • Show disease progression over time

  • Compare normal vs. pathological side-by-side


Bio-AI System simultaneously:

  • Watches instructor's manipulations

  • Records which structures are emphasized

  • Learns spatial relationships being taught

  • Updates its internal neural representations

  • Begins predicting what instructor will highlight next


The Revolutionary Part: Bi-directional Editing

Unlike teaching human students, the bio-AI hybrid can externalize its learning back into the shared model:


After Learning Phase:

Bio-AI Reviews New Case (without instructor)
    ↓
Recognizes: Pattern not in training set
    ↓
Updates: Internal 3D representation
    ↓
Exports: Modifications to shared model
    ↓
Instructor Opens Model Next Day
    ↓
Sees: Bio-AI has annotated new structures
    ↓
System Explains: "Found variant portal triad architecture"
    ↓
                "Added annotation showing 4 vessels instead of 3"
    ↓
                "Frequency: observed in 3% of cases"
    ↓
Instructor Can:
    ↓
    - Confirm: "Yes, that's a known variant"
    ↓
    - Correct: "No, that's an artifact"
    ↓
    - Expand: "Interesting! Let's explore this..."

What Makes This Unique:


For Human Learners (Traditional):

  • See images and diagrams

  • Build mental models internally

  • Cannot easily share their mental models

  • Require graphic design skills to visualize understanding

  • Time-consuming to create anatomical illustrations


For Bio-AI Hybrid:

  • Builds explicit 3D spatial representations (from neural tissue)

  • Automatically renders internal models as explorable environments

  • Updates models in real-time as learning occurs

  • Shares models bidirectionally with instructors

  • No graphic design skills needed—direct brain-to-visualization


The Technical Implementation:

class SharedEditableModel:
    def __init__(self):
        self.spatial_graph = {}    # 3D structure relationships
        self.annotations = {}      # instructor labels
        self.neural_encoding = {}  # bio-AI internal states
        self.edit_history = []     # track all modifications

    def record_edit(self, author, structure, payload):
        """Append one modification to the shared audit trail."""
        self.edit_history.append((author, structure, payload))

    def instructor_edit(self, structure, annotation):
        """Human instructor modifies the model; the bio-AI observes the change."""
        self.annotations[structure] = annotation
        self.bio_ai_observes(structure, annotation)
        self.record_edit('instructor', structure, annotation)

    def bio_ai_update(self, new_structure):
        """Bio-AI discovers and adds new information."""
        # Extract spatial representation from neural tissue (hardware-dependent hook)
        neural_pattern = self.read_biological_state()

        # Convert neural activity to a 3D geometric structure
        geometry = self.neural_to_spatial(neural_pattern)

        # Add to shared model with confidence scores, pending expert review
        self.spatial_graph[new_structure] = {
            'geometry': geometry,
            'confidence': self.calculate_confidence(new_structure),
            'supporting_cases': self.get_examples(new_structure),
            'awaiting_review': True,
        }
        self.record_edit('bio_ai', new_structure, geometry)

        # Flag for instructor review
        self.notify_instructor(new_structure)

    def collaborative_session(self, instructor_actions, bio_ai_discoveries):
        """Real-time co-editing: interleave instructor edits and bio-AI proposals."""
        for structure, annotation in instructor_actions:
            # Instructor navigates/edits; bio-AI watches and learns
            self.instructor_edit(structure, annotation)

        for structure in bio_ai_discoveries:
            # Bio-AI suggests additions; instructor reviews and confirms/corrects
            self.bio_ai_update(structure)
            self.request_instructor_feedback(structure)

    # The hooks below depend on the recording hardware and the renderer;
    # they are placeholders in this sketch.
    def read_biological_state(self): ...
    def neural_to_spatial(self, neural_pattern): ...
    def calculate_confidence(self, structure): ...
    def get_examples(self, structure): ...
    def notify_instructor(self, structure): ...
    def bio_ai_observes(self, structure, annotation): ...
    def request_instructor_feedback(self, structure): ...
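
A minimal usage sketch, assuming the placeholder hooks above are eventually backed by real recording hardware and a renderer:

model = SharedEditableModel()
model.instructor_edit('hepatocyte_cord', 'radiating plates of hepatocytes')
model.bio_ai_update('variant_portal_triad')    # lands in spatial_graph, awaiting review
print(model.edit_history)
# [('instructor', 'hepatocyte_cord', 'radiating plates of hepatocytes'),
#  ('bio_ai', 'variant_portal_triad', None)]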

Example Use Cases:

Use Case 1: Normal Anatomy

Week 1: Liver Architecture
    ↓
Instructor: Creates 3D liver model
    - Draws hexagonal lobules
    - Annotates portal triads at corners
    - Shows central veins in centers
    - Animates blood flow direction
    ↓
Bio-AI: Observes and learns
    - Neural tissue: Builds spatial map
    - Predicts: Blood flows periphery → center
    - Encodes: Hexagonal symmetry
    ↓
Week 2: Bio-AI Encounters New Slides
    - Processes 1000 liver sections
    - Discovers: Some lobules are pentagonal
    - Updates: Model to show variation
    ↓
Model Now Shows:
    - Instructor's original hexagonal template
    - Bio-AI's added pentagonal variants
    - Statistical distribution (80% hex, 15% pent, 5% other)
    - Confidence scores for each observation

Use Case 2: Pathology Learning

Instructor: Shows cirrhosis case
    ↓
Opens shared model, modifies normal liver:
    - Adds: Fibrotic septa
    - Distorts: Lobular architecture
    - Shows: Regenerative nodules
    - Animates: Progressive destruction
    ↓
Bio-AI: Processes changes
    - Neural tissue: Recognizes "deviation from normal"
    - Compares: Current vs. stored normal model
    - Identifies: Specific differences
    - Generates: Prediction error signals
    ↓
Next Day: Bio-AI reviews new case
    - Recognizes: Similar pattern
    - Updates model: Adds this variant
    - Annotates: "Different fibrosis distribution"
    - Flags: For instructor review
    ↓
Instructor: Reviews Bio-AI's addition
    - Confirms: "Yes, this is biliary cirrhosis"
    - Expands: Adds clinical correlation
    - Links: To relevant biochemical markers

Use Case 3: Rare Disease Discovery

Month 6: Bio-AI encounters unusual case
    ↓
Processes tissue showing:
    - Architecture: Partially preserved
    - But: Strange inclusion bodies in hepatocytes
    - Pattern: Not in training set
    ↓
Bio-AI Creates New Annotation:
    - Adds: Inclusion bodies to 3D model
    - Shows: Their distribution pattern
    - Links: To 3 similar cases found in database
    - Suggests: "Possible metabolic storage disease"
    - Confidence: 45% (low - needs review)
    ↓
Instructor Logs In:
    - Sees: Bio-AI's proposed addition
    - Reviews: Supporting evidence
    - Confirms: "Excellent! This is α1-antitrypsin deficiency"
    - Expands: Adds PAS-D stain characteristics
    - Updates: Model confidence to 95%
    ↓
Model Now Contains:
    - Instructor's expert diagnosis
    - Bio-AI's spatial pattern detection
    - Combined: Better than either alone

The Gaming Platform Integration:

Players don't just interact with static images—they explore the shared 3D model:

Beginner Level:
    - Navigate: Pre-built anatomical models
    - Task: Find and label structures
    - Feedback: Bio-AI highlights correct regions
    - Learning: Visual-spatial anatomy

Intermediate Level:
    - Build: Tissue architecture from components
    - Task: Arrange cells in proper organization
    - Feedback: Bio-AI shows prediction errors
    - Learning: Structural relationships

Advanced Level:
    - Modify: Normal models to show pathology
    - Task: Create disease progression animations
    - Feedback: Bio-AI evaluates biological plausibility
    - Learning: Disease mechanisms

Expert Level:
    - Review: Bio-AI's novel discoveries
    - Task: Confirm or correct new annotations
    - Feedback: How well you match consensus
    - Learning: Advanced diagnostic reasoning

Why This Transforms Medical Education:


Traditional Approach:

  • Textbook: Static 2D diagrams

  • Lectures: Instructor draws on whiteboard

  • Labs: Look at fixed specimens

  • Learning: Slow, sequential, individual


Shared Model Approach:

  • Dynamic: 3D explorable environments

  • Interactive: Edit and manipulate in real-time

  • Collaborative: Multiple learners co-explore

  • Accelerated: See spatial relationships immediately

  • Persistent: Model accumulates knowledge over time


The Unique Advantage:


No human medical student can:

  • Instantly externalize their mental model as a 3D visualization

  • Update anatomical diagrams as they learn

  • Share their evolving understanding in explorable format

  • Automatically flag novel patterns for expert review

  • Create professional-quality medical illustrations without training


But the bio-AI hybrid does this automatically because:

  1. Neural tissue builds explicit spatial representations

  2. Those representations can be read via electrodes

  3. AI component converts neural activity to 3D geometry

  4. Rendering happens in real-time

  5. Models are continuously updated with new learning


The Collaborative Learning Loop:

Cycle 1: Instructor → Bio-AI
    Instructor: Teaches normal anatomy
    Bio-AI: Learns spatial structure
    Model: Populated with baseline knowledge

Cycle 2: Bio-AI → Instructor
    Bio-AI: Processes thousands of cases
    Discovers: Patterns, variations, anomalies
    Model: Auto-updated with discoveries
    Instructor: Reviews and validates

Cycle 3: Bio-AI ↔ Instructor ↔ Gaming Community
    Gaming Players: Explore and interact with models
    Bio-AI: Learns from human navigation patterns
    Instructor: Curates and refines content
    Model: Becomes comprehensive educational resource

Cycle 4: Model → Medical Community
    Published: Open-access anatomical atlas
    Content: Combination of expert knowledge + bio-AI discoveries
    Interactive: Anyone can explore
    Updated: Continuously as bio-AI learns more

Exporting the Model:

The shared model becomes a living textbook:

# Pseudo-code for model export
shared_model = {
    'anatomy_map': {
        'liver_lobule': {
            'structure': spatial_arrangement_matrix,
            'normal_variants': instructor_annotations + bio_ai_discoveries,
            'pathology_patterns': disease_modifications,
            'edit_history': who_added_what_when,
            'confidence_scores': reliability_metrics,
            'supporting_evidence': linked_case_images
        }
    },
    'visualization': {
        'render_settings': optimal_viewing_parameters,
        'color_schemes': structure_highlighting,
        'animation_paths': guided_tours,
        'interaction_hints': how_to_explore
    },
    'pedagogy': {
        'learning_sequence': recommended_exploration_order,
        'common_mistakes': what_students_confuse,
        'key_insights': critical_understanding_points
    }
}

This exported model can:

  • Visualize what the system "sees" and "expects"

  • Show instructor's teaching and bio-AI's discoveries distinctly

  • Demonstrate prediction failures (explainability)

  • Transfer to pure AI systems

  • Guide human expert verification

  • Serve as interactive textbook

  • Enable VR/AR medical education

  • Update continuously with new knowledge


Expert Mentorship: The Human in the Loop


The Pedagogical Progression:

Like medical training, learning occurs in stages:


Year 1 (Normal Anatomy):

  • Present: Normal histology slides, labeled anatomical structures

  • Biological system: Builds spatial representations

  • AI system: Learns to predict biological responses

  • Expert feedback: Corrects misidentifications

  • Gaming platform: Citizen scientists help label structures

Year 2 (Common Pathology):

  • Present: Frequent diseases (inflammation, neoplasia, fibrosis)

  • Biological system: Recognizes deviations from normal model

  • AI system: Learns disease signatures

  • Expert feedback: Explains diagnostic reasoning

  • Gaming platform: Players learn pattern recognition

Year 3 (Rare Diseases & Edge Cases):

  • Present: Unusual presentations, diagnostic challenges

  • Biological system: Uses analogical reasoning

  • AI system: Learns to generalize from principles

  • Expert feedback: Demonstrates expert-level thinking

  • Gaming platform: Difficult cases become advanced challenges

The Reinforcement Loop:

Inspired by the Serruya-Friedman proposal:

For Correct Responses:

System makes diagnosis
    ↓
Expert confirms: Correct
    ↓
Reward stimulation to biological tissue
    ↓
Strengthen: Neural pathways that led to correct answer
    ↓
AI learns: Pattern that leads to expert confirmation

For Errors:

System makes diagnosis
    ↓
Expert corrects: Wrong, here's why
    ↓
Punishment/inhibition to biological tissue
    ↓
Weaken: Neural pathways that led to error
    ↓
Alternative stimulation: Guide toward correct pattern
    ↓
AI learns: How expert reasoning differs from initial response
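
A toy version of this reward/punishment rule, with the diagnoses, feature vectors, and learning rate all invented for illustration rather than taken from any real system:

import numpy as np

diagnoses = ["normal", "cirrhosis"]
W = np.zeros((3, 2))     # 3 hypothetical feature dimensions x 2 candidate diagnoses

def classify(features):
    return int(np.argmax(features @ W))

def expert_feedback(features, chosen, correct, lr=0.1):
    """Confirmation strengthens the pathway that produced the answer; correction
    weakens it and guides toward the expert's diagnosis ('alternative stimulation')."""
    if chosen == correct:
        W[:, chosen] += lr * features            # reward: strengthen
    else:
        W[:, chosen] -= lr * features            # punish/inhibit: weaken
        W[:, correct] += lr * features           # guide toward the correct pattern

# Hypothetical feature vectors: [cord_regularity, sinusoid_openness, fibrosis_fraction]
normal_case = np.array([0.9, 0.8, 0.05])
cirrhotic_case = np.array([0.3, 0.2, 0.60])

expert_feedback(normal_case, classify(normal_case), correct=0)        # expert confirms
expert_feedback(cirrhotic_case, classify(cirrhotic_case), correct=1)  # expert corrects
print(diagnoses[classify(cirrhotic_case)])                            # now "cirrhosis"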

Beyond Supervised Learning:

The gaming platform enables:

  • Socratic dialogue: Players ask questions about tissue features

  • Progressive hints: System reveals diagnostic clues gradually

  • Peer learning: Multiple players discuss same case

  • Error analysis: Review why misdiagnoses occurred

  • Metacognition: System learns how to learn, not just what

Neuromorphic Integration: From Biology to Silicon

The Serruya-Friedman Breakthrough:

Their iterative refinement approach maps perfectly to medical training:

Phase 1: Characterize Biological Response
    ↓
Show liver slide to biological system
    ↓
Record: Neural activity patterns
    ↓
Map: Adaptive response function R(t) = G[I(t) | H(t), Θ(t)]

Phase 2: Instantiate in Neuromorphic Hardware
    ↓
Design: STT-MTJ circuit that replicates biological response
    ↓
Test: Do both systems respond similarly?
    ↓
Divergence: Where do responses differ?

Phase 3: Iterative Convergence
    ↓
Present novel slide to both systems
    ↓
Biological responds: Pattern A
    ↓
Neuromorphic responds: Pattern A'
    ↓
If |A - A'| > threshold: Update neuromorphic design
    ↓
Verify: Prior responses still accurate
    ↓
Track: As biological system learns, update silicon
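
A numeric caricature of this convergence loop, in which both the biological response function G and the neuromorphic parameters Θ are stood in for by simple matrices. A real implementation would compare recorded neural activity to STT-MTJ circuit outputs; here everything is simulated.

import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_outputs = 32, 8

# Static stand-in for the biological response R(t) = G[I(t) | H(t), Θ(t)]
# (no history term), and a neuromorphic model with tunable parameters theta.
W_bio = rng.normal(size=(n_outputs, n_inputs))
theta = np.zeros((n_outputs, n_inputs))

def biological(stimulus):
    return np.tanh(W_bio @ stimulus)

def neuromorphic(stimulus):
    return np.tanh(theta @ stimulus)

def converge(stimuli, threshold=0.05, lr=0.02, rounds=4000):
    """Present stimuli to both systems; whenever |A - A'| exceeds threshold,
    nudge the silicon parameters toward the biological response."""
    for _ in range(rounds):
        s = stimuli[rng.integers(len(stimuli))]
        a_bio, a_chip = biological(s), neuromorphic(s)
        if np.max(np.abs(a_bio - a_chip)) > threshold:
            theta[:] += lr * np.outer(a_bio - a_chip, s)
    # Verification pass: do earlier stimuli still produce matching responses?
    return max(np.max(np.abs(biological(s) - neuromorphic(s))) for s in stimuli)

stimuli = [rng.normal(size=n_inputs) for _ in range(20)]
print(f"worst remaining divergence: {converge(stimuli):.3f}")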

The Result:

  • Neuromorphic chips that think like pathologists

  • Energy-efficient inference (μW vs. kW for GPUs)

  • Deployable to edge devices (microscopes, portable scanners)

  • Maintain biological grounding (interpretable decisions)

  • Continue learning through life (like human experts do)


Part III: Application to Medical Imaging - Beyond "Let the Data Speak"


The Current Paradigm: Pure Pattern Recognition


How Medical AI Works Today:

Step 1: Gather millions of labeled images
Step 2: Train convolutional neural network
Step 3: Optimize: minimize classification error
Step 4: Deploy: system recognizes patterns

The Successes:

  • Diabetic retinopathy detection (94% sensitivity)

  • Metastatic breast cancer in lymph nodes (99% sensitivity)

  • Skin cancer classification (dermatologist-level)

  • Lung nodule detection (reduced false negatives)

The Failures:

  • Adversarial examples (minor pixel changes → misclassification)

  • Distribution shift (trained on one scanner, fails on another)

  • Rare diseases (no training examples available)

  • Novel presentations (can't generalize beyond training set)

  • Lack of reasoning ("this is cancer" but can't explain why)

The Alternative: Learning Internal Models

How Human Experts Learn:

Medical students don't memorize millions of images. They:

First: Study normal anatomy and histology

  • Learn: 3D structure of organs

  • Understand: How tissue should look microscopically

  • Build: Internal reference models

  • Practice: Recognizing normal variants

Second: Learn systematic pathology

  • Study: How diseases alter structure

  • Understand: Mechanisms of injury

  • Build: Models of pathological processes

  • Practice: Recognizing deviations from normal

Third: Clinical reasoning

  • Integrate: Patient context, clinical history

  • Generate: Differential diagnoses

  • Test: Hypotheses against evidence

  • Refine: Based on additional information

The Result:

A pathologist looking at liver tissue doesn't just match patterns—they reason:

"I see hepatocyte cords disrupted by fibrosis...
The sinusoids are collapsed...
Portal triads show inflammatory infiltrate...
The architecture suggests chronic injury...
Pattern consistent with cirrhosis...
Now, what's the etiology? Alcohol? Viral? Metabolic?"

The Hybrid Bio-AI Approach: Best of Both Worlds

Phase 1: Normal Anatomy Foundation

Month 1-6: Building the Internal Model

For Biological System:

Present: Normal liver slides
    ↓
Visual processing: Extracts structures
    ↓
Spatial mapping: Builds 3D representation
    ↓
Predictive model: "Liver looks like this"

For AI System:

Record: Biological responses
    ↓
Learn: Neural encoding of structures
    ↓
Predict: What biological system will see
    ↓
Build: Corresponding computational model

For Experts:

Label: Anatomical structures
    ↓
Provide: 3D reconstructions
    ↓
Teach: Spatial relationships
    ↓
Verify: System understands correctly

The Gaming Platform:

Players: Explore virtual liver tissue
    ↓
Tasks: Identify structures, trace blood vessels
    ↓
Feedback: Neural tissue responses guide players
    ↓
Data: Millions of interactions train system

Key Insight - Mountcastle's Columnar Organization:

Each "liver organoid module" in the reconstructed brain builds a map of hepatic architecture—just as your visual cortex has a topographic map of visual space. The organoid becomes a liver model.

Testing the Model:

Show: Liver at unusual angle
    ↓
Biological system: Recognizes structure anyway
    ↓
(Using 3D model, not memorized 2D patterns)
    ↓
AI learns: Rotation/scale invariance

Show: Liver from different species
    ↓
Biological system: Identifies homologous structures
    ↓
(Using general principles, not species-specific training)
    ↓
AI learns: Transferable representations

Phase 2: Pathology as Deviation

Month 7-18: Learning Disease

Critical Difference from Pure AI:

Standard CNN: "This image belongs to class 'cirrhosis'."

Hybrid System: "This image deviates from normal in these specific ways that indicate cirrhosis."

How It Works:

Present: Cirrhotic liver slide
    ↓
Biological system: Compares to normal model
    ↓
Error signals: Where predictions fail
    ↓
- Expected: Regular hepatocyte cords
    ↓
- Observed: Fibrotic nodules
    ↓
- Prediction error: HIGH
    ↓
Pattern recognized: Architectural distortion
    ↓
Classification: Cirrhosis

The Mumford-Lee Hierarchical Bayesian Framework:

Top-down prediction (before seeing pathology):

High-level: "This should be liver"
    ↓
Mid-level: "Should see lobular architecture"
    ↓
Low-level: "Should see cords, sinusoids, triads"

Bottom-up evidence (pathology present):

Low-level: "Seeing fibrous bands"
    ↓
Mid-level: "Architecture distorted"
    ↓
High-level: "Not normal liver"
    ↓
Prediction error propagates up hierarchy
    ↓
System updates: "This is pathological"

Why This Matters:

  • Explainability: Can show which predictions failed

  • Generalization: Applies to novel disease presentations

  • Few-shot learning: Doesn't need millions of examples

  • Biological plausibility: Matches how experts think


Phase 3: Expert-Level Reasoning

Month 19-36: Rare Diseases and Differential Diagnosis


The Challenge:

Rare diseases might have only dozens of documented cases. Pure pattern matching fails. Human reasoning succeeds.


How Biological Systems Handle This:

Analogical Reasoning:

New disease: Never seen before
    ↓
But shares features with known diseases
    ↓
Biological system: Activates partial patterns
    ↓
"Similar to X in feature A"
    ↓
"Similar to Y in feature B"
    ↓
"But distinct in feature C"
    ↓
Hypothesize: Novel entity or rare variant
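
One way to caricature the "shares features with known diseases" step is similarity search over stored feature vectors; the entities, features, and numbers below are invented for illustration.

import numpy as np

# Hypothetical stored representations of known entities; feature order is
# [inflammation, fibrosis, steatosis, inclusion_bodies, architecture_loss].
known = {
    "viral hepatitis":     np.array([0.9, 0.3, 0.1, 0.0, 0.2]),
    "alcoholic cirrhosis": np.array([0.4, 0.9, 0.6, 0.0, 0.8]),
    "storage disease":     np.array([0.1, 0.2, 0.1, 0.9, 0.3]),
}

def analogies(new_case, top_k=2):
    """Rank known entities by cosine similarity to a never-seen-before case."""
    sims = {name: float(new_case @ proto /
                        (np.linalg.norm(new_case) * np.linalg.norm(proto)))
            for name, proto in known.items()}
    return sorted(sims.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Unusual case: architecture largely preserved, prominent inclusion bodies.
new_case = np.array([0.2, 0.2, 0.1, 0.8, 0.2])
for name, sim in analogies(new_case):
    print(f"similar to {name}: {sim:.2f}")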

Mechanistic Understanding:

Biological system: Models disease processes
    ↓
"If viral hepatitis → expect lymphocyte infiltration"
    ↓
"If toxic injury → expect zone-specific necrosis"
    ↓
"If metabolic → expect storage material"
    ↓
Observed pattern → Infer mechanism → Suggest diagnosis

The Evolutionary Advantage:


Animals evolved to:

  • Recognize new predators (based on features of known threats)

  • Identify edible plants (based on family resemblance)

  • Navigate novel terrain (using spatial reasoning)

  • Learn from single exposures (one encounter with poison is enough)


These capabilities transfer to medical diagnosis when the visual cortex is entrained by medical training.


Ethological and Cross-Species Insights

What Animal Learning Teaches Us:

Corvids (crows, ravens):

  • Tool use requires 3D spatial model

  • One-trial learning from observation

  • Transfer learning to novel situations

  • Suggests importance of world models for flexible intelligence

Octopuses:

  • Distributed nervous system

  • Each arm has local processing

  • Central coordination creates coherent behavior

  • Parallels our distributed organoid approach

Honeybees:

  • Tiny brains (< 1 million neurons)

  • Complex spatial mapping

  • Symbolic communication (waggle dance)

  • Shows efficiency of biological computation

Rats:

  • Hippocampal place cells build spatial maps

  • Grid cells provide metric coordinate system

  • Rapid one-shot spatial learning

  • Direct parallel to medical image spatial reasoning


Key Principle:

Biological intelligence achieves sophisticated reasoning with remarkably few neurons because it builds explicit models and uses evolved priors. By coupling AI to biological substrates, we inherit this efficiency.


Contrast with Current Approaches

vs. Tenenbaum's Probabilistic Programs:

Tenenbaum: Builds Bayesian models of infant cognition, tries to implement in computational frameworks

Our Approach: Uses actual biological tissue doing actual development—no need to formalize what we don't fully understand

Advantage: Inherits evolved priors directly rather than approximating them

vs. LeCun's World Models:

LeCun: Argues AI should build predictive models of the world like babies do

Our Approach: Couples AI to biological systems that already build such models

Advantage: Starts with working world models rather than learning from scratch

vs. DeepMind's Intuitive Physics:

DeepMind: Trains models to show "surprise" when physics violated

Our Approach: Biological systems innately have intuitive physics; AI learns from observing biological surprise responses

Advantage: No need to define what "surprise" means—biology shows us

vs. Marcus's Hybrid Architectures:

Marcus: Argues need to combine neural networks with symbolic reasoning

Our Approach: Biological cortex already implements symbol-like representations; AI learns both the representations and the reasoning

Advantage: Discovers the right hybrid architecture through observation rather than design

vs. Developmental Robotics:

DevRob: Builds robots that learn through staged development

Our Approach: Uses biological development directly, not robotic approximation

Advantage: True developmental trajectories rather than engineered imitations

The Complete Training Pipeline

Stage 1: Foundational (Weeks 1-8)

Week 1-2: Basic Tissue Types
- Biological: Learns epithelium, connective tissue, muscle, nerve
- AI: Predicts biological responses to tissue categories
- Expert: Labels and explains tissue classification
- Gaming: Players identify tissue types in varied contexts
- Export: Basic tissue recognition models

Week 3-4: Organ Systems Introduction  
- Biological: Builds models of major organ architecture
- AI: Learns spatial relationships between structures
- Expert: Provides 3D anatomical context
- Gaming: Virtual dissection and structure tracing
- Export: Anatomical reference frameworks

Week 5-6: Microscopic Normal Anatomy
- Biological: High-resolution cellular architecture
- AI: Learns cell types and tissue organization
- Expert: Correlates gross and micro anatomy
- Gaming: Multi-scale navigation challenges
- Export: Hierarchical anatomical models

Week 7-8: Normal Variants and Artifacts
- Biological: Learns acceptable variation ranges
- AI: Distinguishes true pathology from artifacts
- Expert: Shows common pitfalls
- Gaming: "Spot the difference" challenges
- Export: Decision boundaries for normal vs. abnormal

Stage 2: Pathological Processes (Weeks 9-24)

Week 9-12: Cell Injury and Adaptation
- Biological: Recognizes atrophy, hypertrophy, metaplasia
- AI: Learns cellular stress responses
- Expert: Explains mechanisms
- Gaming: Disease progression simulations
- Export: Process models for common injuries

Week 13-16: Inflammation and Repair
- Biological: Identifies inflammatory patterns
- AI: Learns temporal evolution
- Expert: Correlates with clinical syndromes  
- Gaming: Time-series prediction challenges
- Export: Dynamic disease models

Week 17-20: Neoplasia Fundamentals
- Biological: Distinguishes benign from malignant
- AI: Learns architectural and cytologic criteria
- Expert: Teaches diagnostic algorithms
- Gaming: Grading and staging exercises
- Export: Tumor classification frameworks

Week 21-24: Organ-Specific Pathology
- Biological: Integrates anatomy with disease patterns
- AI: Learns disease-specific features per organ
- Expert: Provides differential diagnoses
- Gaming: Complex case presentations
- Export: Comprehensive diagnostic models

Stage 3: Advanced Reasoning (Weeks 25-36)

Week 25-28: Rare Diseases
- Biological: Applies analogical reasoning
- AI: Learns few-shot classification
- Expert: Demonstrates expert thought process
- Gaming: Challenging cases from rare disease databases
- Export: Generalizable reasoning frameworks

Week 29-32: Atypical Presentations
- Biological: Updates models with exceptions
- AI: Learns distribution of variants
- Expert: Explains when rules break down
- Gaming: "Diagnosis mystery" scenarios
- Export: Uncertainty quantification models

Week 33-36: Integrated Diagnosis
- Biological: Combines histology, imaging, clinical data
- AI: Multi-modal integration
- Expert: Demonstrates comprehensive workup
- Gaming: Full diagnostic challenges with feedback
- Export: Complete clinical reasoning system

Validation and Deployment

Testing the System:

Benchmark Comparisons:

Standard CNN trained on 1M images
vs.
Hybrid Bio-AI trained on 10K images with expert feedback

Metrics:

  • Accuracy on common diseases (should match CNNs)

  • Accuracy on rare diseases (should exceed CNNs)

  • Adversarial robustness (should exceed CNNs)

  • Explainability scores (should vastly exceed CNNs)

  • Learning efficiency (should exceed CNNs)

  • Generalization to new scanners/stains (should exceed CNNs)
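
A toy harness for the robustness and generalization checks above, with a stand-in classifier and synthetic "images" so the sketch runs end to end; none of the numbers mean anything clinically.

import numpy as np

rng = np.random.default_rng(4)

def evaluate(model, cases, perturb=None):
    """Fraction of cases classified correctly, optionally after a perturbation."""
    hits = sum(model(perturb(x) if perturb else x) == y for x, y in cases)
    return hits / len(cases)

# Synthetic stand-ins: each "image" is a feature vector whose third entry plays
# the role of fibrosis and fixes the true label.
cases = [(x, int(x[2] > 0.5)) for x in rng.random((200, 4))]
noise = lambda x: x + rng.normal(0, 0.2, x.shape)      # crude "scanner shift"
baseline = lambda x: int(x[2] > 0.5)                   # stand-in classifier

print("clean accuracy:  ", evaluate(baseline, cases))
print("shifted accuracy:", evaluate(baseline, cases, perturb=noise))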

Deployment Options:

Level 1: Neuromorphic Chips

  • Energy-efficient edge deployment

  • Maintains biological grounding

  • Suitable for point-of-care devices

  • Can update through continued learning

Level 2: Hybrid Cloud System

  • Maintains connection to biological tissue

  • Handles difficult cases

  • Continuous learning from new cases

  • Expert consultation for edge cases

Level 3: Exported Models

  • Traditional AI using learned representations

  • Deployed where neuromorphic unavailable

  • Retains interpretability through exported structure

  • Periodic re-grounding with biological system


Why This Transforms Medical AI

Current Limitations Addressed:

Problem: Needs millions of training examples
✓ Solution: Few-shot learning through biological priors

Problem: Fails on distribution shift
✓ Solution: Learns transferable 3D models, not dataset-specific patterns

Problem: No reasoning or explainability
✓ Solution: Explicit internal models can be interrogated

Problem: Can't handle rare diseases
✓ Solution: Analogical reasoning from principles

Problem: Brittle to adversarial examples
✓ Solution: Biological robustness through evolved priors

Problem: Requires retraining for new domains
✓ Solution: Continues learning like human experts

The Clinical Impact:

For Common Diseases:

  • Matches current AI accuracy

  • But provides explanations

  • Catches atypical presentations

  • Reduces false positives

For Rare Diseases:

  • Exceeds current AI (which often fails completely)

  • Provides differential diagnosis

  • Suggests additional tests

  • Learns from single examples

For Novel Diseases:

  • Can reason about never-seen-before entities

  • Uses mechanistic understanding

  • Generates hypotheses

  • Facilitates discovery

For Medical Education:

  • Gaming platform teaches medical students

  • Makes pathology engaging and interactive

  • Provides immediate feedback

  • Tracks learning progress

  • Scales expert teaching


Conclusion: A New Paradigm for Medical AI


The future of medical artificial intelligence isn't about replacing human expertise with pattern-matching algorithms. It's about creating hybrid systems that learn the way humans learn—building internal models, reasoning from principles, generalizing from limited examples, and explaining their thinking.


By coupling AI to biological neural tissue—whether in living systems or reconstructed brains—we inherit millions of years of evolutionary optimization. The biological substrate provides innate priors, developmental scaffolding, and robust learning mechanisms that current AI lacks.


This approach bridges our vision of biological-silicon hybrid intelligence with the urgent clinical need for AI that truly understands anatomy and pathology. It transforms medical image analysis from pattern recognition to genuine comprehension.


The technology exists. The biological understanding exists. The clinical need is urgent. The gaming platform can engage millions in teaching emerging intelligence.


What we need now is the vision to put it together—and the commitment to do it right.


This isn't just about building better diagnostic tools. It's about establishing a new relationship between human expertise and artificial intelligence—one where AI learns from us, with us, and ultimately alongside us.


The hybrid future of medical AI is beginning.


The question is: Will we build it with the care, wisdom, and humanity it deserves?


For more information:


This work builds on foundational research in biological computing, neuromorphic hardware, and medical artificial intelligence, with support from the Fitzgerald Translational Neuroscience Fund.
