Version: 1.3
NeuroPersona is not a static analysis tool, but a bio-inspired simulation platform designed to replicate the dynamic and often variable processes of human cognition and emotion when engaging with a topic or question. Instead of producing a single, deterministic answer, NeuroPersona explores different plausible "thought paths" or "perspectives" that may emerge in response to a given input.
The system models interacting cognitive modules (creativity, criticism, simulation, etc.), an adaptive value system, a dynamic emotional state (based on the PAD model), and neural plasticity.
Each simulation run represents a unique snapshot of a possible cognitive/affective state.
NeuroPersona’s approach differs fundamentally from traditional AI models:
Variability as a Feature:
The system is inherently non-deterministic. Repeated runs with the same input will produce different yet internally plausible end states due to random weight initialization, stochastic activation noise, emotional dynamics, and path-dependent learning and plasticity processes.
This mirrors the natural variability of human thinking.
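A minimal sketch of how such variability can arise (illustrative only; the constants and the update rule are assumptions, not the actual NeuroPersona code):

```python
import random

def run_once(seed=None):
    """Toy run: random weight initialization plus activation noise
    yields different but bounded end states across repeated runs."""
    rng = random.Random(seed)
    weight = rng.uniform(0.1, 0.9)           # random weight initialization
    activation = 0.5
    for _ in range(50):                      # simplified learning epochs
        noise = rng.gauss(0.0, 0.05)         # stochastic activation noise
        activation = min(1.0, max(0.0, activation + weight * 0.01 + noise))
    return round(activation, 3)

# Repeated runs with the same input produce different yet plausible results:
print([run_once() for _ in range(3)])
```

Passing an explicit seed makes a single run reproducible, which is useful when you want to inspect one specific "thought path" in detail.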
Emergent Perspectives:
Each simulation run can be seen as a unique "thought process" prioritizing different aspects of a topic (sometimes innovation, sometimes safety, sometimes fundamentals).
The result is not "right" or "wrong" — it is a simulated, plausible perspective.
State Interpretation:
The goal is to understand the final cognitive and emotional state within a single run.

Exploration of Possibility Space:
By simulating multiple runs (optionally with slightly varied parameters), you can explore the space of possible cognitive reactions to a topic, rather than focusing on a single definitive answer.
Dynamic Input Processing:
Utilizes a (simulated) "Perception Unit" to transform user prompts into structured data.
Modular Cognitive Architecture:
Simulates interacting modules:
- `CortexCreativus`: Idea generation and associative thinking.
- `CortexCriticus`: Analysis, evaluation, and risk assessment.
- `SimulatrixNeuralis`: Scenario thinking and mental simulation.
- `LimbusAffektus`: Dynamic emotional state modeling (Pleasure, Arousal, Dominance).
- `MetaCognitio`: Monitoring of network states and adaptive strategic adjustments (e.g., learning rate tuning).
- `CortexSocialis`: Modeling of social influence factors.
Adaptive Value System:
Internal values (e.g., innovation, safety, ethics) influence behavior and adjust dynamically during the simulation.
Neural Plasticity:
Simulates structural changes (connection pruning and sprouting) and activity-dependent learning (Hebbian learning, reinforcement).
Stochasticity:
Purposeful use of randomness to emulate biological variability.
Persistent Memory:
Long-term storage and retrieval of relevant information via SQLite database.
Reporting and Visualization:
Generates detailed HTML reports and plots analyzing network dynamics and end states.
Orchestration:
The `orchestrator.py` script controls the complete workflow from prompt to final enriched response (optionally integrating an external LLM API such as Gemini).
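The plasticity features above (Hebbian learning, pruning, sprouting) can be illustrated with a toy sketch; the constants, signatures, and dictionary representation are assumptions, not the actual `neuropersona_core.py` implementation:

```python
import random

PRUNING_THRESHOLD = 0.05   # illustrative values, not the real constants
LEARNING_RATE = 0.1

def hebbian_update(weight, pre_activation, post_activation):
    """'Cells that fire together wire together': strengthen co-active links."""
    return min(1.0, weight + LEARNING_RATE * pre_activation * post_activation)

def prune(connections):
    """Drop connections whose weight fell below the pruning threshold."""
    return {pair: w for pair, w in connections.items() if w >= PRUNING_THRESHOLD}

def sprout(connections, nodes, rng):
    """Occasionally grow a new weak connection between two random nodes."""
    a, b = rng.sample(nodes, 2)
    connections.setdefault((a, b), 0.1)   # new connections start weak
    return connections

conns = {("A", "B"): 0.5, ("B", "C"): 0.01}
conns[("A", "B")] = hebbian_update(conns[("A", "B")], 0.8, 0.9)   # 0.5 -> 0.572
conns = prune(conns)                      # ("B", "C") is pruned (0.01 < 0.05)
conns = sprout(conns, ["A", "B", "C"], random.Random(0))
print(conns)
```

The interplay of strengthening, pruning, and sprouting is what makes each run's final network structure path-dependent.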
Workflow (`orchestrator.py`):
Perception:
A user prompt is converted into structured data (simulated CSV/DataFrame) via `gemini_perception_unit.py`.
Cognition/Simulation:
This data is fed into `neuropersona_core.py`. The network is initialized and simulated over a number of epochs, during which learning, emotions, values, and plasticity interact.
Synthesis (Optional):
The results (report, structured data) are used to generate a final, contextually enriched answer, potentially involving an external LLM API (`generate_final_response` in `orchestrator.py`).
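The three stages can be summarized in a minimal Python sketch. The function names mirror those mentioned above, but the bodies are placeholders standing in for the real implementations:

```python
def get_input_data(prompt):
    """Stage 1 (Perception): turn the user prompt into structured data."""
    return [{"category": "innovation", "signal": 0.7}]   # stand-in for a DataFrame

def run_neuropersona(structured_data, epochs=10):
    """Stage 2 (Cognition): simulate the network; returns report + results."""
    report = f"Simulated {epochs} epochs over {len(structured_data)} categories."
    return report, {"dominant_category": "innovation"}

def generate_final_response(prompt, report, results, api_key=None):
    """Stage 3 (Synthesis, optional): enrich via an external LLM, else fall back."""
    if api_key is None:
        return report                                    # fallback: plain NP report
    return f"LLM-enriched answer based on: {results['dominant_category']}"

prompt = "Should we invest in a new technology?"
data = get_input_data(prompt)
report, results = run_neuropersona(data)
print(generate_final_response(prompt, report, results))
```

Note the fallback behavior: without an API key, the NeuroPersona report itself is the final output.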
Core Module (`neuropersona_core.py`):
Classes:
`Node`, `MemoryNode`, `ValueNode`, `Connection`, the specialized module classes (as listed above), and `PersistentMemoryManager`.
Core Functions:
`simulate_learning_cycle`, `calculate_value_adjustment`, `update_emotion_state`, `hebbian_learning`, `apply_reinforcement`, `prune_connections`, `sprout_connections`, `generate_final_report`, `create_html_report`, and plotting utilities.
Parameters:
Numerous constants control learning rates, decay rates, thresholds, emotional dynamics, and allow fine-tuning of system behavior.
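To illustrate how such classes might fit together, here is a deliberately simplified sketch of a node/connection structure (not the actual class definitions from `neuropersona_core.py`):

```python
class Connection:
    """Weighted link between two nodes."""
    def __init__(self, source, target, weight=0.5):
        self.source, self.target, self.weight = source, target, weight

class Node:
    """Basic unit with an activation level and outgoing connections."""
    def __init__(self, label):
        self.label = label
        self.activation = 0.0
        self.connections = []

    def connect(self, other, weight=0.5):
        self.connections.append(Connection(self, other, weight))

    def propagate(self):
        """Push weighted activation to downstream nodes."""
        for conn in self.connections:
            conn.target.activation += self.activation * conn.weight

a, b = Node("Innovation"), Node("Risk")
a.connect(b, weight=0.8)
a.activation = 1.0
a.propagate()
print(b.activation)   # 0.8
```

Specialized classes like `MemoryNode` or `ValueNode` would extend such a base with extra state (e.g., consolidation level or a value score).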
Clone the Repository:
```bash
git clone <repository-url>
cd <repository-folder>
```
Create a Virtual Environment (recommended):
```bash
python -m venv venv
# Windows
venv\Scripts\activate
# macOS/Linux
source venv/bin/activate
```
Install Dependencies:
(Make sure a `requirements.txt` exists.)
```bash
pip install -r requirements.txt
# Required: pandas, numpy, matplotlib
# Optional: networkx, tqdm, google-generativeai
```
Set API Key (Optional):
If you want to use full orchestration with external LLM (e.g., Gemini):
```bash
# Windows (PowerShell)
$env:GEMINI_API_KEY="YOUR_API_KEY"
# Windows (CMD)
set GEMINI_API_KEY=YOUR_API_KEY
# macOS/Linux
export GEMINI_API_KEY='YOUR_API_KEY'
```
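On the Python side, this key would typically be read from the environment; a minimal sketch (assuming a standard `os.environ` lookup; the actual code in `orchestrator.py` may differ):

```python
import os

# Read the key set in the shell above; None if it was never exported.
api_key = os.environ.get("GEMINI_API_KEY")
if api_key:
    print("Gemini API key found: full orchestration available.")
else:
    print("No API key set: falling back to the NeuroPersona report only.")
```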
You can run a full simulation either through the GUI or directly through the orchestrator:
Start GUI:
```bash
python neuropersona_core.py
```
(The GUI allows you to enter prompts, adjust key simulation parameters, and start the full workflow.)
Run Orchestrator Directly:
```bash
python orchestrator.py
```
(The script will prompt you for input if run directly.)
Remember the core philosophy:
Focus on Single Run Interpretation:
Analyze the generated HTML report and plots for this specific simulation run.
Look at the State:
How do dominant categories, module activities, values, and emotions interact? Is the resulting "profile" internally coherent?
Avoid Rigid Comparisons:
Do not expect identical results between runs. Observe the range of plausible states.
Value Saturation (Values at 1.0):
Often a sign of rapid learning given limited data. Interpret this as "maximum relevance in this run," while recognizing that differentiation at the top end is lost.
"Inconsistencies" are Valid:
If, for example, `CortexCriticus` is highly active while the `Safety` value remains low, this still represents a valid cognitive stance, not an error.
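The value-saturation effect described above follows directly from a clamped update rule; a toy sketch (the constant and the clamp at 1.0 are illustrative assumptions):

```python
VALUE_UPDATE_RATE = 0.15   # illustrative, not the real constant

def update_value(value, relevance_signal):
    """Increase a value proportionally to its relevance, clamped to [0, 1]."""
    return max(0.0, min(1.0, value + VALUE_UPDATE_RATE * relevance_signal))

value = 0.5
for _ in range(10):                 # repeated strong relevance signals
    value = update_value(value, 1.0)
print(value)   # saturates at 1.0: differentiation at the top end is lost
```

Once several values hit the ceiling, their ranking within this run can no longer be distinguished, which is exactly the "maximum relevance in this run" reading suggested above.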
Key Parameters (`neuropersona_core.py` Constants):
- `DEFAULT_EPOCHS`: Number of simulation cycles.
- `DEFAULT_LEARNING_RATE`: Base learning rate.
- `DEFAULT_DECAY_RATE`: Rate of activation/weight decay without input (important to counteract saturation).
- `VALUE_UPDATE_RATE`: Speed of internal value adjustments.
- `EMOTION_UPDATE_RATE`, `EMOTION_DECAY_TO_NEUTRAL`: Control emotional dynamics.
- `PRUNING_THRESHOLD`, `SPROUTING_THRESHOLD`: Control structural plasticity.

Fine-tuning these parameters (via the GUI or settings files) affects the dynamics and differentiation capability of the system.
Imagine you ask a calculator, "What is 2 + 2?" — you always get "4". That’s a deterministic system.
NeuroPersona is different. Imagine asking a person a complex question like:
"Should we heavily invest in a new, risky technology?"
On Day 1, feeling optimistic and inspired by success stories, the answer might be:
"Absolutely! Huge opportunities — we must innovate!" (Focus: innovation, opportunity).
On Day 2, after reading about similar failures and feeling cautious, the answer could be:
"Careful! We must assess risks first and ensure ethical responsibility." (Focus: safety, ethics, risk assessment).
On Day 3, feeling highly analytical, the person might say:
"Let's first analyze the fundamentals and long-term efficiency impacts." (Focus: fundamentals, efficiency).
All these answers are plausible human reactions, depending on internal "mood" (emotions), "priorities" (values), and currently salient information.
NeuroPersona replicates exactly this kind of variability.
Thus, if NeuroPersona delivers different outcomes across runs, it’s not an error — it’s a feature.
It simulates different but coherent cognitive perspectives, illustrating the diversity of plausible cognitive-emotional responses to a problem.
Diagram 1: GUI & Workflow Start
```mermaid
graph TD
    %% =============================================
    %% == 1. GUI & Initialization ==
    %% =============================================
    StartApp["User starts neuropersona_core.py"] --> CallStartGUI["call start_gui()"]
    %% ^-- text in quotation marks, comment on its own line
    CallStartGUI --> InitGUI["Initialize Tkinter GUI"]
    InitGUI --> LoadSettingsCheck{"Does<br>settings.json exist?"}
    LoadSettingsCheck -- Yes --> CallLoadGUISettings["call load_gui_settings()"]
    CallLoadGUISettings --> UpdateGUIWidgets["Update GUI widgets"]
    UpdateGUIWidgets --> GUIReady["GUI ready"]
    LoadSettingsCheck -- No --> GUIReady
    GUIReady --> UserInteraction["User interaction"]
    UserInteraction -- Click 'Save params' --> CallSaveGUISettings["call save_gui_settings()"]
    CallSaveGUISettings --> WriteJSON["Write JSON"]
    WriteJSON --> GUIReady
    UserInteraction -- Click 'Load params' --> CallLoadGUISettings
    UserInteraction -- Click 'Start workflow' --> CallStartWorkflowAction["call start_full_workflow_action()"]
    CallStartWorkflowAction --> GetGUIInput["Get GUI input"]
    GetGUIInput --> ValidateInput{"Input valid?"}
    ValidateInput -- No --> ShowErrorMsg["Show error"]
    ShowErrorMsg --> GUIReady
    ValidateInput -- Yes --> DisableStartButton["Disable start button"]
    DisableStartButton --> CreateWorkflowThread["Create & start workflow thread<br>(run_workflow_in_thread)"]
    CreateWorkflowThread --> UpdateGUIStatusInit["Update GUI status: 'Starting...'"]
    CreateWorkflowThread --> ToDiagram2["(See Diagram 2: Workflow Thread)"]
    %% End of this diagram
```
Diagram 2: Workflow Thread & Orchestrator (High-Level)
```mermaid
graph TD
    %% =============================================
    %% == 2. Workflow Thread & Orchestrator ==
    %% =============================================
    WFThreadStart("Workflow thread start<br>(run_workflow_in_thread)") --> ImportOrchestrator{"Import<br>orchestrator?"}
    ImportOrchestrator -- No --> WF_ERR_Import["Error"] --> UpdateGUIStatusErrorImp["Update GUI status"] --> WFThreadEndError(Error End)
    ImportOrchestrator -- Yes --> GetExecFunc["Get execute_full_workflow"]
    GetExecFunc --> CallOrchestrator["Call orchestrator.execute_full_workflow"]
    subgraph Orchestrator_Execute [Orchestrator: execute_full_workflow]
        direction TB
        OrchStart(Start) --> Step1_InputData["Step 1:<br>get_input_data"]
        Step1_InputData --> ValidateInputDF{"Input DF OK?"}
        ValidateInputDF -- No --> Orch_ERR_InputData["Error"] --> OrchEndError(Error)
        ValidateInputDF -- Yes --> Step2_NeuroPersona["Step 2:<br>run_neuropersona<br>(see Diagram 3)"]
        Step2_NeuroPersona --> ValidateNPResults{"NP results OK?"}
        ValidateNPResults -- No --> Orch_ERR_NPSim["Error"] --> OrchEndError
        ValidateNPResults -- Yes --> CheckGeminiConfig{"Gemini API OK?"}
        CheckGeminiConfig -- No --> SkipGemini["Use NP report<br>as fallback"] --> OrchEndSuccess(Result)
        CheckGeminiConfig -- Yes --> Step3_SynthesizeResponse["Step 3:<br>generate_final_response<br>(see Diagram 4)"]
        Step3_SynthesizeResponse --> SynthResult{"Synthesis OK?"}
        SynthResult -- Yes --> OrchEndSuccess
        SynthResult -- No --> Orch_ERR_Synth["Error"] --> OrchEndError
    end
    CallOrchestrator --> Orchestrator_Execute
    Orchestrator_Execute -- Success --> WF_ReceiveResult["Receive result"] --> UpdateGUIStatusSuccess["Update GUI status"] --> CallDisplayResult["-> display_final_result<br>(see Diagram 5)"] --> WFThreadEndSuccess(End)
    Orchestrator_Execute -- Error --> WF_ReceiveError["Receive error"] --> UpdateGUIStatusError["Update GUI status"] --> CallDisplayResult --> WFThreadEndError
    %% End of this diagram
```
Diagram 3: NeuroPersona Core Simulation (High-Level & Loop)
(Here you could decide how much detail to show from the `simulate_learning_cycle` loop. Perhaps just the main phases A-O as a chain?)
```mermaid
graph TD
    %% =============================================
    %% == 3. NeuroPersona Core Simulation ==
    %% =============================================
    Call_run_np_simulation("Call:<br>neuropersona_core.run_neuropersona_simulation") --> NP_Start(Start)
    NP_Start --> NP_Preprocess[preprocess_data]
    NP_Preprocess --> NP_InitNodes[initialize_network_nodes]
    NP_InitNodes --> NP_CallSimulateCycle[call simulate_learning_cycle]
    subgraph Simulation_Loop [Epoch loop: simulate_learning_cycle]
        direction TB
        LoopInit["Init & connect"] --> LoopStart["Epoch start"]
        LoopStart --> Phases_A_to_O["Phases A-O:<br>Reset, Input, Propagate,<br>Update Activation/Emotion/Values,<br>Modules, Learning, Decay,<br>Plasticity?, Consolidation?, Interpret"]
        Phases_A_to_O --> NextEpoch{"Next epoch?"}
        NextEpoch -- Yes --> LoopStart
        NextEpoch -- No --> EndSimLoop["End of epochs"]
    end
    NP_CallSimulateCycle --> Simulation_Loop
    Simulation_Loop --> ReceiveSimResults["Receive results"]
    ReceiveSimResults --> NP_GenerateReport["generate_final_report"] --> ReceiveReportAndStruct["Receive report & struct"]
    ReceiveReportAndStruct --> CheckPlots{"Plots?"}
    CheckPlots -- Yes --> NP_GeneratePlots["Generate plots"] --> PostPlotActions
    CheckPlots -- No --> PostPlotActions
    PostPlotActions --> CheckSaveState{"Save?"}
    CheckSaveState -- Yes --> NP_SaveState[save_final_network_state] --> NP_CreateHTML[create_html_report]
    CheckSaveState -- No --> NP_CreateHTML
    NP_CreateHTML --> ReturnNPResults["Return report & struct"] --> NP_End(End)
    %% End of this diagram
```
Diagram 4: Response Synthesis (Gemini)
```mermaid
graph TD
    %% =============================================
    %% == 4. Response Synthesis (Gemini) ==
    %% =============================================
    %% == 1. NODE DEFINITIONS ==
    Call_generate_final_response("Call:<br>orchestrator.generate_final_response")
    InputCollectionPhase["1. Input collection"]
    PromptCreationPhase["2. Prompt creation"]
    GeminiProcessingPhase["3. Gemini API processing"]
    ResultHandlingPhase["4. Result handling"]
    ToDiagram2_Success["(Back to Diagram 2 - success)"]
    ToDiagram2_Error["(Back to Diagram 2 - error)"]
    %% Nodes within the input-collection subgraph
    subgraph Input_Sammlung_Details [Details: input collection]
        direction LR
        IS_UserInput["Original user prompt"]
        IS_NPReport["NeuroPersona report<br>(context)"]
        IS_NPStruct["Structured NP results<br>(dominance, module levels...)"]
    end
    %% Nodes within the prompt-creation subgraph
    subgraph Prompt_Erstellung_Details [Details: prompt creation]
        direction TB
        PE_ExtractKeyResults["Extract dominant categories,<br>module levels from NP struct"]
        PE_AssemblePrompt["Assemble prompt:<br>inputs + **instructions**(...)"]
    end
    %% Nodes within the Gemini-processing subgraph
    subgraph Gemini_Verarbeitung_Details [Details: Gemini API processing - black box]
        direction TB
        GV_CallGeminiAPI["Send prompt to<br>Gemini API"]
        GV_InternalGeminiProcess{{"**Gemini LLM<br>(inference)**<br><br><i>Processes prompt...</i>"}}
    end
    %% Nodes within the result-handling subgraph
    subgraph Ergebnis_Handhabung_Details [Details: result handling]
        direction TB
        EH_HandleGeminiResponse{"Receive response<br>from API"}
        EH_CheckResponse{"Response OK<br>& not blocked?"}
        EH_ExtractText["Extract final<br>response text"]
        EH_FormatError["Format<br>error message"]
        EH_EndSuccess(Final response text)
        EH_EndError(Error message)
    end
    %% == 2. LINK DEFINITIONS ==
    %% Main flow between the phases
    Call_generate_final_response --> InputCollectionPhase
    InputCollectionPhase --> PromptCreationPhase
    PromptCreationPhase --> GeminiProcessingPhase
    GeminiProcessingPhase --> ResultHandlingPhase
    %% Links into and within subgraph 1: input collection
    InputCollectionPhase --> IS_UserInput
    IS_UserInput --> IS_NPReport
    IS_NPReport --> IS_NPStruct
    %% Links into and within subgraph 2: prompt creation
    PromptCreationPhase --> PE_ExtractKeyResults
    PE_ExtractKeyResults --> PE_AssemblePrompt
    %% Links into and within subgraph 3: Gemini processing
    GeminiProcessingPhase --> GV_CallGeminiAPI
    GV_CallGeminiAPI --> GV_InternalGeminiProcess
    %% Links into and within subgraph 4: result handling
    ResultHandlingPhase --> EH_HandleGeminiResponse
    EH_HandleGeminiResponse --> EH_CheckResponse
    EH_CheckResponse -- Yes --> EH_ExtractText
    EH_ExtractText --> EH_EndSuccess
    EH_CheckResponse -- No --> EH_FormatError
    EH_FormatError --> EH_EndError
    %% Final links to the endpoints
    EH_EndSuccess --> ToDiagram2_Success
    EH_EndError --> ToDiagram2_Error
    %% End of this diagram
```
Diagram 5: GUI Post-Workflow
```mermaid
graph TD
    %% =============================================
    %% == 5. GUI Post-Workflow ==
    %% =============================================
    FromWorkflowThread("... from the workflow thread<br>(via root.after)") --> DisplayResultWindow["display_final_result:<br>show result window"]
    DisplayResultWindow --> UserClosesWindow{"User closes window?"}
    UserClosesWindow --> ReEnableStartButton["Re-enable buttons<br>(in the thread's finally block)"]
    ReEnableStartButton --> GUIIdle["GUI ready again"]
    %% End of this diagram
```