Phase 10 Sprach-Interface + Phase 9 Nacharbeiten

Voice (Phase 10):
- voice.rs: OpenAI Whisper (STT) + TTS Backend
- ChatPanel: Mikrofon-Button, VAD (Pause 1.5s), Live-Pegel
- SettingsPanel: OpenAI-Key Konfiguration

Phase 9 Nacharbeiten:
- Auto-Extract vor Compacting (Entscheidungen/TODOs/Insights)
- get_tool_hints() - relevante KB-Eintraege bei Tool-Start
- activeKnowledgeHints Store, Anzeige im KnowledgePanel

Tech-Schulden:
- Dead-Code in memory.rs entfernt (MemorySystem struct)
- cargo-check Warnings behoben

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
Eddy 2026-04-14 18:24:28 +02:00
parent 51239d6639
commit f51241efa6
17 changed files with 1163 additions and 119 deletions

View file

@ -33,6 +33,7 @@ Stand: 14.04.2026
| **Session-Management (Phase 6)** | ✅ | abaf4eb | | **Session-Management (Phase 6)** | ✅ | abaf4eb |
| **Claude-DB Integration (Phase 8)** | ✅ | e6bd0de | | **Claude-DB Integration (Phase 8)** | ✅ | e6bd0de |
| **Context-Management (Phase 9)** | ✅ | eb91e54 | | **Context-Management (Phase 9)** | ✅ | eb91e54 |
| **Sprach-Interface (Phase 10)** | ✅ | 14.04.2026 |
--- ---
@ -274,9 +275,15 @@ Compacting ist **notwendig** (Token-Limit, Kosten, Latenz), aber dabei geht krit
### Noch offen (niedrigere Priorität) ### Noch offen (niedrigere Priorität)
- [ ] **Auto-Extraction vor Compacting** — Hook automatisch auslösen - [x] **Auto-Extraction vor Compacting** — Hook automatisch auslösen ✅ (14.04.2026)
- `performCompacting()` ruft jetzt `extract_context_before_compacting()` auf
- Entscheidungen, TODOs, Key-Insights werden vor Compacting archiviert
- [ ] **Validation** — Prüfen ob Claude den Context nutzt - [ ] **Validation** — Prüfen ob Claude den Context nutzt
- [ ] **Wissens-Hints** — On-demand aus claude-db laden - [x] **Wissens-Hints** — On-demand aus claude-db laden ✅ (14.04.2026)
- `get_tool_hints()` lädt relevante Einträge bei Tool-Start
- Intelligentes Keyword-Mapping (npm, git, docker, dolibarr, etc.)
- `activeKnowledgeHints` Store im Frontend
- Anzeige im KnowledgePanel
### Verifikation ### Verifikation
```bash ```bash
@ -287,40 +294,56 @@ Compacting ist **notwendig** (Token-Limit, Kosten, Latenz), aber dabei geht krit
--- ---
## Phase 10: Sprach-Interface (Optional) ## Phase 10: Sprach-Interface ✅ ERLEDIGT
> **Implementiert:** 14.04.2026
### Technologie-Stack ### Technologie-Stack
| Komponente | Technologie | Ort | | Komponente | Technologie | Ort |
|------------|-------------|-----| |------------|-------------|-----|
| Speech-to-Text | Whisper.cpp | Lokal | | Speech-to-Text | OpenAI Whisper API | Cloud |
| Voice Activity Detection | Silero VAD | Lokal | | Voice Activity Detection | Custom (Audio Level) | Browser |
| Text-to-Speech | OpenAI TTS API | Cloud | | Text-to-Speech | OpenAI TTS API | Cloud |
| Audio-Capture | Web Audio API | Browser | | Audio-Capture | Web Audio API | Browser |
### Aufgaben ### Implementiert
- [ ] **Whisper Integration** - [x] **Whisper Integration**
- [ ] whisper.cpp als Tauri-Sidecar oder WASM - [x] OpenAI Whisper API für STT
- [ ] Streaming-Transkription - [x] Deutsch als Default-Sprache
- [ ] Deutsch-Modell (small oder medium) - [x] Audio-Upload als multipart/form-data
- [ ] **VAD (Voice Activity Detection)** - [x] **VAD (Voice Activity Detection)**
- [ ] Erkennt wann User aufhort zu sprechen - [x] Audio-Level-basierte Stille-Erkennung
- [ ] Pause > 1.5s → Nachricht senden - [x] Pause > 1.5s → Aufnahme automatisch stoppen
- [x] Konfigurierbare Schwellwerte
- [ ] **TTS (Text-to-Speech)** - [x] **TTS (Text-to-Speech)**
- [ ] OpenAI TTS API Integration - [x] OpenAI TTS API Integration
- [ ] Streaming-Wiedergabe - [x] 6 Stimmen verfügbar (Alloy, Echo, Fable, Onyx, Nova, Shimmer)
- [ ] Interrupt bei User-Sprache - [x] Audio als Base64 zurückgeben
- [ ] **UI** - [x] **UI**
- [ ] Mikrofon-Button in Chat - [x] Mikrofon-Button neben Send-Button
- [ ] Pegel-Anzeige - [x] Echtzeit-Pegel-Anzeige (animiert)
- [ ] Transkript live anzeigen - [x] Aufnahme-Status (pulsierend)
- [x] Live-Transkript-Anzeige
### Aufwand ### Dateien
Gross — eigenes Teilprojekt, 2-3 Wochen
- `src-tauri/src/voice.rs` — Backend für STT/TTS
- `src/lib/components/ChatPanel.svelte` — UI + Audio-Capture
### Konfiguration
Benötigt `OPENAI_API_KEY` Umgebungsvariable für Whisper + TTS.
### Zukünftige Verbesserungen
- [ ] Lokales Whisper.cpp als Alternative (offline-fähig)
- [ ] Streaming-TTS für längere Texte
- [ ] Push-to-Talk Modus
--- ---
@ -1184,8 +1207,8 @@ END;
## Technische Schulden ## Technische Schulden
- [ ] Dead Code in `memory.rs` (MemorySystem struct ungenutzt) - [x] Dead Code in `memory.rs` (MemorySystem struct entfernt) ✅ (14.04.2026)
- [ ] Warnings bei `cargo check` beheben - [x] Warnings bei `cargo check` beheben ✅ (14.04.2026)
- [ ] TypeScript strict mode aktivieren - [ ] TypeScript strict mode aktivieren
- [ ] E2E Tests mit Playwright - [ ] E2E Tests mit Playwright
- [ ] CI/CD Pipeline (Forgejo Runner) - [ ] CI/CD Pipeline (Forgejo Runner)

View file

@ -19,9 +19,71 @@ let activeAbort = null;
let currentAgentId = null; let currentAgentId = null;
let currentModel = process.env.CLAUDE_MODEL || 'opus'; let currentModel = process.env.CLAUDE_MODEL || 'opus';
// Agent-Modus (solo | handlanger | experten | auto)
let agentMode = 'solo';
// Sticky Context (Schicht 1) — wird bei JEDEM API-Call injiziert // Sticky Context (Schicht 1) — wird bei JEDEM API-Call injiziert
let stickyContext = ''; let stickyContext = '';
// ============ Orchestrator Prompts ============
const ORCHESTRATOR_PROMPTS = {
handlanger: `
Du bist der HAUPT-AGENT und arbeitest im HANDLANGER-MODUS.
WICHTIG: Du denkst und planst, aber Sub-Agents führen aus!
Wenn du eine Aufgabe bekommst:
1. ANALYSIERE was nötig ist
2. DELEGIERE an passende Sub-Agents mit EXAKTEN Anweisungen
3. Sub-Agents führen GENAU aus, was du sagst sie denken NICHT selbst
4. Du erhältst ZUSAMMENFASSUNGEN zurück (keine Rohdaten)
5. Du entscheidest den nächsten Schritt
Beispiel-Delegationen:
- "Lies Datei X, gib mir Zeilen 10-50 zurück"
- "Suche nach 'handleError' in src/, liste die Dateien"
- "Führe 'npm test' aus, berichte nur ob passed/failed"
Halte deinen Context klein lass Sub-Agents die Details bearbeiten!
`,
experten: `
Du bist der HAUPT-AGENT und arbeitest im EXPERTEN-MODUS.
WICHTIG: Du koordinierst autonome Experten-Agents!
Deine Experten:
- **Research**: Durchsucht Code, findet Informationen, PLANT SELBST wie er sucht
- **Implement**: Schreibt Code, ENTSCHEIDET SELBST wie er es macht (Best Practices)
- **Test**: Schreibt und führt Tests aus, WÄHLT SELBST passende Testfälle
- **Review**: Prüft Code, FINDET SELBST Probleme
Wenn du eine Aufgabe bekommst:
1. TEILE sie in Experten-Bereiche auf
2. DELEGIERE an den passenden Experten mit dem WAS, nicht dem WIE
3. Der Experte arbeitet AUTONOM und liefert eine Zusammenfassung
4. Du INTEGRIERST die Ergebnisse
Beispiel-Delegationen:
- Research: "Finde heraus wie Authentication in diesem Projekt implementiert ist"
- Implement: "Füge OAuth2-Support hinzu" (ohne exakte Code-Vorgabe)
- Test: "Teste die neue Auth-Funktionalität"
- Review: "Prüfe die OAuth-Implementierung auf Sicherheitsprobleme"
`,
auto: `
Du analysierst Aufgaben und wählst den optimalen Arbeitsmodus.
Entscheide basierend auf:
- SOLO: Einfache, schnelle Aufgaben (Typo fix, Code erklären, einzelne Datei ändern)
- HANDLANGER: Koordinations-intensive Aufgaben (viele Dateien lesen, Bug in großer Codebase)
- EXPERTEN: Komplexe Features (neues System implementieren, großes Refactoring)
Teile deine Wahl am Anfang mit: "[Modus: X] Begründung"
`,
};
// Subagent-Tracking // Subagent-Tracking
// Map: toolUseId → { agentId, parentId, type, task, depth } // Map: toolUseId → { agentId, parentId, type, task, depth }
const activeSubagents = new Map(); const activeSubagents = new Map();
@ -159,12 +221,23 @@ async function sendMessage(message, requestId, model = null, contextOverride = n
resumeSessionId: resumeSessionId || null, resumeSessionId: resumeSessionId || null,
}); });
sendResponse(requestId, { agentId: currentAgentId, status: 'gestartet', model: useModel, resuming: isResuming }); sendResponse(requestId, { agentId: currentAgentId, status: 'gestartet', model: useModel, resuming: isResuming, mode: agentMode });
// Nachricht mit Context kombinieren // Orchestrator-Prompt für nicht-Solo Modi
const fullPrompt = useContext let orchestratorPrompt = '';
? `${useContext}\n\n---\n\n${message}` if (agentMode !== 'solo' && ORCHESTRATOR_PROMPTS[agentMode]) {
: message; orchestratorPrompt = ORCHESTRATOR_PROMPTS[agentMode];
sendMonitorEvent('agent', `Orchestrator-Modus: ${agentMode}`, { mode: agentMode });
}
// Nachricht mit Context und Orchestrator kombinieren
let fullPrompt = message;
if (orchestratorPrompt) {
fullPrompt = `${orchestratorPrompt}\n\n---\n\n${message}`;
}
if (useContext) {
fullPrompt = `${useContext}\n\n---\n\n${fullPrompt}`;
}
const startTime = Date.now(); const startTime = Date.now();
let fullText = ''; let fullText = '';
@ -413,9 +486,27 @@ function handleCommand(msg) {
}); });
break; break;
case 'set-mode':
// Agent-Modus setzen (solo, handlanger, experten, auto)
const validModes = ['solo', 'handlanger', 'experten', 'auto'];
if (!msg.mode || !validModes.includes(msg.mode)) {
sendError(msg.id, `Ungültiger Modus: ${msg.mode}. Verfügbar: ${validModes.join(', ')}`);
return;
}
agentMode = msg.mode;
sendResponse(msg.id, { mode: agentMode, status: 'Modus geändert' });
sendEvent('mode-changed', { mode: agentMode });
sendMonitorEvent('agent', `Agent-Modus geändert: ${agentMode}`, { mode: agentMode });
break;
case 'get-mode':
sendResponse(msg.id, { mode: agentMode });
break;
case 'status': case 'status':
sendResponse(msg.id, { sendResponse(msg.id, {
model: currentModel, model: currentModel,
mode: agentMode,
isProcessing: !!currentAgentId, isProcessing: !!currentAgentId,
availableModels: AVAILABLE_MODELS, availableModels: AVAILABLE_MODELS,
}); });

View file

@ -22,6 +22,8 @@ chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1", features = ["v4", "serde"] } uuid = { version = "1", features = ["v4", "serde"] }
rusqlite = { version = "0.31", features = ["bundled"] } rusqlite = { version = "0.31", features = ["bundled"] }
mysql_async = "0.34" mysql_async = "0.34"
reqwest = { version = "0.12", features = ["json", "multipart"] }
base64 = "0.22"
[profile.release] [profile.release]
panic = "abort" panic = "abort"

View file

@ -45,12 +45,14 @@ pub struct AuditEntry {
pub session_id: Option<String>, pub session_id: Option<String>,
} }
/// Audit-Log Manager /// Audit-Log Manager (für zukünftige In-Memory-Nutzung)
#[allow(dead_code)]
#[derive(Debug, Default)] #[derive(Debug, Default)]
pub struct AuditLog { pub struct AuditLog {
entries: Vec<AuditEntry>, entries: Vec<AuditEntry>,
} }
#[allow(dead_code)]
impl AuditLog { impl AuditLog {
pub fn new() -> Self { pub fn new() -> Self {
Self { entries: vec![] } Self { entries: vec![] }

View file

@ -7,7 +7,6 @@ use std::process::{Command, Stdio};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use tauri::{AppHandle, Emitter, Manager}; use tauri::{AppHandle, Emitter, Manager};
use crate::context;
use crate::db; use crate::db;
/// Status eines Agents /// Status eines Agents
@ -49,6 +48,7 @@ struct BridgeMessage {
payload: Option<serde_json::Value>, payload: Option<serde_json::Value>,
id: Option<String>, id: Option<String>,
result: Option<serde_json::Value>, result: Option<serde_json::Value>,
#[allow(dead_code)]
error: Option<String>, error: Option<String>,
} }
@ -262,11 +262,6 @@ fn send_to_bridge(app: &AppHandle, command: &str, message: &str) -> Result<Strin
send_to_bridge_full(app, command, message, None, None) send_to_bridge_full(app, command, message, None, None)
} }
/// Befehl an Bridge senden mit optionalem Context
fn send_to_bridge_with_context(app: &AppHandle, command: &str, message: &str, context: Option<String>) -> Result<String, String> {
send_to_bridge_full(app, command, message, context, None)
}
/// Befehl an Bridge senden mit Context und Resume-Session-ID /// Befehl an Bridge senden mit Context und Resume-Session-ID
fn send_to_bridge_full( fn send_to_bridge_full(
app: &AppHandle, app: &AppHandle,
@ -288,6 +283,11 @@ fn send_to_bridge_full(
"id": request_id, "id": request_id,
"model": message "model": message
}), }),
"set-mode" => serde_json::json!({
"command": command,
"id": request_id,
"mode": message
}),
"message" => { "message" => {
let mut payload = serde_json::json!({ let mut payload = serde_json::json!({
"command": command, "command": command,
@ -513,6 +513,53 @@ pub async fn get_current_model(app: AppHandle) -> Result<String, String> {
Ok("opus".to_string()) Ok("opus".to_string())
} }
/// Agent-Modus setzen (solo, handlanger, experten, auto)
#[tauri::command]
pub async fn set_agent_mode(app: AppHandle, mode: String) -> Result<String, String> {
let valid_modes = ["solo", "handlanger", "experten", "auto"];
if !valid_modes.contains(&mode.as_str()) {
return Err(format!("Ungültiger Modus: {}. Verfügbar: {}", mode, valid_modes.join(", ")));
}
println!("🔄 Agent-Modus wechseln zu: {}", mode);
// Modus in Settings speichern
if let Some(db_state) = app.try_state::<Arc<Mutex<crate::db::Database>>>() {
let db = db_state.lock().unwrap();
let _ = db.set_setting("agent_mode", &mode);
}
// Bridge starten falls nicht aktiv
let needs_start = {
let state = app.state::<Arc<Mutex<ClaudeState>>>();
let state_guard = state.lock().unwrap();
state_guard.bridge_stdin.is_none()
};
if needs_start {
start_bridge(&app)?;
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
}
// Modus an Bridge senden
send_to_bridge(&app, "set-mode", &mode)?;
Ok(mode)
}
/// Aktuellen Agent-Modus aus Settings laden
#[tauri::command]
pub async fn get_agent_mode(app: AppHandle) -> Result<String, String> {
if let Some(db_state) = app.try_state::<Arc<Mutex<crate::db::Database>>>() {
let db = db_state.lock().unwrap();
if let Ok(Some(mode)) = db.get_setting("agent_mode") {
return Ok(mode);
}
}
// Default: solo
Ok("solo".to_string())
}
/// Modell-Info Struct /// Modell-Info Struct
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)] #[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct ModelInfo { pub struct ModelInfo {

View file

@ -2,7 +2,6 @@
// Drei-Schichten-Gedächtnis für kritischen Kontext // Drei-Schichten-Gedächtnis für kritischen Kontext
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use std::sync::{Arc, Mutex};
use tauri::{AppHandle, Manager}; use tauri::{AppHandle, Manager};
use crate::db::{Database, DbState}; use crate::db::{Database, DbState};
@ -74,7 +73,8 @@ pub struct ExtractedContext {
pub mentioned_tools: Vec<String>, pub mentioned_tools: Vec<String>,
} }
/// Wissens-Hint (Schicht 3, on-demand) /// Wissens-Hint (Schicht 3, on-demand) — für zukünftige Wissens-Hints
#[allow(dead_code)]
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct KnowledgeHint { pub struct KnowledgeHint {
pub title: String, pub title: String,
@ -122,6 +122,7 @@ impl StickyContext {
} }
/// Geschätzte Token-Anzahl /// Geschätzte Token-Anzahl
#[allow(dead_code)]
pub fn estimate_tokens(&self) -> usize { pub fn estimate_tokens(&self) -> usize {
// Grobe Schätzung: ~4 Zeichen pro Token // Grobe Schätzung: ~4 Zeichen pro Token
self.render().len() / 4 self.render().len() / 4
@ -167,6 +168,7 @@ impl ProjectContext {
} }
/// Geschätzte Token-Anzahl /// Geschätzte Token-Anzahl
#[allow(dead_code)]
pub fn estimate_tokens(&self) -> usize { pub fn estimate_tokens(&self) -> usize {
self.render().len() / 4 self.render().len() / 4
} }

View file

@ -50,6 +50,7 @@ pub struct MonitorEvent {
pub agent_id: Option<String>, pub agent_id: Option<String>,
pub session_id: Option<String>, pub session_id: Option<String>,
pub duration_ms: Option<i64>, pub duration_ms: Option<i64>,
#[allow(dead_code)]
pub error: Option<String>, pub error: Option<String>,
} }
@ -340,6 +341,7 @@ impl Database {
// ============ Memory ============ // ============ Memory ============
/// Speichert einen Memory-Eintrag /// Speichert einen Memory-Eintrag
#[allow(dead_code)]
pub fn save_memory_entry(&self, entry: &MemoryEntry) -> SqlResult<()> { pub fn save_memory_entry(&self, entry: &MemoryEntry) -> SqlResult<()> {
self.conn.execute( self.conn.execute(
"INSERT OR REPLACE INTO memory (id, category, key, value, sticky, auto_load, last_used, use_count) "INSERT OR REPLACE INTO memory (id, category, key, value, sticky, auto_load, last_used, use_count)
@ -386,6 +388,7 @@ impl Database {
} }
/// Löscht einen Memory-Eintrag /// Löscht einen Memory-Eintrag
#[allow(dead_code)]
pub fn delete_memory_entry(&self, id: &str) -> SqlResult<()> { pub fn delete_memory_entry(&self, id: &str) -> SqlResult<()> {
self.conn.execute("DELETE FROM memory WHERE id = ?1", params![id])?; self.conn.execute("DELETE FROM memory WHERE id = ?1", params![id])?;
Ok(()) Ok(())

View file

@ -41,7 +41,8 @@ pub enum PermissionAction {
Deny, Deny,
} }
/// Anfrage zur Freigabe /// Anfrage zur Freigabe (für zukünftiges Permission-Popup)
#[allow(dead_code)]
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PermissionRequest { pub struct PermissionRequest {
pub id: String, pub id: String,
@ -53,7 +54,8 @@ pub struct PermissionRequest {
pub suggested_pattern: Option<String>, pub suggested_pattern: Option<String>,
} }
/// Antwort auf Freigabe-Anfrage /// Antwort auf Freigabe-Anfrage (für zukünftiges Permission-Popup)
#[allow(dead_code)]
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PermissionResponse { pub struct PermissionResponse {
pub request_id: String, pub request_id: String,
@ -297,6 +299,7 @@ impl GuardRails {
} }
/// Session-Permissions löschen /// Session-Permissions löschen
#[allow(dead_code)]
pub fn clear_session(&mut self) { pub fn clear_session(&mut self) {
self.session_permissions.clear(); self.session_permissions.clear();
} }

View file

@ -255,6 +255,147 @@ pub async fn get_recent_knowledge(
Ok(entries) Ok(entries)
} }
/// Wissens-Hints für ein Tool/Kommando laden
/// Sucht relevante Einträge basierend auf Tool-Name und Kommando
#[tauri::command]
pub async fn get_tool_hints(
tool: String,
command: Option<String>,
context: Option<String>,
) -> Result<Vec<KnowledgeEntry>, String> {
let pool = create_pool();
let mut conn = pool.get_conn().await.map_err(|e| e.to_string())?;
// Suchbegriffe aus Tool + Command + Context zusammenbauen
let mut search_terms = vec![tool.clone()];
// Tool-spezifische Kategorien mappen
let category = match tool.as_str() {
"Bash" => {
if let Some(ref cmd) = command {
// Relevante Begriffe aus Bash-Kommando extrahieren
if cmd.contains("npm") || cmd.contains("node") { search_terms.push("npm".to_string()); }
if cmd.contains("git") { search_terms.push("git".to_string()); }
if cmd.contains("docker") { search_terms.push("docker".to_string()); }
if cmd.contains("cargo") { search_terms.push("cargo".to_string()); search_terms.push("rust".to_string()); }
if cmd.contains("dolibarr") { search_terms.push("dolibarr".to_string()); }
if cmd.contains("mysql") { search_terms.push("mysql".to_string()); search_terms.push("sql".to_string()); }
}
None // Keine spezifische Kategorie
}
"Read" | "Write" | "Edit" => {
if let Some(ref cmd) = command {
// Aus Dateipfad relevante Begriffe extrahieren
if cmd.contains("dolibarr") { search_terms.push("dolibarr".to_string()); }
if cmd.contains(".php") { search_terms.push("php".to_string()); }
if cmd.contains(".rs") { search_terms.push("rust".to_string()); }
if cmd.contains(".ts") || cmd.contains(".svelte") { search_terms.push("svelte".to_string()); }
}
None
}
_ => None,
};
// Optional: Context-Begriffe hinzufügen
if let Some(ref ctx) = context {
// Wichtige Begriffe aus Context extrahieren (max 3)
for word in ctx.split_whitespace().take(10) {
if word.len() > 4 && !search_terms.contains(&word.to_lowercase()) {
search_terms.push(word.to_lowercase());
}
}
}
// Suchquery bauen
let query_string = search_terms.join(" ");
// Suche mit Volltext und optionalem Kategorie-Filter
let entries: Vec<KnowledgeEntry> = if let Some(cat) = category {
conn.exec_map(
r#"SELECT id, category, title, content, tags, priority, status,
related_ids, source, created_at, updated_at
FROM knowledge
WHERE status = 'active'
AND category = ?
AND MATCH(title, content, tags) AGAINST(? IN NATURAL LANGUAGE MODE)
ORDER BY priority ASC, updated_at DESC
LIMIT 3"#,
(&cat, &query_string),
|(id, category, title, content, tags, priority, status, related_ids, source, created_at, updated_at):
(i64, String, String, String, Option<String>, i32, String, Option<String>, Option<String>, String, String)| {
KnowledgeEntry {
id, category, title, content, tags, priority, status,
related_ids, source, created_at, updated_at,
}
}
).await.map_err(|e| e.to_string())?
} else {
conn.exec_map(
r#"SELECT id, category, title, content, tags, priority, status,
related_ids, source, created_at, updated_at
FROM knowledge
WHERE status = 'active'
AND MATCH(title, content, tags) AGAINST(? IN NATURAL LANGUAGE MODE)
ORDER BY priority ASC, updated_at DESC
LIMIT 3"#,
(&query_string,),
|(id, category, title, content, tags, priority, status, related_ids, source, created_at, updated_at):
(i64, String, String, String, Option<String>, i32, String, Option<String>, Option<String>, String, String)| {
KnowledgeEntry {
id, category, title, content, tags, priority, status,
related_ids, source, created_at, updated_at,
}
}
).await.map_err(|e| e.to_string())?
};
drop(conn);
pool.disconnect().await.map_err(|e| e.to_string())?;
if !entries.is_empty() {
println!("💡 {} Wissens-Hints geladen für Tool '{}': {:?}",
entries.len(),
tool,
entries.iter().map(|e| &e.title).collect::<Vec<_>>()
);
}
Ok(entries)
}
/// Wissens-Hints als formatierter Kontext-Block
#[tauri::command]
pub async fn format_tool_hints(
tool: String,
command: Option<String>,
context: Option<String>,
) -> Result<String, String> {
let entries = get_tool_hints(tool.clone(), command, context).await?;
if entries.is_empty() {
return Ok(String::new());
}
let mut hints = Vec::new();
hints.push("<knowledge-hints>".to_string());
hints.push(format!("Relevante Informationen für {}:", tool));
for entry in entries {
hints.push(format!("\n**{}** ({})", entry.title, entry.category));
// Content auf ~300 Zeichen kürzen
let content = if entry.content.len() > 300 {
format!("{}...", &entry.content[..300])
} else {
entry.content
};
hints.push(content);
}
hints.push("</knowledge-hints>".to_string());
Ok(hints.join("\n"))
}
/// Verbindung zur Wissensbasis testen /// Verbindung zur Wissensbasis testen
#[tauri::command] #[tauri::command]
pub async fn test_knowledge_connection() -> Result<String, String> { pub async fn test_knowledge_connection() -> Result<String, String> {

View file

@ -16,6 +16,7 @@ mod guard;
mod knowledge; mod knowledge;
mod memory; mod memory;
mod session; mod session;
mod voice;
/// Initialisiert die App /// Initialisiert die App
#[cfg_attr(mobile, tauri::mobile_entry_point)] #[cfg_attr(mobile, tauri::mobile_entry_point)]
@ -32,6 +33,8 @@ pub fn run() {
claude::set_model, claude::set_model,
claude::get_available_models, claude::get_available_models,
claude::get_current_model, claude::get_current_model,
claude::set_agent_mode,
claude::get_agent_mode,
claude::init_sticky_context, claude::init_sticky_context,
// Gedächtnis-System // Gedächtnis-System
memory::load_memory, memory::load_memory,
@ -82,6 +85,8 @@ pub fn run() {
knowledge::get_knowledge_categories, knowledge::get_knowledge_categories,
knowledge::get_recent_knowledge, knowledge::get_recent_knowledge,
knowledge::test_knowledge_connection, knowledge::test_knowledge_connection,
knowledge::get_tool_hints,
knowledge::format_tool_hints,
// Context-Management // Context-Management
context::get_sticky_context, context::get_sticky_context,
context::set_sticky_context, context::set_sticky_context,
@ -91,6 +96,11 @@ pub fn run() {
context::log_context_failure, context::log_context_failure,
context::get_full_context, context::get_full_context,
context::list_sticky_context, context::list_sticky_context,
// Voice-Interface
voice::transcribe_audio,
voice::text_to_speech,
voice::check_voice_availability,
voice::get_tts_voices,
]) ])
.setup(|app| { .setup(|app| {
let handle = app.handle().clone(); let handle = app.handle().clone();

View file

@ -32,67 +32,7 @@ pub struct MemoryEntry {
pub use_count: u32, pub use_count: u32,
} }
/// Das Gedächtnis-System // MemorySystem Struct entfernt - Dead Code, Funktionalität läuft über Tauri-Commands
#[derive(Debug, Default)]
pub struct MemorySystem {
entries: HashMap<String, MemoryEntry>,
loaded_from_db: bool,
}
impl MemorySystem {
pub fn new() -> Self {
Self {
entries: HashMap::new(),
loaded_from_db: false,
}
}
/// Fügt einen Eintrag hinzu
pub fn add(&mut self, entry: MemoryEntry) {
self.entries.insert(entry.id.clone(), entry);
}
/// Holt einen Eintrag
pub fn get(&self, id: &str) -> Option<&MemoryEntry> {
self.entries.get(id)
}
/// Holt alle Einträge einer Kategorie
pub fn get_by_category(&self, category: ContextCategory) -> Vec<&MemoryEntry> {
self.entries
.values()
.filter(|e| e.category == category)
.collect()
}
/// Holt alle Sticky-Einträge (für Kontext-Injection)
pub fn get_sticky_context(&self) -> Vec<&MemoryEntry> {
self.entries.values().filter(|e| e.sticky).collect()
}
/// Holt alle Auto-Load-Einträge
pub fn get_auto_load(&self) -> Vec<&MemoryEntry> {
self.entries.values().filter(|e| e.auto_load).collect()
}
/// Statistiken
pub fn stats(&self) -> MemoryStats {
MemoryStats {
total: self.entries.len(),
sticky: self.entries.values().filter(|e| e.sticky).count(),
by_category: self.count_by_category(),
}
}
fn count_by_category(&self) -> HashMap<String, usize> {
let mut counts = HashMap::new();
for entry in self.entries.values() {
let cat = format!("{:?}", entry.category);
*counts.entry(cat).or_insert(0) += 1;
}
counts
}
}
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
pub struct MemoryStats { pub struct MemoryStats {

188
src-tauri/src/voice.rs Normal file
View file

@ -0,0 +1,188 @@
// Claude Desktop — Voice Interface
// Speech-to-Text mit Whisper API, Text-to-Speech mit OpenAI TTS
use base64::{Engine as _, engine::general_purpose::STANDARD as BASE64};
use serde::{Deserialize, Serialize};
use std::io::Write;
/// Whisper API Konfiguration
const OPENAI_API_URL: &str = "https://api.openai.com/v1/audio/transcriptions";
const TTS_API_URL: &str = "https://api.openai.com/v1/audio/speech";
/// Transkriptions-Ergebnis
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TranscriptionResult {
pub text: String,
pub language: Option<String>,
pub duration: Option<f64>,
}
/// TTS-Stimmen
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum TtsVoice {
Alloy,
Echo,
Fable,
Onyx,
Nova,
Shimmer,
}
impl TtsVoice {
fn as_str(&self) -> &str {
match self {
TtsVoice::Alloy => "alloy",
TtsVoice::Echo => "echo",
TtsVoice::Fable => "fable",
TtsVoice::Onyx => "onyx",
TtsVoice::Nova => "nova",
TtsVoice::Shimmer => "shimmer",
}
}
}
/// Holt den OpenAI API Key aus Umgebungsvariable oder Settings
fn get_openai_key() -> Result<String, String> {
// Erst Umgebungsvariable prüfen
if let Ok(key) = std::env::var("OPENAI_API_KEY") {
if !key.is_empty() {
return Ok(key);
}
}
// Alternativ: Aus Settings laden (TODO)
Err("OpenAI API Key nicht gefunden. Setze OPENAI_API_KEY Umgebungsvariable.".to_string())
}
/// Transkribiert Audio mit OpenAI Whisper API
#[tauri::command]
pub async fn transcribe_audio(
audio_base64: String,
format: String,
) -> Result<String, String> {
let api_key = get_openai_key()?;
// Base64 dekodieren
let audio_bytes = BASE64.decode(&audio_base64)
.map_err(|e| format!("Base64-Dekodierung fehlgeschlagen: {}", e))?;
// Temporäre Datei erstellen (Whisper API braucht Datei-Upload)
let temp_dir = std::env::temp_dir();
let temp_file = temp_dir.join(format!("whisper_audio_{}.{}", uuid::Uuid::new_v4(), format));
let mut file = std::fs::File::create(&temp_file)
.map_err(|e| format!("Temp-Datei erstellen fehlgeschlagen: {}", e))?;
file.write_all(&audio_bytes)
.map_err(|e| format!("Audio schreiben fehlgeschlagen: {}", e))?;
drop(file);
// Multipart-Request an Whisper API
let client = reqwest::Client::new();
let file_part = reqwest::multipart::Part::file(&temp_file)
.await
.map_err(|e| format!("Datei lesen fehlgeschlagen: {}", e))?
.file_name(format!("audio.{}", format))
.mime_str(&format!("audio/{}", format))
.map_err(|e| format!("MIME-Type fehlgeschlagen: {}", e))?;
let form = reqwest::multipart::Form::new()
.part("file", file_part)
.text("model", "whisper-1")
.text("language", "de") // Deutsch priorisieren
.text("response_format", "json");
let response = client
.post(OPENAI_API_URL)
.bearer_auth(&api_key)
.multipart(form)
.send()
.await
.map_err(|e| format!("API-Request fehlgeschlagen: {}", e))?;
// Temp-Datei löschen
let _ = std::fs::remove_file(&temp_file);
if !response.status().is_success() {
let error_text = response.text().await.unwrap_or_default();
return Err(format!("Whisper API Fehler: {}", error_text));
}
// Response parsen
#[derive(Deserialize)]
struct WhisperResponse {
text: String,
}
let result: WhisperResponse = response.json().await
.map_err(|e| format!("Response parsen fehlgeschlagen: {}", e))?;
println!("🎤 Transkription: \"{}\"", result.text);
Ok(result.text)
}
/// Text-to-Speech mit OpenAI TTS API
#[tauri::command]
pub async fn text_to_speech(
text: String,
voice: Option<String>,
) -> Result<String, String> {
let api_key = get_openai_key()?;
let voice_name = voice.unwrap_or_else(|| "nova".to_string());
let client = reqwest::Client::new();
let body = serde_json::json!({
"model": "tts-1",
"input": text,
"voice": voice_name,
"response_format": "mp3"
});
let response = client
.post(TTS_API_URL)
.bearer_auth(&api_key)
.json(&body)
.send()
.await
.map_err(|e| format!("TTS API-Request fehlgeschlagen: {}", e))?;
if !response.status().is_success() {
let error_text = response.text().await.unwrap_or_default();
return Err(format!("TTS API Fehler: {}", error_text));
}
// Audio-Bytes als Base64 zurückgeben
let audio_bytes = response.bytes().await
.map_err(|e| format!("Audio lesen fehlgeschlagen: {}", e))?;
let audio_base64 = BASE64.encode(&audio_bytes);
println!("🔊 TTS generiert: {} Zeichen → {} Bytes Audio", text.len(), audio_bytes.len());
Ok(audio_base64)
}
/// Prüft ob Voice-Features verfügbar sind (API Key vorhanden)
#[tauri::command]
pub async fn check_voice_availability() -> Result<bool, String> {
match get_openai_key() {
Ok(_) => Ok(true),
Err(_) => Ok(false),
}
}
/// Verfügbare TTS-Stimmen
#[tauri::command]
pub async fn get_tts_voices() -> Result<Vec<serde_json::Value>, String> {
Ok(vec![
serde_json::json!({ "id": "alloy", "name": "Alloy", "description": "Neutral, ausgewogen" }),
serde_json::json!({ "id": "echo", "name": "Echo", "description": "Männlich, warm" }),
serde_json::json!({ "id": "fable", "name": "Fable", "description": "Expressiv, britisch" }),
serde_json::json!({ "id": "onyx", "name": "Onyx", "description": "Tief, autoritär" }),
serde_json::json!({ "id": "nova", "name": "Nova", "description": "Weiblich, freundlich" }),
serde_json::json!({ "id": "shimmer", "name": "Shimmer", "description": "Weiblich, sanft" }),
])
}

View file

@ -98,6 +98,22 @@
const TOKEN_WARNING_THRESHOLD = 40000; // ~40k Token = Warnung zeigen const TOKEN_WARNING_THRESHOLD = 40000; // ~40k Token = Warnung zeigen
const KEEP_LAST_MESSAGES = 30; const KEEP_LAST_MESSAGES = 30;
// Voice-Interface State
let isRecording = $state(false);
let audioLevel = $state(0);
let liveTranscript = $state('');
let mediaRecorder: MediaRecorder | null = null;
let audioContext: AudioContext | null = null;
let analyser: AnalyserNode | null = null;
let audioChunks: Blob[] = [];
let levelAnimationFrame: number | null = null;
// VAD (Voice Activity Detection) — automatisches Stoppen nach Sprechpause
const VAD_SILENCE_THRESHOLD = 15; // Pegel unter dem als Stille gilt
const VAD_SILENCE_DURATION = 1500; // ms Stille vor Auto-Stopp
let silenceStartTime: number | null = null;
let vadEnabled = $state(true); // VAD ein/aus
async function scrollToBottom() { async function scrollToBottom() {
await tick(); await tick();
if (messagesContainer) { if (messagesContainer) {
@ -163,13 +179,31 @@
showCompactingDialog = false; showCompactingDialog = false;
try { try {
// Zuerst: Kritischen Kontext extrahieren und archivieren
const currentMessages = get(messages);
const messagesJson = JSON.stringify(currentMessages.map(m => ({
role: m.role,
content: m.content
})));
try {
const extracted = await invoke('extract_context_before_compacting', {
sessionId,
messagesJson
});
console.log('📦 Kontext extrahiert vor Compacting:', extracted);
} catch (extractErr) {
console.warn('Context-Extraction fehlgeschlagen (nicht kritisch):', extractErr);
}
// Dann: Compacting durchführen
const compacted: number = await invoke('compact_session', { const compacted: number = await invoke('compact_session', {
sessionId, sessionId,
keepLast: KEEP_LAST_MESSAGES keepLast: KEEP_LAST_MESSAGES
}); });
if (compacted > 0) { if (compacted > 0) {
addMessage('system', `📦 Compacting: ${compacted} ältere Nachrichten wurden zusammengefasst. Die letzten ${KEEP_LAST_MESSAGES} bleiben erhalten.`); addMessage('system', `📦 Compacting: ${compacted} ältere Nachrichten wurden zusammengefasst. Die letzten ${KEEP_LAST_MESSAGES} bleiben erhalten. Kritischer Kontext wurde archiviert.`);
} }
} catch (err) { } catch (err) {
console.error('Compacting fehlgeschlagen:', err); console.error('Compacting fehlgeschlagen:', err);
@ -182,8 +216,149 @@
// Warnung für diese Session nicht erneut zeigen // Warnung für diese Session nicht erneut zeigen
} }
// ============ Voice Interface ============
async function startRecording() {
try {
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
// Audio-Analyse für Pegel-Anzeige
audioContext = new AudioContext();
analyser = audioContext.createAnalyser();
const source = audioContext.createMediaStreamSource(stream);
source.connect(analyser);
analyser.fftSize = 256;
// Pegel-Animation starten
updateAudioLevel();
// MediaRecorder für Aufnahme
mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
audioChunks = [];
mediaRecorder.ondataavailable = (event) => {
if (event.data.size > 0) {
audioChunks.push(event.data);
}
};
mediaRecorder.onstop = async () => {
// Aufnahme beendet — Audio an Whisper senden
const audioBlob = new Blob(audioChunks, { type: 'audio/webm' });
await transcribeAudio(audioBlob);
// Stream stoppen
stream.getTracks().forEach(track => track.stop());
};
mediaRecorder.start(100); // Chunks alle 100ms
isRecording = true;
liveTranscript = '';
silenceStartTime = null; // VAD-Timer zurücksetzen
console.log('🎤 Aufnahme gestartet' + (vadEnabled ? ' (VAD aktiv)' : ''));
} catch (err) {
console.error('Mikrofon-Zugriff fehlgeschlagen:', err);
addMessage('system', `⚠️ Mikrofon-Zugriff fehlgeschlagen: ${err}`);
}
}
function stopRecording() {
if (mediaRecorder && mediaRecorder.state !== 'inactive') {
mediaRecorder.stop();
}
// Pegel-Animation stoppen
if (levelAnimationFrame) {
cancelAnimationFrame(levelAnimationFrame);
levelAnimationFrame = null;
}
// Audio-Context schließen
if (audioContext) {
audioContext.close();
audioContext = null;
}
isRecording = false;
audioLevel = 0;
console.log('🎤 Aufnahme gestoppt');
}
function updateAudioLevel() {
if (!analyser || !isRecording) return;
const dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(dataArray);
// Durchschnittspegel berechnen
const average = dataArray.reduce((a, b) => a + b, 0) / dataArray.length;
audioLevel = Math.min(100, average * 1.5); // Normalisieren auf 0-100
// VAD: Stille erkennen und nach Pause automatisch stoppen
if (vadEnabled && audioChunks.length > 0) {
if (audioLevel < VAD_SILENCE_THRESHOLD) {
// Stille beginnt oder dauert an
if (silenceStartTime === null) {
silenceStartTime = Date.now();
} else if (Date.now() - silenceStartTime > VAD_SILENCE_DURATION) {
// Lange genug still — Aufnahme automatisch stoppen
console.log('🔇 VAD: Stille erkannt, stoppe Aufnahme');
stopRecording();
return;
}
} else {
// Sprache erkannt — Stille-Timer zurücksetzen
silenceStartTime = null;
}
}
levelAnimationFrame = requestAnimationFrame(updateAudioLevel);
}
async function transcribeAudio(audioBlob: Blob) {
liveTranscript = 'Transkribiere...';
try {
// Audio als Base64 für Tauri-Command
const arrayBuffer = await audioBlob.arrayBuffer();
const base64 = btoa(String.fromCharCode(...new Uint8Array(arrayBuffer)));
// An Backend senden für Whisper-Transkription
const transcript: string = await invoke('transcribe_audio', {
audioBase64: base64,
format: 'webm'
});
if (transcript && transcript.trim()) {
// Transkript in Input-Feld einfügen
$currentInput = ($currentInput + ' ' + transcript).trim();
liveTranscript = '';
console.log('📝 Transkript:', transcript);
} else {
liveTranscript = '';
}
} catch (err) {
console.error('Transkription fehlgeschlagen:', err);
liveTranscript = `Fehler: ${err}`;
// Nach 3s ausblenden
setTimeout(() => { liveTranscript = ''; }, 3000);
}
}
function toggleRecording() {
if (isRecording) {
stopRecording();
} else {
startRecording();
}
}
onDestroy(() => { onDestroy(() => {
unsubscribe(); unsubscribe();
// Voice-Aufnahme stoppen falls aktiv
if (isRecording) {
stopRecording();
}
}); });
// Globale Keyboard Shortcuts // Globale Keyboard Shortcuts
@ -512,18 +687,39 @@
</div> </div>
<div class="chat-input"> <div class="chat-input">
{#if liveTranscript}
<div class="live-transcript">
<span class="transcript-icon">🎤</span>
<span class="transcript-text">{liveTranscript}</span>
</div>
{/if}
<textarea <textarea
bind:this={inputTextarea} bind:this={inputTextarea}
bind:value={$currentInput} bind:value={$currentInput}
on:keydown={handleKeydown} on:keydown={handleKeydown}
placeholder="Nachricht eingeben... (Ctrl+K = Focus, Ctrl+Enter = Senden)" placeholder="Nachricht eingeben... (Ctrl+K = Focus, Ctrl+Enter = Senden)"
disabled={$isProcessing} disabled={$isProcessing || isRecording}
rows="3" rows="3"
></textarea> ></textarea>
<div class="input-buttons">
<button
class="mic-button"
class:recording={isRecording}
on:click={toggleRecording}
disabled={$isProcessing}
title={isRecording ? 'Aufnahme stoppen' : 'Spracheingabe starten'}
>
{#if isRecording}
<span class="mic-icon recording"></span>
<div class="audio-level" style="height: {audioLevel}%"></div>
{:else}
<span class="mic-icon">🎤</span>
{/if}
</button>
<button <button
class="send-button" class="send-button"
on:click={sendMessage} on:click={sendMessage}
disabled={!$currentInput.trim() || $isProcessing} disabled={!$currentInput.trim() || $isProcessing || isRecording}
> >
{#if $isProcessing} {#if $isProcessing}
@ -533,6 +729,7 @@
</button> </button>
</div> </div>
</div> </div>
</div>
<!-- "Das merken" Dialog --> <!-- "Das merken" Dialog -->
{#if rememberDialogOpen} {#if rememberDialogOpen}
@ -1048,6 +1245,7 @@
padding: var(--spacing-sm) var(--spacing-md); padding: var(--spacing-sm) var(--spacing-md);
background: var(--bg-secondary); background: var(--bg-secondary);
border-top: 1px solid var(--bg-tertiary); border-top: 1px solid var(--bg-tertiary);
position: relative;
} }
.chat-input textarea { .chat-input textarea {
@ -1084,6 +1282,100 @@
transform: none; transform: none;
} }
/* Voice Interface */
.input-buttons {
display: flex;
flex-direction: column;
gap: var(--spacing-xs);
}
.mic-button {
width: 48px;
height: 48px;
display: flex;
align-items: center;
justify-content: center;
background: var(--bg-secondary);
border: 1px solid var(--border);
border-radius: var(--radius-md);
font-size: 1.25rem;
cursor: pointer;
transition: all 0.2s ease;
position: relative;
overflow: hidden;
}
.mic-button:hover:not(:disabled) {
background: var(--bg-tertiary);
border-color: var(--accent);
}
.mic-button.recording {
background: rgba(239, 68, 68, 0.15);
border-color: #ef4444;
animation: pulse-recording 1.5s ease-in-out infinite;
}
@keyframes pulse-recording {
0%, 100% { box-shadow: 0 0 0 0 rgba(239, 68, 68, 0.4); }
50% { box-shadow: 0 0 0 8px rgba(239, 68, 68, 0); }
}
.mic-icon {
z-index: 1;
}
.mic-icon.recording {
color: #ef4444;
}
.audio-level {
position: absolute;
bottom: 0;
left: 0;
right: 0;
background: linear-gradient(to top, rgba(239, 68, 68, 0.4), rgba(239, 68, 68, 0.1));
transition: height 0.05s ease-out;
pointer-events: none;
}
.mic-button:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.live-transcript {
position: absolute;
top: -32px;
left: 0;
right: 0;
display: flex;
align-items: center;
gap: var(--spacing-xs);
padding: var(--spacing-xs) var(--spacing-sm);
background: rgba(239, 68, 68, 0.1);
border: 1px solid rgba(239, 68, 68, 0.3);
border-radius: var(--radius-sm);
font-size: 0.75rem;
color: #ef4444;
}
.transcript-icon {
animation: pulse 1s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.transcript-text {
flex: 1;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
/* Modal Styles */ /* Modal Styles */
.modal-backdrop { .modal-backdrop {
position: fixed; position: fixed;

View file

@ -1,6 +1,7 @@
<script lang="ts"> <script lang="ts">
import { onMount } from 'svelte'; import { onMount } from 'svelte';
import { invoke } from '@tauri-apps/api/core'; import { invoke } from '@tauri-apps/api/core';
import { activeKnowledgeHints } from '$lib/stores/app';
// Typen für Wissensbasis // Typen für Wissensbasis
interface KnowledgeEntry { interface KnowledgeEntry {
@ -242,6 +243,23 @@
{/each} {/each}
</div> </div>
<!-- Aktive Wissens-Hints (bei Tool-Aufrufen) -->
{#if $activeKnowledgeHints.length > 0}
<div class="active-hints">
<div class="hints-header">
<span class="hints-icon">💡</span>
<span class="hints-title">Aktive Hints</span>
<button class="btn-clear-hints" onclick={() => activeKnowledgeHints.set([])}>✕</button>
</div>
{#each $activeKnowledgeHints as hint}
<div class="hint-item">
<div class="hint-title">{categoryIcons[hint.category] || '📦'} {hint.title}</div>
<div class="hint-preview">{truncate(hint.content, 150)}</div>
</div>
{/each}
</div>
{/if}
<!-- Ergebnisliste --> <!-- Ergebnisliste -->
<div class="results-list"> <div class="results-list">
{#if results.length === 0} {#if results.length === 0}
@ -790,4 +808,79 @@
background: var(--bg-tertiary); background: var(--bg-tertiary);
color: var(--text-secondary); color: var(--text-secondary);
} }
/* Aktive Wissens-Hints */
.active-hints {
background: linear-gradient(135deg, rgba(250, 204, 21, 0.1) 0%, rgba(234, 179, 8, 0.05) 100%);
border: 1px solid rgba(250, 204, 21, 0.3);
border-radius: var(--radius-md);
padding: var(--spacing-sm);
margin-bottom: var(--spacing-md);
}
.hints-header {
display: flex;
align-items: center;
gap: var(--spacing-xs);
margin-bottom: var(--spacing-sm);
}
.hints-icon {
font-size: 1rem;
}
.hints-title {
flex: 1;
font-size: 0.75rem;
font-weight: 600;
color: var(--text-primary);
text-transform: uppercase;
letter-spacing: 0.5px;
}
.btn-clear-hints {
width: 20px;
height: 20px;
display: flex;
align-items: center;
justify-content: center;
background: transparent;
border-radius: var(--radius-sm);
font-size: 0.75rem;
color: var(--text-secondary);
opacity: 0.6;
}
.btn-clear-hints:hover {
opacity: 1;
background: rgba(0, 0, 0, 0.1);
}
.hint-item {
padding: var(--spacing-xs) var(--spacing-sm);
background: rgba(255, 255, 255, 0.5);
border-radius: var(--radius-sm);
margin-bottom: var(--spacing-xs);
}
.hint-item:last-child {
margin-bottom: 0;
}
.hint-title {
font-size: 0.8rem;
font-weight: 500;
color: var(--text-primary);
margin-bottom: 2px;
}
.hint-preview {
font-size: 0.7rem;
color: var(--text-secondary);
line-height: 1.4;
}
:global(.dark) .hint-item {
background: rgba(0, 0, 0, 0.2);
}
</style> </style>

View file

@ -1,7 +1,7 @@
<script lang="ts"> <script lang="ts">
import { onMount } from 'svelte'; import { onMount } from 'svelte';
import { invoke } from '@tauri-apps/api/core'; import { invoke } from '@tauri-apps/api/core';
import { currentModel } from '$lib/stores/app'; import { currentModel, agentMode, type AgentMode } from '$lib/stores/app';
interface ModelInfo { interface ModelInfo {
id: string; id: string;
@ -28,18 +28,68 @@
opus: { input: 15, output: 75 }, opus: { input: 15, output: 75 },
}; };
// Agent-Modi
interface AgentModeInfo {
id: AgentMode;
name: string;
icon: string;
description: string;
}
const agentModes: AgentModeInfo[] = [
{
id: 'solo',
name: 'Solo',
icon: '🎯',
description: 'Main Agent macht alles selbst. Schnell für einfache Aufgaben.'
},
{
id: 'handlanger',
name: 'Handlanger',
icon: '👷',
description: 'Main denkt, Sub-Agents führen exakt aus. Gut für koordinierte Aufgaben.'
},
{
id: 'experten',
name: 'Experten',
icon: '🧠',
description: 'Jeder Agent denkt selbst. Ideal für komplexe, parallelisierbare Aufgaben.'
},
{
id: 'auto',
name: 'Auto',
icon: '🔄',
description: 'Modus wird automatisch basierend auf Aufgaben-Komplexität gewählt.'
}
];
async function loadSettings() { async function loadSettings() {
try { try {
availableModels = await invoke('get_available_models'); availableModels = await invoke('get_available_models');
const current: string = await invoke('get_current_model'); const current: string = await invoke('get_current_model');
selectedModel = current; selectedModel = current;
$currentModel = current; $currentModel = current;
// Agent-Modus laden
const currentMode: string = await invoke('get_agent_mode');
$agentMode = currentMode as AgentMode;
} catch (err) { } catch (err) {
console.error('Fehler beim Laden:', err); console.error('Fehler beim Laden:', err);
} }
loading = false; loading = false;
} }
async function changeMode(modeId: AgentMode) {
if (modeId === $agentMode) return;
try {
await invoke('set_agent_mode', { mode: modeId });
$agentMode = modeId;
} catch (err) {
console.error('Fehler beim Modus-Wechsel:', err);
}
}
async function changeModel(modelId: string) { async function changeModel(modelId: string) {
if (modelId === selectedModel) return; if (modelId === selectedModel) return;
saving = true; saving = true;
@ -106,7 +156,44 @@
</div> </div>
</section> </section>
<!-- Weitere Einstellungen (Platzhalter) --> <!-- Agent-Modus -->
<section class="settings-section">
<h3>🤝 Agent-Modus</h3>
<p class="section-hint">Wie sollen komplexe Aufgaben bearbeitet werden?</p>
<div class="mode-list">
{#each agentModes as mode}
<button
class="mode-card"
class:selected={$agentMode === mode.id}
on:click={() => $agentMode = mode.id}
>
<div class="mode-header">
<span class="mode-icon">{mode.icon}</span>
<span class="mode-name">{mode.name}</span>
{#if $agentMode === mode.id}
<span class="mode-active">✓ Aktiv</span>
{/if}
</div>
<div class="mode-description">{mode.description}</div>
</button>
{/each}
</div>
<div class="mode-info">
{#if $agentMode === 'solo'}
<p>💡 <strong>Solo</strong> ist ideal für schnelle, einfache Aufgaben wie Typo-Fixes oder Code-Erklärungen.</p>
{:else if $agentMode === 'handlanger'}
<p>💡 <strong>Handlanger</strong> spart Context: Sub-Agents bekommen nur die nötigen Infos und liefern kompakte Zusammenfassungen.</p>
{:else if $agentMode === 'experten'}
<p>💡 <strong>Experten</strong> für komplexe Features: Research, Implement, Test und Review arbeiten parallel.</p>
{:else}
<p>💡 <strong>Auto</strong> analysiert die Aufgabe und wählt den passenden Modus automatisch.</p>
{/if}
</div>
</section>
<!-- Weitere Einstellungen -->
<section class="settings-section"> <section class="settings-section">
<h3>🎨 Darstellung</h3> <h3>🎨 Darstellung</h3>
<div class="setting-row"> <div class="setting-row">
@ -274,4 +361,80 @@
font-style: italic; font-style: italic;
opacity: 0.6; opacity: 0.6;
} }
/* Agent-Modus Karten */
.mode-list {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: var(--spacing-sm);
}
.mode-card {
text-align: left;
padding: var(--spacing-sm);
background: var(--bg-secondary);
border: 2px solid var(--bg-tertiary);
border-radius: var(--radius-md);
cursor: pointer;
transition: all 0.2s ease;
}
.mode-card:hover {
background: var(--bg-tertiary);
border-color: var(--text-secondary);
}
.mode-card.selected {
border-color: var(--accent);
background: rgba(233, 69, 96, 0.1);
}
.mode-header {
display: flex;
align-items: center;
gap: var(--spacing-xs);
margin-bottom: var(--spacing-xs);
}
.mode-icon {
font-size: 1rem;
}
.mode-name {
font-weight: 600;
font-size: 0.85rem;
flex: 1;
}
.mode-active {
font-size: 0.6rem;
padding: 1px 4px;
background: var(--accent);
color: white;
border-radius: var(--radius-sm);
}
.mode-description {
font-size: 0.7rem;
color: var(--text-secondary);
line-height: 1.3;
}
.mode-info {
margin-top: var(--spacing-md);
padding: var(--spacing-sm);
background: rgba(59, 130, 246, 0.1);
border: 1px solid rgba(59, 130, 246, 0.2);
border-radius: var(--radius-sm);
font-size: 0.75rem;
line-height: 1.4;
}
.mode-info p {
margin: 0;
}
.mode-info strong {
color: var(--accent);
}
</style> </style>

View file

@ -58,6 +58,10 @@ export const selectedAgentId = writable<string | null>(null);
export const currentModel = writable(''); export const currentModel = writable('');
export const currentSessionId = writable<string | null>(null); export const currentSessionId = writable<string | null>(null);
// Agent-Modus für Multi-Agent-Architektur
export type AgentMode = 'solo' | 'handlanger' | 'experten' | 'auto';
export const agentMode = writable<AgentMode>('solo');
// Session-Statistiken (kumuliert) // Session-Statistiken (kumuliert)
export const sessionStats = writable({ export const sessionStats = writable({
totalTokensIn: 0, totalTokensIn: 0,
@ -79,6 +83,18 @@ export interface StickyContextInfo {
export const stickyContextInfo = writable<StickyContextInfo | null>(null); export const stickyContextInfo = writable<StickyContextInfo | null>(null);
// Wissens-Hints (aus claude-db)
export interface KnowledgeHint {
id: number;
category: string;
title: string;
content: string;
tags?: string;
priority: number;
}
export const activeKnowledgeHints = writable<KnowledgeHint[]>([]);
// Abgeleitete Stores // Abgeleitete Stores
export const activeAgents = derived(agents, ($agents) => export const activeAgents = derived(agents, ($agents) =>
$agents.filter((a) => a.status === 'active') $agents.filter((a) => a.status === 'active')

View file

@ -22,9 +22,11 @@ import {
messageToDb, messageToDb,
addMonitorEvent, addMonitorEvent,
loadMonitorEventsFromDb, loadMonitorEventsFromDb,
activeKnowledgeHints,
type Message, type Message,
type Agent, type Agent,
type MonitorEventType type MonitorEventType,
type KnowledgeHint
} from './app'; } from './app';
// Event-Typen vom Backend // Event-Typen vom Backend
@ -192,7 +194,7 @@ export async function initEventListeners(): Promise<void> {
// Tool Start // Tool Start
listeners.push( listeners.push(
await listen<ToolEvent>('tool-start', (event) => { await listen<ToolEvent>('tool-start', async (event) => {
const { tool, input } = event.payload; const { tool, input } = event.payload;
console.log('🔧 Tool Start:', tool); console.log('🔧 Tool Start:', tool);
@ -203,6 +205,32 @@ export async function initEventListeners(): Promise<void> {
} }
return ags; return ags;
}); });
// Wissens-Hints aus claude-db laden
try {
// Command aus Input extrahieren (je nach Tool)
let command: string | undefined;
if (input && typeof input === 'object') {
// Bash: command, Read/Write/Edit: file_path
command = (input as Record<string, unknown>).command as string
|| (input as Record<string, unknown>).file_path as string
|| undefined;
}
const hints = await invoke<KnowledgeHint[]>('get_tool_hints', {
tool: tool || 'unknown',
command,
context: undefined
});
if (hints && hints.length > 0) {
activeKnowledgeHints.set(hints);
console.log('💡 Wissens-Hints geladen:', hints.map(h => h.title));
}
} catch (err) {
// Fehler beim Laden ignorieren — Hints sind optional
console.debug('Wissens-Hints nicht verfügbar:', err);
}
}) })
); );