LLMAgent: Autonomous Task Scheduling
Tian AI's LLMAgent module implements an autonomous agent system that plans, executes, and reflects on complex multi-step tasks, all powered by a local LLM with no cloud dependency.
Core Architecture
- Task Planner: Breaks down goals into sub-tasks
- Dependency Sort: Topological ordering of tasks
- Safety Whitelist: Prevents dangerous operations
- Self-Reflection: Post-task analysis and improvement
Task Planning with LLM
The planner uses structured prompting to decompose goals:
def plan_tasks(goal):
    prompt = (
        f"Goal: {goal}. Break this into 3-5 sequential steps. "
        "For each step, provide: 1. Action type, 2. Parameters, "
        "3. Expected output. Output as JSON list."
    )
    response = llm.generate(prompt, temperature=0.2)
    return parse_task_list(response)
The low temperature (0.2) keeps plans consistent across runs, though sampling is still not strictly deterministic.
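The helper `parse_task_list` is not shown above; a minimal sketch, assuming the model returns a JSON array of step objects with the keys requested in the prompt (the exact key names here are illustrative):

```python
import json
import re

def parse_task_list(response: str):
    """Extract the first JSON array from the model output and validate it.

    Assumes each step is an object with 'action', 'params', and
    'expected_output' keys, mirroring the prompt in plan_tasks.
    """
    match = re.search(r"\[.*\]", response, re.DOTALL)
    if match is None:
        raise ValueError("no JSON list found in LLM response")
    tasks = json.loads(match.group(0))
    required = {"action", "params", "expected_output"}
    for i, task in enumerate(tasks):
        missing = required - task.keys()
        if missing:
            raise ValueError(f"step {i} is missing keys: {missing}")
    return tasks
```

Scanning for the first bracketed span tolerates models that wrap the JSON in prose; strict validation then fails fast on malformed plans instead of letting them reach execution.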
Safety Whitelist System
Security is paramount for autonomous agents. Every command is checked against a whitelist:
SAFE_COMMANDS = {
    'read': ['cat', 'head', 'tail', 'less', 'grep', 'find'],
    'write': ['echo', 'printf', 'tee'],
    'process': ['ps', 'top'],
    'network': ['curl', 'wget', 'ping'],
}

BLOCKED_COMMANDS = ['rm -rf', 'dd', 'mkfs', 'chmod 777', 'sudo', 'su']
Blocked commands are refused outright; commands that are neither whitelisted nor blocked trigger a user confirmation dialog. Either way, the agent never executes destructive operations autonomously.
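One way to sketch that three-way check (the function name `check_command` is illustrative, and the tables are repeated so the example runs standalone):

```python
import shlex

SAFE_COMMANDS = {
    'read': ['cat', 'head', 'tail', 'less', 'grep', 'find'],
    'write': ['echo', 'printf', 'tee'],
    'process': ['ps', 'top'],
    'network': ['curl', 'wget', 'ping'],
}

BLOCKED_COMMANDS = ['rm -rf', 'dd', 'mkfs', 'chmod 777', 'sudo', 'su']

def check_command(command: str) -> str:
    """Classify a shell command as 'allow', 'block', or 'confirm'."""
    tokens = shlex.split(command)
    if not tokens:
        return 'block'
    # Blocked entries may be multi-word (e.g. 'rm -rf'), so match both
    # the first token and the command's leading words.
    for blocked in BLOCKED_COMMANDS:
        if tokens[0] == blocked or command == blocked or command.startswith(blocked + ' '):
            return 'block'
    allowed = {cmd for cmds in SAFE_COMMANDS.values() for cmd in cmds}
    if tokens[0] in allowed:
        return 'allow'
    return 'confirm'  # fall through to the user confirmation dialog
```

Matching `'rm -rf '` as a prefix rather than a token catches flag-dependent dangers that a single-token comparison would miss.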
Dependency Sorting
Tasks with interdependencies are sorted using topological ordering:
def topological_sort(tasks):
    """Return tasks reordered so each task comes after its dependencies."""
    task_map = {t['id']: t for t in tasks}
    graph = {t['id']: t.get('depends_on', []) for t in tasks}
    visited = set()
    order = []

    def dfs(node):
        if node in visited:
            return
        visited.add(node)
        for dep in graph.get(node, []):
            dfs(dep)
        order.append(node)  # post-order: dependencies are appended first

    for task in tasks:
        dfs(task['id'])
    return [task_map[tid] for tid in order]
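A quick usage sketch (the function is repeated so the example runs standalone; the task ids are illustrative):

```python
def topological_sort(tasks):
    """Return tasks reordered so each task comes after its dependencies."""
    task_map = {t['id']: t for t in tasks}
    graph = {t['id']: t.get('depends_on', []) for t in tasks}
    visited, order = set(), []

    def dfs(node):
        if node in visited:
            return
        visited.add(node)
        for dep in graph.get(node, []):
            dfs(dep)
        order.append(node)  # post-order: dependencies first

    for task in tasks:
        dfs(task['id'])
    return [task_map[tid] for tid in order]

# Steps declared out of order come back dependency-first.
tasks = [
    {'id': 'summarize', 'depends_on': ['fetch', 'clean']},
    {'id': 'fetch'},
    {'id': 'clean', 'depends_on': ['fetch']},
]
ordered = [t['id'] for t in topological_sort(tasks)]
```

Note this simple depth-first version assumes the plan is acyclic; a cyclic dependency would recurse forever, so a production variant should track an in-progress set and raise on cycles.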
Self-Reflection Loop
After each task, the agent reflects on its success or failure. When it detects a failure, it retries with adjusted parameters or asks the user for guidance.
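The loop can be sketched as follows, assuming hypothetical helpers `execute`, `reflect`, and `ask_user` and an illustrative retry limit (none of these are part of the documented API):

```python
MAX_RETRIES = 2  # illustrative limit, not a documented default

def run_with_reflection(task, execute, reflect, ask_user):
    """Execute a task, reflect on the result, and retry with adjustments."""
    for attempt in range(MAX_RETRIES + 1):
        result = execute(task)
        # reflect() is assumed to return a verdict like
        # {'success': bool, 'adjusted_params': {...}}.
        verdict = reflect(task, result)
        if verdict.get('success'):
            return result
        if verdict.get('adjusted_params') and attempt < MAX_RETRIES:
            task = {**task, 'params': verdict['adjusted_params']}
        else:
            # No usable adjustment, or retries exhausted: escalate.
            return ask_user(task, result)
```

Escalating to the user after a bounded number of retries keeps the agent from looping indefinitely on an unfixable task.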
Greeting Shortcut Path
For efficiency, common interaction patterns bypass the full planning pipeline. Greetings, confirmations, and simple Q&A are routed directly to Fast Mode, cutting latency from ~3 s to ~0.5 s for routine interactions.
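A minimal routing sketch, assuming a keyword-based classifier (the pattern list and function name are illustrative; the real classifier is not documented here):

```python
# Illustrative set of messages that skip the planner entirely.
FAST_PATTERNS = ('hi', 'hello', 'thanks', 'yes', 'no', 'ok')

def route(message: str) -> str:
    """Route trivial messages to 'fast' mode, everything else to 'plan'."""
    normalized = message.strip().lower().rstrip('!.?')
    if normalized in FAST_PATTERNS:
        return 'fast'
    return 'plan'
```

Even a crude lexical gate like this avoids an LLM planning round-trip for the most frequent messages, which is where the ~3 s to ~0.5 s saving comes from.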