Hello to the developer community in Central Asia!

My name is Askar Aituov, and it took me just 3 minutes to create Tengri.bot, an AI Agent built using the Agent Development Kit (ADK). Like many of you, I have been transitioning from traditional software development to this exciting new world of Generative AI Agents.

If you are just starting out, the new vocabulary can be overwhelming. Agents represent a shift from models that simply predict the next token to applications that can plan and act on their own. To help you navigate this transition, I have summarized the most important concepts and terms from the latest whitepapers on Agentic Systems.

The Core Agent Architecture: Three Essential Components

Every AI Agent, from a simple script to a complex system, is built on three main parts, which you can think of as parts of a human body (a minimal code sketch follows the list):

  • Model (the Brain): The Large Language Model (LLM) or Foundation Model (e.g., Gemini). It is the central component that reasons, thinks, and decides the next step.
  • Tools (the Hands): API calls, code functions, or databases (such as a search engine or a calendar app). They allow the agent to retrieve real-time information and take actions in the real world.
  • Orchestration Layer (the Nervous System): The governing process, the code you write (often with a framework like ADK). It manages the entire cycle: planning the steps, deciding when to use a tool, and providing the agent with memory (context).
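
To make this concrete, here is a minimal sketch of the three parts wired together, loosely based on the public ADK quickstart. The model name ("gemini-2.0-flash"), the get_weather stub, and the agent configuration are illustrative assumptions; check the current ADK documentation for the exact API.

```python
# A minimal sketch: Model + Tools + Orchestration with ADK.
# Assumption: the google-adk package is installed and the model name is
# available to your project; treat this as a starting point, not gospel.
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Tool (the Hands): fetch the current weather for a city.

    A real agent would call a weather API here; this is a stub.
    """
    return {"city": city, "forecast": "sunny", "temp_c": 25}


# Model (the Brain) + Orchestration (the Nervous System):
# the Agent object ties the LLM, the instructions, and the tools together.
root_agent = Agent(
    name="tengri_weather_agent",   # illustrative name
    model="gemini-2.0-flash",      # the LLM that reasons and plans
    instruction="Answer weather questions using the get_weather tool.",
    tools=[get_weather],           # tools the model may decide to call
)
```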

The Agent’s Operating Cycle: Think, Act, Observe

Instead of executing a fixed sequence of code, an Agent runs in a continuous loop to solve a problem. This is the Agentic Problem-Solving Process (sketched in code after the list):

  1. Get the Mission: The user gives the agent a high-level goal (e.g., “Find the best flight to Almaty for next week”).
  2. Scan the Scene: The agent gathers all the available information (user input, internal memory, available tools).
  3. Think It Through: The Model plans the strategy (e.g., “First, I need to use the flight_search tool. Then, I need to use the pricing_api.”).
  4. Take Action: The Orchestration Layer executes the first step of the plan by calling a Tool.
  5. Observe and Iterate: The agent sees the result of the action (e.g., the flight search results). This new information becomes part of the context, and the agent goes back to Step 3 to plan the next step. This loop continues until the mission is complete.
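
The loop itself is easy to express in code. The sketch below is framework-agnostic and does not show ADK internals; call_model, pick-a-tool decisions, and the tools dictionary are hypothetical names used only to illustrate the Think / Act / Observe structure.

```python
# Hypothetical Think-Act-Observe loop (illustrative, not ADK internals).

def run_agent(goal: str, tools: dict, call_model, max_steps: int = 10):
    """Drive the agent loop: plan, call a tool, observe, repeat."""
    context = [f"Goal: {goal}"]                      # 1. Get the mission
    for _ in range(max_steps):
        # 2-3. Scan the scene and think it through: the model sees the
        # goal plus everything observed so far and picks the next action,
        # e.g. {"tool": "flight_search", "args": {...}} or {"done": True, ...}.
        decision = call_model(context, list(tools))
        if decision.get("done"):
            return decision["answer"]                # mission complete
        # 4. Take action: the orchestration layer calls the chosen tool.
        result = tools[decision["tool"]](**decision["args"])
        # 5. Observe and iterate: the result becomes new context.
        context.append(f"Observation from {decision['tool']}: {result}")
    return "Stopped: step limit reached."
```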

A Taxonomy of Agents: Scaling Your Ambition

You don’t need to build a super-agent on day one. Agents can be classified by their complexity:

  • Level 1, Connected Problem-Solver: Uses Tools to access real-time data. Example: an agent that searches the live web for the latest stock price.
  • Level 2, Strategic Problem-Solver: Plans complex, multi-step tasks and manages context strategically. Example: an agent that finds the halfway point between two addresses, then searches for coffee shops with a 4-star rating in that specific area.
  • Level 3, Collaborative Multi-Agent System: A “team of specialists” where agents delegate tasks to other agents (who are treated as tools). Example: a Project Manager Agent delegates research to a Market Research Agent and coding to a WebDev Agent (see the sketch after this list).
  • Level 4, Self-Evolving System: Can autonomously identify a missing capability and create a new tool or agent to fill it. Example: the system decides it needs a better social media monitoring tool, and then it builds that agent itself.
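
To give a feel for Level 3, here is a toy sketch of “agents as tools”: a coordinator delegates to specialist agents exactly the way it would call an ordinary function. The class names are hypothetical and framework-agnostic; ADK ships its own multi-agent primitives, so consult its documentation for the real API.

```python
# Hypothetical Level 3 sketch: a coordinator treats specialist agents as tools.

class MarketResearchAgent:
    def run(self, task: str) -> str:
        # In practice this would be its own LLM plus tools; stubbed here.
        return f"Market notes for: {task}"


class WebDevAgent:
    def run(self, task: str) -> str:
        return f"<html><!-- page for: {task} --></html>"


class ProjectManagerAgent:
    """Delegates sub-tasks to specialists, just like calling a tool."""

    def __init__(self):
        self.specialists = {
            "research": MarketResearchAgent(),
            "webdev": WebDevAgent(),
        }

    def run(self, goal: str) -> dict:
        research = self.specialists["research"].run(goal)
        page = self.specialists["webdev"].run(research)
        return {"research": research, "page": page}


print(ProjectManagerAgent().run("landing page for an Almaty coffee startup"))
```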

Essential Developer Practices: Agent Ops

When developing with Agents, you are moving from writing deterministic code (where the output is always the same) to stochastic code (where the output is probabilistic). This requires a new way of working called Agent Ops (Agent Operations):

  • LLM as Judge: Since output == expected no longer works, you use a second, powerful LLM to evaluate the first agent’s response against a rubric (Did it follow the tone? Was it factually correct?). This is how you measure quality (see the sketch after this list).
  • OpenTelemetry Traces: When an agent fails, you cannot set a simple breakpoint. Traces are your best friend. They provide a step-by-step recording of the agent’s entire “thought process” (the prompt, the model’s reasoning, the tool it called, and the tool’s result). This is essential for debugging.
  • Human Feedback: Every bug report or “thumbs down” from a user is a valuable edge case that your automated tests missed. The Agent Ops loop closes by taking this feedback and turning it into a new, permanent test case in your evaluation dataset.
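
As one concrete example of the LLM-as-Judge pattern, here is a hedged sketch. call_judge_model is a hypothetical helper standing in for whatever client you use (the Gemini API, Vertex AI, or anything else), and the rubric and JSON format are illustrative choices, not a prescribed standard.

```python
# Hypothetical LLM-as-Judge sketch: score one agent response against a rubric.
import json

RUBRIC = """Score the RESPONSE from 1-5 on each criterion and return JSON:
{"factual": int, "tone": int, "followed_instructions": int, "notes": str}"""


def judge(question: str, response: str, call_judge_model) -> dict:
    """Ask a second, stronger model to grade the agent's answer."""
    prompt = f"{RUBRIC}\n\nQUESTION:\n{question}\n\nRESPONSE:\n{response}"
    raw = call_judge_model(prompt)   # hypothetical model call
    return json.loads(raw)           # expect the JSON scores back

# A failing case from user feedback (a "thumbs down") can then be frozen
# into the evaluation dataset and re-judged on every new agent version.
```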

I encourage you to explore these concepts further as you build your own agents. The journey from traditional code to autonomous problem-solving is challenging, but with frameworks like ADK, we can build the next generation of intelligent applications right here in Central Asia.

Here is a 3-minute video of me creating Tengri.bot with ADK.

Happy building!
