Part 1: Reflection Questions
Traditional AI systems take more of a calculator approach: they return a specific output for a predefined prompt. Agentic AI, by contrast, consists of intelligent agents that interact with various APIs to produce a tailored solution, one that can be revisited and refined repeatedly according to the user's preferences. Agents can Observe, Orient, Decide, and Act as required at every stage of a problem. Unlike the traditional AI framework, AI agents can fetch data, adapt to new information, and carry out tasks on the user's behalf.
Think about your organization or industry. What specific tasks could benefit from AI agents?
What would an implementation look like for one of these tasks?
In the clinical research and healthcare industry, agentic AI could play a transformative role by performing repetitive yet crucial tasks with precision and accuracy, laying the foundation for informed decision making. One example is patient recruitment during clinical trials, which requires matching patients against very specific eligibility criteria. An AI agent could continuously scan electronic health records, clinical notes, and genetic data to identify eligible patients in real time, then automatically reach out to physicians and update their trial dashboards.
Implementation would involve connecting the agent to hospital databases through secure APIs, integrating it with regulatory compliance checks (HIPAA, GDPR), and equipping it with natural language processing to interpret unstructured data in clinical reports. The workflow could be designed so the agent acts as a first-level screener, presenting a shortlist of candidates to clinicians, thereby saving time and ensuring faster trial enrollment.
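The first-level screening step described above can be sketched in a few lines. This is a minimal illustration, not a real integration: the `Patient` fields, the eligibility criteria, and the diagnosis label are all invented for the example, and a production system would pull this data from EHR systems via secure, compliance-checked APIs.

```python
from dataclasses import dataclass

# Hypothetical patient record; a real system would populate this from
# electronic health records retrieved over secure APIs.
@dataclass
class Patient:
    patient_id: str
    age: int
    diagnosis: str
    consented_to_contact: bool

def meets_criteria(p: Patient, min_age: int, max_age: int, diagnosis: str) -> bool:
    """First-level screen: True if the patient matches the trial's criteria."""
    return (min_age <= p.age <= max_age
            and p.diagnosis == diagnosis
            and p.consented_to_contact)

def screen_candidates(patients, min_age=18, max_age=65, diagnosis="type-2-diabetes"):
    """The agent's screening step: build a shortlist for clinician review."""
    return [p.patient_id for p in patients
            if meets_criteria(p, min_age, max_age, diagnosis)]

patients = [
    Patient("P001", 54, "type-2-diabetes", True),
    Patient("P002", 71, "type-2-diabetes", True),   # excluded: over max age
    Patient("P003", 43, "hypertension", True),      # excluded: wrong diagnosis
    Patient("P004", 39, "type-2-diabetes", False),  # excluded: no consent
]
print(screen_candidates(patients))  # → ['P001']
```

The key design point matches the text: the agent only produces a shortlist, and the final enrollment decision stays with clinicians.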
The virtual assistant example we discussed in the lecture employs an orchestrator–worker multi-agent architecture. In this setup, the orchestrator agent acts like a manager that understands the user’s request and decides which worker agent is best suited to handle the task—whether it’s scheduling a meeting, retrieving information, or sending a reminder. Each worker agent is specialized in one function, so the workload is divided and handled more efficiently. This approach is effective because it avoids overloading a single AI model with every responsibility, improves accuracy by letting each agent focus on its expertise, and makes the system easier to scale and update. In the context of a virtual assistant, the orchestrator–worker pattern ensures that the assistant can smoothly combine different services and provide a seamless, human-like experience for the user.
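The routing logic at the heart of the orchestrator–worker pattern can be sketched as follows. The worker names and the keyword-based routing rule are simplifying assumptions for illustration; a real orchestrator would typically use an LLM to classify the user's intent before dispatching to a specialized worker.

```python
# Three specialized workers, each handling exactly one function.
def schedule_worker(request: str) -> str:
    return f"Meeting scheduled: {request}"

def lookup_worker(request: str) -> str:
    return f"Information found for: {request}"

def reminder_worker(request: str) -> str:
    return f"Reminder set: {request}"

# Registry mapping a trigger word to its worker (illustrative only).
WORKERS = {
    "schedule": schedule_worker,
    "find": lookup_worker,
    "remind": reminder_worker,
}

def orchestrator(request: str) -> str:
    """Manager role: decide which worker is best suited and delegate."""
    for trigger, worker in WORKERS.items():
        if trigger in request.lower():
            return worker(request)
    return "No suitable worker found."

print(orchestrator("Schedule a sync with the team"))
```

Because each worker owns one narrow capability, a new service (say, a travel-booking worker) can be added by registering one more entry, without touching the others, which is exactly the scalability benefit described above.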
Perception in AI agents refers to the ability to sense and interpret data from the environment. A real-world example is seen in autonomous vehicles. These cars use cameras, LIDAR, radar, and other sensors to “perceive” their surroundings—identifying pedestrians, traffic lights, road signs, and obstacles. This perception allows the AI agent inside the car to build an accurate picture of the external environment in real time. Without perception, the car would not be able to understand context or make safe decisions. The strength of this component lies in converting raw input (like images or signals) into meaningful insights that the reasoning module can use to decide the next action, such as braking, changing lanes, or accelerating.
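The perception-to-action flow described above can be shown with a toy example. Here the "sensor" is just a list of distance readings (in metres) to nearby obstacles, which is a drastic simplification: real autonomous vehicles fuse camera, LIDAR, and radar streams, and the braking threshold below is an invented number.

```python
def perceive(distance_readings):
    """Perception: turn raw sensor input into a meaningful summary."""
    nearest = min(distance_readings)
    return {"nearest_obstacle_m": nearest}

def decide(percept, braking_threshold_m=10.0):
    """Reasoning: choose the next action from the perceived state."""
    if percept["nearest_obstacle_m"] < braking_threshold_m:
        return "brake"
    return "maintain_speed"

readings = [42.0, 8.5, 30.1]  # e.g. toy radar returns from three directions
action = decide(perceive(readings))
print(action)  # → brake
```

The structure mirrors the point in the paragraph: perception converts raw signals into a compact, meaningful state, and only that state is handed to the reasoning module.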
Part 2: Case Evaluation - When to Use Agentic Systems
For each of the following scenarios, evaluate whether an agentic AI system would be appropriate. Explain your reasoning by discussing the benefits and drawbacks of using an agent versus a non-agentic solution.
An agentic AI system is not necessary for simple data lookup tasks, as the operation involves directly retrieving structured information from a database. A non-agentic solution such as a traditional query engine or search function would be faster, cheaper, and less complex. Using an agent in this context would add unnecessary overhead without providing significant benefits.
For straightforward mathematical operations, an agentic AI system would be excessive. A simple non-agentic solution, like a calculator application or a rule-based function, performs these operations with high accuracy and speed. The additional reasoning or autonomy of an agent provides no real value here and would only complicate the process.
This is a strong case for using an agentic AI system. Travel planning requires coordinating multiple services like flights, accommodations, and activities while considering budget, user preferences, and dynamic availability. An agent-based approach can reason across multiple constraints, interact with various services, and adjust proactively as conditions change, something a static system would struggle to handle.
An agentic system is highly suitable for research synthesis, as the task involves searching diverse sources, evaluating credibility, and integrating findings into a cohesive output. A non-agentic system could retrieve documents but would not have the capability to reason, filter, and synthesize effectively. Agents enable automation of critical thinking steps, although challenges include ensuring reliability and avoiding bias.
A smart home environment benefits greatly from agentic systems. The system must not only react to commands but also anticipate user needs, balance device coordination, and adapt to patterns. An agent can manage these dynamic tasks through reasoning and learning, whereas a non-agentic system would remain purely reactive and require constant manual input.
Email management is another area where agentic systems are advantageous. While simple rule-based filters can handle sorting, drafting contextually appropriate responses and prioritizing important messages require reasoning and learning. An agentic solution can reduce manual effort, though drawbacks include the risk of misclassifying important messages or generating inappropriate responses.
This is a critical case where an agentic system can be valuable but must be applied cautiously. Agents can interpret described symptoms, reason through possible causes, and provide tailored health information. However, due to the sensitivity of medical decisions, such systems must be heavily regulated, transparent, and ideally used as supportive tools rather than replacements for human professionals.
An agentic AI system is appropriate here, as investment advising requires analyzing real-time market conditions, aligning with user goals, and adapting to changing risks. Agents can personalize recommendations and react dynamically to market trends. The drawbacks lie in accountability and ethical concerns, as users may over-rely on automated advice without fully understanding risks.
Part 3: Repository Analysis
Agentic AI systems are defined by their ability to perceive, reason, act, and learn from feedback autonomously in pursuit of goals, often through tools, memory, or planning.
The rt-repo-assessment repository, however, is a structured assessment framework that:
- Uses LangChain and rule-based logic to evaluate code quality, documentation, repository structure, dependencies, and licensing.
- Relies on predefined scoring via YAML configurations and LLM-based evaluations.
- Automatically generates reports and scores based on static inputs.
No, this solution does not qualify as an agentic AI system. It is primarily a tool for assessment, not autonomous reasoning or decision-making. The code merely:
1. Acts as a classification or grading tool, not a planner.
2. Executes analysis against a set of fixed criteria (Essential, Professional, Elite).
3. Uses LLMs to interpret content, but only in a structured, evaluative context.
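Fixed-criteria grading of this kind can be sketched to make the contrast with agency concrete. The criteria, weights, and tier thresholds below are invented for illustration; rt-repo-assessment defines its own rubric in its YAML configurations. The point is that nothing here plans, adapts, or pursues a goal: given the same inputs, the output is always the same.

```python
# Illustrative rubric: criterion -> weight (invented values, not the
# repository's actual YAML configuration).
RUBRIC = {
    "has_readme": 1,
    "has_license": 1,
    "has_tests": 2,
    "has_ci": 2,
}

# Score thresholds mapped to tiers, checked highest first.
TIERS = [(5, "Elite"), (3, "Professional"), (0, "Essential")]

def assess(repo_facts: dict) -> tuple:
    """Deterministic grading: sum weights of satisfied criteria, map to a tier."""
    score = sum(w for crit, w in RUBRIC.items() if repo_facts.get(crit))
    for threshold, tier in TIERS:
        if score >= threshold:
            return score, tier
    return score, "Essential"

print(assess({"has_readme": True, "has_license": True, "has_tests": True}))
# → (4, 'Professional')
```

An agent, by contrast, would decide for itself what to inspect next and revise its approach based on what it found; this function only applies a lookup table.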
The rt-repo-assessment project is not agentic, as it lacks the autonomous, goal-driven behavior, iterative reasoning, and adaptive capabilities that define agentic AI systems. Instead, it’s a well-designed, rule-based assessment framework enhanced with LLM support—a powerful tool in its scope, but not an AI agent.
Part 4: Future of Work Analysis
While agentic AI is likely to transform the future of work significantly, it is unlikely to completely eliminate roles such as Software Engineer, Data Analyst, Data Scientist, ML Engineer, AI Engineer, Data Engineer, or ML-Ops Engineer in the next ten years. Instead, these roles will evolve, with a shift in focus from manual execution of tasks to higher-order problem-solving, oversight, and integration of AI-driven workflows.

For example, Software Engineers will still be needed to design complex systems, ensure code quality, and integrate AI agents into enterprise solutions, even if routine coding tasks are increasingly automated. Data Analysts may see a reduction in manual reporting but will remain vital in interpreting agent outputs, contextualizing insights, and aligning findings with business goals. Similarly, Data Scientists and ML Engineers will continue to play a critical role in designing experiments, validating models, and ensuring fairness, explainability, and trustworthiness in AI systems. AI Engineers and Data Engineers will remain essential for building the infrastructure, pipelines, and governance frameworks that allow agentic systems to operate reliably. Finally, ML-Ops Engineers will become even more important as AI moves toward autonomy, since monitoring, deploying, and updating agentic systems will require rigorous oversight.

Thus, while agentic AI may reduce the need for repetitive, low-value tasks, it will not eliminate these jobs; rather, it will augment them, demanding a shift in skills toward creativity, strategy, and ethical oversight.
Part 5: Choosing the Right Architecture for an Automated Peer Review System
I’d pick the Agentic (orchestrator–worker) architecture for this task because it cleanly decomposes work into specialized capabilities (planning, summarizing, novelty-checking, and verification) and lets the system call external tools (GitHub, web) and adapt its plan when new information arrives—exactly the sort of flexible, tool-using behavior Anthropic recommends for effective agents. This design is stronger when the job needs reasoning across messy inputs, dynamic verification, or exploratory checks (novelty/factuality) that a linear workflow can’t easily handle. The tradeoffs are higher complexity, compute/cost, and potential non-determinism, so you must add strict guardrails (constrained prompts, deterministic verification steps, audit logs) to preserve reliability and transparency. A good compromise is a hybrid: keep deterministic workflow stages for extraction and template generation (low cost, transparent) and invoke the orchestrator–worker agents only for planning, novelty detection, and external verification—this balances reliability, cost, and flexibility.
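The hybrid pipeline proposed above can be sketched as a sequence of deterministic stages with a single agentic stage in the middle. Everything here is illustrative: the function names are invented, and the agentic step is a stub standing in for an orchestrator that would dispatch worker agents with tool access (GitHub search, web search).

```python
def extract_metadata(paper_text: str) -> dict:
    """Deterministic stage: cheap, transparent, and fully auditable.
    Assumes (for illustration) the first line of the text is the title."""
    title = paper_text.splitlines()[0].strip()
    return {"title": title, "length": len(paper_text)}

def agentic_novelty_check(metadata: dict) -> str:
    """Agentic stage (stubbed): in a real system an orchestrator would
    plan searches, call worker agents, and verify claims against sources."""
    return f"novelty check queued for '{metadata['title']}'"

def render_review(metadata: dict, novelty_note: str) -> str:
    """Deterministic stage: fill a fixed template from structured inputs,
    keeping the final output format predictable and easy to audit."""
    return (f"Review of {metadata['title']}\n"
            f"Length: {metadata['length']} chars\n"
            f"Novelty: {novelty_note}")

paper = "Sparse Attention at Scale\nAbstract: ..."
meta = extract_metadata(paper)
print(render_review(meta, agentic_novelty_check(meta)))
```

Confining the agent to one stage, sandwiched between deterministic extraction and templating, is what delivers the cost, reliability, and transparency balance the answer argues for: the expensive, non-deterministic reasoning happens only where a linear workflow genuinely cannot cope.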