Part 1: Reflection Questions
Answer the following questions based on the concepts covered in the lecture:
Your Answer Here: The easiest way to explain the difference between traditional AI and AI agents is by comparing them to two types of workers: a strict, rule-following assistant versus a proactive, resourceful employee.
Your Answer Here: Education: an AI agent that aids in preparing a lesson plan, generates questions from that plan, and produces learning notes.
Your Answer Here: Agentic AI architecture. This architecture is highly effective for customer-service virtual assistants because it allows for autonomous, multi-step problem-solving that adapts to the unpredictable nature of conversations.
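The multi-step loop described above can be sketched in a few lines. This is a minimal illustration, not a real framework: the tool names (`lookup_order`, `draft_reply`), the keyword routing, and the hard-coded order ID are all assumptions made for the example.

```python
# Minimal sketch of an agentic observe/reason/act loop for a
# customer-service assistant. Tool names and routing are illustrative.

def lookup_order(order_id):
    # Stand-in for a real order-database query.
    return {"id": order_id, "status": "shipped"}

def draft_reply(context):
    # Stand-in for an LLM call that writes the customer-facing message.
    return f"Your order {context['id']} is {context['status']}."

def run_agent(user_message):
    """Reason about the message, call tools, and record each step."""
    steps = []
    # Reason: decide which tool the message calls for (toy keyword routing).
    if "order" in user_message.lower():
        order = lookup_order("A123")           # Act: call a tool.
        steps.append(("lookup_order", order))  # Observe: record the result.
        reply = draft_reply(order)             # Act again with new context.
        steps.append(("draft_reply", reply))
    else:
        reply = "Could you share more details?"
    return reply, steps

reply, steps = run_agent("Where is my order?")
```

The point of the sketch is that the agent chooses and chains actions at run time, rather than following one fixed script, which is what makes it suited to unpredictable conversations.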
Your Answer Here: I will choose the Action component and explain how it manifests in a self-driving car.
The Action component of an AI agent is responsible for executing the physical or digital tasks that the agent's reasoning module has decided upon. In a self-driving car, this component translates the AI's internal decisions, such as "accelerate," "brake," or "steer," into real-world physical movements.
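A minimal sketch of how such an Action component might map a high-level decision to low-level actuator commands. The decision names and command values are illustrative assumptions, not real vehicle-control software.

```python
# Illustrative Action component: translate the reasoning module's
# symbolic decision into throttle/brake/steering actuator commands.

def act(decision):
    """Map a high-level decision to normalized actuator commands."""
    commands = {
        "accelerate": {"throttle": 0.4, "brake": 0.0, "steering": 0.0},
        "brake":      {"throttle": 0.0, "brake": 0.8, "steering": 0.0},
        "steer_left": {"throttle": 0.2, "brake": 0.0, "steering": -0.3},
    }
    return commands[decision]

cmd = act("brake")  # the car's brake actuator would receive 0.8
```

In a real system this layer would also enforce safety limits (e.g. maximum steering rate) before commands reach the hardware.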
Part 2: Case Evaluation - When to Use Agentic Systems
For each of the following scenarios, evaluate whether an agentic AI system would be appropriate.
Explain your reasoning by discussing the benefits and drawbacks of using an agent versus a
non-agentic solution.
Your Answer Here: Yes, an agentic AI system would be appropriate here.
Your Answer Here: Use a non-agentic solution.
Your Answer Here: An agentic solution is the right fit.
Research Synthesis: A system that searches across multiple sources, extracts relevant
information, evaluates credibility, and compiles findings into a cohesive report
Your Answer Here: An agentic system is well suited here: searching, extracting, evaluating credibility, and synthesizing require autonomous multi-step iteration, though a human should review the final report.
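The research-synthesis workflow described in the scenario can be sketched as a small pipeline. The sources, texts, and credibility scores below are made up purely for illustration.

```python
# Hypothetical research-synthesis pipeline: gather candidate sources,
# filter by a credibility score, and compile the survivors into a report.

SOURCES = {
    "journal.example": ("Agentic AI enables multi-step workflows.", 0.9),
    "blog.example":    ("Agents are hyped but often useful.", 0.5),
    "forum.example":   ("agents r cool", 0.2),
}

def compile_report(min_credibility=0.4):
    """Keep findings from sources at or above the credibility threshold."""
    findings = []
    for source, (text, credibility) in SOURCES.items():
        if credibility >= min_credibility:        # evaluate credibility
            findings.append(f"[{source}] {text}")  # extract relevant info
    return "\n".join(findings)                     # compile into one report

report = compile_report()
```

A real system would replace the static dictionary with live search and the fixed scores with a learned or heuristic credibility model; the structure of the loop stays the same.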
Smart Home Coordinator: A system that manages multiple connected devices, anticipates
user needs based on patterns, and proactively adjusts settings accordingly
Your Answer Here: An agentic system is best: it must coordinate multiple devices, learn usage patterns, and act proactively without constant user prompts.
Your Answer Here: Hybrid approach—non-agentic for sorting, agentic for drafting responses (with human-in-the-loop)
Your Answer Here: Limited agentic use is possible, but human oversight is critical.
Your Answer Here: An agentic system is powerful here, but must include safeguards, explainability, and regulatory compliance.
Part 4: Future of Work Analysis
Based on what you have understood about agentic AI in Week 1, do you think the following job titles
will be eliminated in the next 10 years due to agentic AI? Explain your reasoning for each:
● Software Engineer
● Data Analyst
● Data Scientist
● ML Engineer
● AI Engineer
● Data Engineer
● ML-Ops Engineer
Your Answer Here:
Software Engineer — Likelihood of elimination: Low
Why: Code generation and automation will handle boilerplate, tests, simple bug fixes, and scaffolding. But designing complex architectures, making tradeoffs (scalability, latency, security), integrating with legacy systems, and cross-team coordination require human judgment.
Tasks likely automated: repetitive coding, unit test scaffolding, basic refactors, simple PR reviews.
Evolving focus: system design, API/contract design, reliability engineering, supervising/validating agentic code generators, security, and developer experience.
Suggested upskill: architecture, testing/observability, security, prompt/agent orchestration, communication.
Data Analyst — Likelihood of elimination: Medium
Why: Many analyst tasks (ETL for small datasets, dashboard generation, narrative summaries) are already heavily automatable with agentic pipelines and natural-language reporting. However, turning data into actionable strategic insight, validating data quality, and interpreting results for stakeholders remain valuable.
Tasks likely automated: routine reporting, exploratory queries, first-pass visualizations, auto-generated write-ups.
Evolving focus: data storytelling, hypothesis design, validating automated outputs, domain-context interpretation, and setting up/maintaining trustworthy analytics pipelines.
Suggested upskill: product/business domain knowledge, analytics instrumentation, visualization best practices, causal thinking.
Data Scientist — Likelihood of elimination: Low–Medium
Why: AutoML and agents will take over common modeling pipelines, hyperparameter tuning, and baseline models. But problem formulation, causal inference, custom modeling, experimental design, and interpreting models in context are hard to fully automate. High-stakes/novel problems especially need human expertise.
Tasks likely automated: baseline model building, feature selection heuristics, standard evaluation pipelines.
Evolving focus: causal analysis, experimental design, fairness/robustness, bespoke models, explaining models to stakeholders, and collaborating with ML engineers/MLOps.
Suggested upskill: causal inference, interpretability, domain specialization, research literacy.
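The "baseline model building" that agents are expected to automate can be as simple as the toy example below: a majority-class baseline with a standard accuracy check. This is a minimal sketch of the idea, not a real AutoML pipeline, and the tiny label set is invented for illustration.

```python
# Toy illustration of an automatable baseline: predict the most
# frequent training label and evaluate with plain accuracy.
from collections import Counter

def fit_majority_baseline(labels):
    """Return the most frequent label as the constant prediction."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(prediction, labels):
    """Fraction of labels matched by the constant prediction."""
    return sum(prediction == y for y in labels) / len(labels)

train = ["spam", "ham", "spam", "spam"]
baseline = fit_majority_baseline(train)  # "spam"
score = accuracy(baseline, train)        # 3 of 4 correct
```

The human data scientist's job, as argued above, is everything this sketch omits: deciding whether accuracy is even the right metric, formulating the problem, and judging whether any model should be deployed at all.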