AWS Certified AI Practitioner
The AWS Certified AI Practitioner certification demonstrates your proficiency in essential artificial intelligence (AI), machine learning (ML), and generative AI concepts and applications. The AWS Certified AI Practitioner (AIF-C01) exam is designed for individuals who can effectively demonstrate a comprehensive understanding of AI/ML, generative AI technologies, and related AWS services and tools, regardless of their specific job role. Further, the exam assesses a candidate’s ability to:
- Grasp the fundamental concepts, methods, and strategies of AI, ML, and generative AI, particularly in the context of AWS.
- Appropriately utilize AI/ML and generative AI technologies to formulate relevant questions within their organization.
- Identify the suitable types of AI/ML technologies to address specific use cases.
- Utilize AI, ML, and generative AI technologies responsibly.
Target Audience
The ideal candidate should have up to six months of experience with AI/ML technologies on AWS. While they may use AI/ML solutions on AWS, they are not required to have built these solutions. Roles include:
- Business analyst
- IT support
- Marketing professional
- Product or project manager
- Line-of-business or IT manager
- Sales professional
Recommended AWS Knowledge
The candidate should have the following AWS knowledge:
- Understanding of core AWS services (such as Amazon EC2, Amazon S3, AWS Lambda, and Amazon SageMaker) and their respective use cases.
- Awareness of the AWS shared responsibility model for security and compliance within the AWS Cloud.
- Familiarity with AWS Identity and Access Management (IAM) for securing and managing access to AWS resources.
- Knowledge of the AWS global infrastructure, including concepts related to AWS Regions, Availability Zones, and edge locations.
- Understanding of AWS service pricing models.
Exam Details
The AWS Certified AI Practitioner exam, categorized as foundational, lasts 120 minutes and consists of 85 questions. Candidates can take the exam at a Pearson VUE testing center or in an online proctored format, in English or Japanese. The minimum passing score is 700 on a scaled score range of 100–1,000.
Question Types
The exam includes one or more of the following types of questions:
- Multiple Choice: Contains one correct answer and three incorrect options (distractors).
- Multiple Response: Features two or more correct answers among five or more options. To earn credit, you must select all correct responses.
- Ordering: Provides a list of 3–5 responses that need to be arranged to complete a specific task. You must select the correct responses and arrange them in the proper order to receive credit.
- Matching: Involves a list of responses that must be matched with 3–7 prompts. You must correctly pair all options to earn credit.
- Case Study: Consists of a scenario followed by two or more questions related to it. The scenario remains the same for each question within the case study, and each question will be graded separately, allowing you to receive credit for each correctly answered question.
Course Outline
This exam guide outlines the weightings, content domains, and task statements associated with the exam. It provides additional context for each task statement to assist you in your preparation. The topics are:
Domain 1: Fundamentals of AI and ML
Task Statement 1.1: Explain basic AI concepts and terminologies.
Objectives:
- Define basic AI terms (for example, AI, ML, deep learning, neural networks, computer vision, natural language processing [NLP], model, algorithm, training and inferencing, bias, fairness, fit, large language model [LLM]).
- Describe the similarities and differences between AI, ML, and deep learning.
- Describe various types of inferencing (for example, batch, real-time).
- Describe the different types of data in AI models (for example, labeled and unlabeled, tabular, time-series, image, text, structured and unstructured).
- Describe supervised learning, unsupervised learning, and reinforcement learning.
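To make the last objective concrete, here is a toy, pure-Python contrast between supervised and unsupervised learning. The one-dimensional data, labels, and distance threshold are made up for illustration; real workloads would use a library such as scikit-learn or a SageMaker built-in algorithm.

```python
# Supervised learning: labeled examples -> predict a label for new input.
# Here, a minimal nearest-neighbor lookup over (value, label) pairs.
labeled = [(1.0, "small"), (1.2, "small"), (8.9, "large"), (9.3, "large")]

def predict(x):
    """Return the label of the closest labeled example."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised learning: unlabeled points -> discover structure on its own.
# Here, points are grouped whenever the gap between neighbors exceeds a threshold.
def cluster(points, gap=3.0):
    pts = sorted(points)
    groups, current = [], [pts[0]]
    for p in pts[1:]:
        if p - current[-1] > gap:
            groups.append(current)
            current = []
        current.append(p)
    groups.append(current)
    return groups

print(predict(1.1))                    # nearest labeled example -> "small"
print(cluster([1.0, 9.3, 1.2, 8.9]))   # two groups emerge without any labels
```

The supervised path needs labels up front; the unsupervised path finds the two groups from the data alone, which mirrors the distinction the objective asks you to describe.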
Task Statement 1.2: Identify practical use cases for AI.
Objectives:
- Recognize applications where AI/ML can provide value (for example, assist human decision making, solution scalability, automation).
- Determine when AI/ML solutions are not appropriate (for example, cost-benefit analyses, situations when a specific outcome is needed instead of a prediction).
- Select the appropriate ML techniques for specific use cases (for example, regression, classification, clustering).
- Identify examples of real-world AI applications (for example, computer vision, NLP, speech recognition, recommendation systems, fraud detection, forecasting).
- Explain the capabilities of AWS managed AI/ML services (for example, SageMaker, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Lex, Amazon Polly).
Task Statement 1.3: Describe the ML development lifecycle.
Objectives:
- Describe components of an ML pipeline (for example, data collection, exploratory data analysis [EDA], data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring).
- Understand sources of ML models (for example, open source pre-trained models, training custom models).
- Describe methods to use a model in production (for example, managed API service, self-hosted API).
- Identify relevant AWS services and features for each stage of an ML pipeline (for example, SageMaker, Amazon SageMaker Data Wrangler, Amazon SageMaker Feature Store, Amazon SageMaker Model Monitor).
- Understand fundamental concepts of ML operations (MLOps) (for example, experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training).
- Understand model performance metrics (for example, accuracy, Area Under the ROC Curve [AUC], F1 score) and business metrics (for example, cost per user, development costs, customer feedback, return on investment [ROI]) to evaluate ML models.
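As a concrete illustration of the model performance metrics named in the last objective, here is a minimal sketch computing accuracy, precision, recall, and F1 from toy binary-classification results. The labels are made up; the formulas are the standard definitions, not anything AWS-specific.

```python
# Toy binary-classification results: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Accuracy: fraction of predictions that match the true label.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Precision, recall, and F1 from true-positive / false-positive / false-negative counts.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, f1)
```

AUC works differently: it is computed from the model's ranked scores rather than hard predictions, which is why the exam pairs it with probability-outputting classifiers.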
Domain 2: Fundamentals of Generative AI
Task Statement 2.1: Explain the basic concepts of generative AI.
Objectives:
- Understand foundational generative AI concepts (for example, tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models, multi-modal models, diffusion models).
- Identify potential use cases for generative AI models (for example, image, video, and audio generation; summarization; chatbots; translation; code generation; customer service agents; search; recommendation engines).
- Describe the foundation model lifecycle (for example, data selection, model selection, pre-training, fine-tuning, evaluation, deployment, feedback).
Task Statement 2.2: Understand the capabilities and limitations of generative AI for solving business problems.
Objectives:
- Describe the advantages of generative AI (for example, adaptability, responsiveness, simplicity).
- Identify disadvantages of generative AI solutions (for example, hallucinations, interpretability, inaccuracy, nondeterminism).
- Understand various factors to select appropriate generative AI models (for example, model types, performance requirements, capabilities, constraints, compliance).
- Determine business value and metrics for generative AI applications (for example, cross-domain performance, efficiency, conversion rate, average revenue per user, accuracy, customer lifetime value).
Task Statement 2.3: Describe AWS infrastructure and technologies for building generative AI applications.
Objectives:
- Identify AWS services and features to develop generative AI applications (for example, Amazon SageMaker JumpStart; Amazon Bedrock; PartyRock, an Amazon Bedrock Playground; Amazon Q).
- Describe the advantages of using AWS generative AI services to build applications (for example, accessibility, lower barrier to entry, efficiency, cost-effectiveness, speed to market, ability to meet business objectives).
- Understand the benefits of AWS infrastructure for generative AI applications (for example, security, compliance, responsibility, safety).
- Understand cost tradeoffs of AWS generative AI services (for example, responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models).
Domain 3: Applications of Foundation Models
Task Statement 3.1: Describe design considerations for applications that use foundation models.
Objectives:
- Identify selection criteria to choose pre-trained models (for example, cost, modality, latency, multi-lingual, model size, model complexity, customization, input/output length).
- Understand the effect of inference parameters on model responses (for example, temperature, input/output length).
- Define Retrieval Augmented Generation (RAG) and describe its business applications (for example, Amazon Bedrock, knowledge base).
- Identify AWS services that help store embeddings within vector databases (for example, Amazon OpenSearch Service, Amazon Aurora, Amazon Neptune, Amazon DocumentDB [with MongoDB compatibility], Amazon RDS for PostgreSQL).
- Explain the cost tradeoffs of various approaches to foundation model customization (for example, pre-training, fine-tuning, in-context learning, RAG).
- Understand the role of agents in multi-step tasks (for example, Agents for Amazon Bedrock).
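The RAG flow in the objectives above can be sketched end to end in a few lines. This is a toy, assumption-laden version: the document chunks and their embedding vectors are made up, and the retrieval is a linear cosine-similarity scan. In a real system the embeddings would come from an embedding model and live in a vector store such as Amazon OpenSearch Service, with Amazon Bedrock knowledge bases handling retrieval.

```python
import math

# Toy "vector database": document chunks mapped to hypothetical embeddings.
docs = {
    "Refunds are processed within 5 business days.": [0.9, 0.1, 0.0],
    "Our office is open Monday through Friday.":     [0.1, 0.8, 0.2],
    "Shipping is free on orders over $50.":          [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_embedding), reverse=True)
    return ranked[:k]

# A query like "How long do refunds take?" would be embedded first; here we
# use a made-up vector that sits near the refunds chunk.
context = retrieve([0.85, 0.15, 0.05])
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: How long do refunds take?"
print(prompt)
```

The key idea the exam tests: the retrieved chunk is injected into the prompt, so the model answers from your data rather than only from its training, without any fine-tuning.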
Task Statement 3.2: Choose effective prompt engineering techniques.
Objectives:
- Describe the concepts and constructs of prompt engineering (for example, context, instruction, negative prompts, model latent space).
- Understand techniques for prompt engineering (for example, chain-of-thought, zero-shot, single-shot, few-shot, prompt templates).
- Understand the benefits and best practices for prompt engineering (for example, response quality improvement, experimentation, guardrails, discovery, specificity and concision, using multiple comments).
- Define potential risks and limitations of prompt engineering (for example, exposure, poisoning, hijacking, jailbreaking).
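Two of the techniques above, few-shot prompting and prompt templates, can be shown together in a short sketch. The classification task and example reviews are invented for illustration; the pattern, supplying labeled examples inside the prompt so the model infers the expected format, is the general one.

```python
# A minimal few-shot prompt template (task and examples are made up).
# Zero-shot would omit the examples; single-shot would include exactly one.
FEW_SHOT_EXAMPLES = [
    ("The delivery was fast and the product works great.", "positive"),
    ("The item arrived broken and support never replied.", "negative"),
]

def build_prompt(text):
    """Assemble an instruction, the few-shot examples, and the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {text}")
    lines.append("Sentiment:")          # the model completes from here
    return "\n".join(lines)

print(build_prompt("Setup took five minutes and everything just worked."))
```

Note how the template ends mid-pattern: the trailing "Sentiment:" nudges the model to answer in the same one-word format the examples established.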
Task Statement 3.3: Describe the training and fine-tuning process for foundation models.
Objectives:
- Describe the key elements of training a foundation model (for example, pre-training, fine-tuning, continuous pre-training).
- Define methods for fine-tuning a foundation model (for example, instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training).
- Describe how to prepare data to fine-tune a foundation model (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF]).
Task Statement 3.4: Describe methods to evaluate foundation model performance.
Objectives:
- Understand approaches to evaluate foundation model performance (for example, human evaluation, benchmark datasets).
- Identify relevant metrics to assess foundation model performance (for example, Recall-Oriented Understudy for Gisting Evaluation [ROUGE], Bilingual Evaluation Understudy [BLEU], BERTScore).
- Determine whether a foundation model effectively meets business objectives (for example, productivity, user engagement, task engineering).
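To ground the metrics named above, here is a hand-rolled ROUGE-1 recall score. This is a deliberately simplified sketch (it ignores the n-gram clipping and variants a real implementation handles); production evaluation would use an established library or a benchmark harness.

```python
# ROUGE-1 recall: overlapping unigrams / total unigrams in the reference.
# Used to compare generated text (e.g., a summary) against a reference text.
def rouge1_recall(reference, candidate):
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    overlap = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return overlap / len(ref_tokens)

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(rouge1_recall(reference, candidate))  # 5 of 6 reference words appear
```

BLEU flips the direction (precision over candidate n-grams, with a brevity penalty), and BERTScore compares embeddings rather than surface tokens, which is why it tolerates paraphrases these two miss.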
Domain 4: Guidelines for Responsible AI
Task Statement 4.1: Explain the development of AI systems that are responsible.
Objectives:
- Identify features of responsible AI (for example, bias, fairness, inclusivity, robustness, safety, veracity).
- Understand how to use tools to identify features of responsible AI (for example, Guardrails for Amazon Bedrock).
- Understand responsible practices to select a model (for example, environmental considerations, sustainability).
- Identify legal risks of working with generative AI (for example, intellectual property infringement claims, biased model outputs, loss of customer trust, end user risk, hallucinations).
- Identify characteristics of datasets (for example, inclusivity, diversity, curated data sources, balanced datasets).
- Understand effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting).
- Describe tools to detect and monitor bias, trustworthiness, and truthfulness (for example, analyzing label quality, human audits, subgroup analysis, Amazon SageMaker Clarify, SageMaker Model Monitor, Amazon Augmented AI [Amazon A2I]).
Task Statement 4.2: Recognize the importance of transparent and explainable models.
Objectives:
- Understand the differences between models that are transparent and explainable and models that are not transparent and explainable.
- Understand the tools to identify transparent and explainable models (for example, Amazon SageMaker Model Cards, open source models, data, licensing).
- Identify tradeoffs between model safety and transparency (for example, measure interpretability and performance).
- Understand principles of human-centered design for explainable AI.
Domain 5: Security, Compliance, and Governance for AI Solutions
Task Statement 5.1: Explain methods to secure AI systems.
Objectives:
- Identify AWS services and features to secure AI systems (for example, IAM roles, policies, and permissions; encryption; Amazon Macie; AWS PrivateLink; AWS shared responsibility model).
- Understand the concept of source citation and documenting data origins (for example, data lineage, data cataloging, SageMaker Model Cards).
- Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity).
- Understand security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit).
Task Statement 5.2: Recognize governance and compliance regulations for AI systems.
Objectives:
- Identify regulatory compliance standards for AI systems (for example, International Organization for Standardization [ISO], System and Organization Controls [SOC], algorithm accountability laws).
- Identify AWS services and features to assist with governance and regulation compliance (for example, AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor).
- Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention).
- Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements).
AWS Certified AI Practitioner: FAQs
AWS Exam Policy
Amazon Web Services (AWS) establishes clear rules and procedures for its certification exams. These guidelines address multiple facets of exam preparation and certification. Key policies include:
Retake Policy
If you do not pass an exam, you must wait 14 calendar days before you can retake it. There is no limit on the number of attempts, but you will need to pay the full registration fee for each try. After passing an exam, you cannot retake the same exam for two years. However, if the exam has been updated with a new exam guide and exam series code, you will be eligible to take the updated version.
Exam Results
The AWS Certified AI Practitioner (AIF-C01) exam is evaluated with a pass or fail designation. Scoring is based on a minimum standard set by AWS professionals adhering to certification industry best practices and guidelines. Your exam results are presented as a scaled score ranging from 100 to 1,000, with a minimum passing score of 700. This score reflects your overall performance on the exam and indicates whether you passed. Scaled scoring models ensure that scores are comparable across different exam forms that may vary slightly in difficulty.
AWS Certified AI Practitioner Exam Study Guide
1. Understand the Exam Guide
Utilizing the AWS Certified AI Practitioner exam guide is essential for effective exam preparation. This guide provides a comprehensive overview of the exam structure, including the weightings of different content domains and specific task statements. By reviewing these sections, candidates can identify key areas of focus and allocate their study time accordingly. Additionally, the guide offers insights into the types of questions that may appear on the exam, helping candidates familiarize themselves with the format and improve their test-taking strategies. Leveraging this resource can significantly enhance your understanding of AI and machine learning concepts as they relate to AWS, ultimately boosting your confidence and readiness for the certification exam.
2. Use AWS Training Live on Twitch
Experience free, live, and on-demand training through the AWS Twitch channel. Engage with AWS experts during live broadcasts covering a variety of topics related to AWS services and solutions. These interactive sessions provide a unique opportunity to ask questions in real time and gain insights from industry professionals. In addition to the live shows, you can connect with a vibrant community of learners and AWS enthusiasts, sharing knowledge and experiences. If you miss a live session, the channel also offers a selection of on-demand training resources that you can access at your convenience.
3. Exam Prep: AWS Certified AI Practitioner (AIF-C01)
Receive comprehensive guidance from the beginning of your journey to becoming an AWS Certified AI Practitioner. Maximize your study time with AWS Skill Builder’s four-step exam preparation process, designed for seamless learning whenever and wherever you need it. This exam certifies your knowledge of in-demand concepts and applications in artificial intelligence (AI), machine learning (ML), and generative AI.
4. Join Study Groups
Participating in study groups provides a dynamic and collaborative approach to preparing for the AWS Certified AI Practitioner exam. By joining these groups, you connect with a community of individuals who are also navigating the complexities of AWS certifications. Engaging in discussions, sharing experiences, and tackling challenges together can offer valuable insights and deepen your understanding of essential concepts. Study groups offer a supportive atmosphere where members can clarify doubts, exchange tips, and maintain motivation throughout their certification journey. This collaborative learning experience not only enhances your grasp of AWS technologies but also builds a sense of camaraderie among peers who share similar goals.
5. Use Practice Tests
Using practice tests for the AWS Certified AI Practitioner exam in your study strategy is crucial for exam success. These practice tests simulate the actual exam environment, enabling you to evaluate your knowledge, pinpoint areas for improvement, and become familiar with the types of questions you might encounter. Regularly taking practice tests helps build confidence, enhances your time-management skills, and ensures you are well-prepared for the specific challenges associated with AWS certification exams. By combining the benefits of study groups with practice tests, you create a comprehensive and effective approach to mastering AWS technologies and achieving your certification.