SecAI+ Practice Test (V1)

Question 1

After implementing an AI loan agent, a financial organization discovers that the agent issues unfair loan approvals that disadvantage a specific group. Which of the following is the source of this risk?

A. Data leakage

B. Introduction of bias

C. Use of open-source libraries

D. Autonomous systems

Question 2

Employees report that an AI writing assistant for an email client is generating responses that appear to contain text found in other employees' email messages. Which of the following will most likely address the issue?

A. Providing access to additional models

B. Removing access from the email client

C. Migrating the assistant to another server

D. Configuring permissions that restrict data access

Question 3

A disgruntled employee changed the company policies that a chatbot references in order to create confusion and disrupt the business. Which of the following AI-generated content vulnerabilities is the employee exploiting?

A. Data reduction

B. Data masking

C. Data poisoning

D. Data leaking

Question 4

A financial organization implements a new AI-based fraud detection system to flag suspicious transactions. A security analyst discovers that it occasionally blocks legitimate transactions. Which of the following is the best recommendation?

A. Retraining the model with more data and recent transaction patterns

B. Implementing AI token usage and rate limits

C. Encrypting all the data processed by AI and applying further access controls

D. Rolling back the model and using a traditional fraud detection system

Question 5

A company implements an AI tool to train its staff members on interview procedures. The staff members report that the chatbot exposes the interviewer and interviewee names that were contained in the AI training data sets. Which of the following should have been implemented to prevent this type of error?

A. Salting

B. Hashing

C. Minimization

D. Anonymization

Question 6

Which of the following is the best example of reinforcement learning strengthening an organization's cybersecurity defensive capabilities?

A. Using AI to cluster network events based on unlabeled behavioral patterns

B. Using AI to classify malware samples based on labeled features

C. Using AI to optimize intrusion detection thresholds based on reward feedback

D. Using AI to summarize phishing alerts based on natural language processing models

Question 7

The performance of an implemented AI solution drops due to data/model drift. Which of the following is the best way to maintain the expected performance over time?

A. Machine learning operations (MLOps)

B. Security information and event management (SIEM)

C. Retraining

D. Fine-tuning

Question 8

A financial organization is using a new AI product for fraud detection activities. The AI security team has concerns about potential attacks conducted by a known cyber group targeting financial AI models. Which of the following is the most appropriate response to harden the model?

A. Signature matching

B. Differential privacy

C. Adversarial training

D. Federated clustering

Question 9

Which of the following is needed for compliance in a small startup organization that develops AI applications?

A. Network security architecture

B. Data security and privacy

C. Neural network design

D. Data science

Question 10

A company adopts AI in its intrusion detection system (IDS). The security team raises concerns about the AI model's decision-making process. Which of the following principles best addresses these concerns?

A. Inclusiveness

B. Transparency

C. Consistency

D. Explainability

Answer key

Question 1: B (Introduction of bias)
Question 2: D (Configuring permissions that restrict data access)
Question 3: C (Data poisoning)
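
A lightweight mitigation for this kind of tampering is integrity verification of the documents the chatbot references. The sketch below (hypothetical document names; Python standard library only) records a SHA-256 digest of each policy at publication time and flags any document whose current contents no longer match:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a SHA-256 hex digest of a policy document's contents."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def verify_policies(documents: dict, trusted_digests: dict) -> list:
    """Return the names of documents whose current contents no longer
    match their trusted, previously recorded digests."""
    return [name for name, text in documents.items()
            if fingerprint(text) != trusted_digests.get(name)]

# Record digests when policies are published, then re-check before serving.
trusted = {"leave_policy": fingerprint("Employees accrue 20 days of leave.")}
current = {"leave_policy": "Employees accrue 0 days of leave."}  # tampered
```

Any flagged document can then be quarantined before the chatbot cites it.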
Question 4: A (Retraining the model with more data and recent transaction patterns)
Question 5: D (Anonymization)
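
One way to prevent this leak is to anonymize personal names before the transcripts enter the training set. A minimal sketch follows; the roster of names is a hypothetical input, and a real pipeline would pair this with automated entity detection rather than a fixed list:

```python
import re

# Hypothetical roster of names known to appear in the training transcripts.
KNOWN_NAMES = ["Alice Nguyen", "Bob Ortiz"]

def anonymize(text: str, names=KNOWN_NAMES) -> str:
    """Replace each known name with a stable role-based placeholder so the
    model never sees (and therefore cannot leak) real identities."""
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[PERSON_{i}]", text)
    return text
```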
Question 6: C (Using AI to optimize intrusion detection thresholds based on reward feedback)
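
The reward-feedback idea in option C can be sketched as an epsilon-greedy bandit that tunes an IDS alert threshold. The environment and reward function below are toy assumptions, not a real IDS API:

```python
import random

def tune_threshold(reward_fn, candidates, episodes=1000, epsilon=0.15, seed=0):
    """Epsilon-greedy bandit: each episode picks a candidate IDS alert
    threshold, observes a reward (e.g. positive for caught intrusions,
    negative for false positives), and keeps a running value estimate."""
    rng = random.Random(seed)
    values = {c: 0.0 for c in candidates}
    counts = {c: 0 for c in candidates}
    for _ in range(episodes):
        if rng.random() < epsilon:
            choice = rng.choice(candidates)          # explore
        else:
            choice = max(candidates, key=lambda c: values[c])  # exploit
        reward = reward_fn(choice, rng)
        counts[choice] += 1
        values[choice] += (reward - values[choice]) / counts[choice]
    return max(candidates, key=lambda c: values[c])

# Toy environment: threshold 0.7 gives the best detection/false-positive
# trade-off, with a little noise in the observed reward.
def toy_reward(threshold, rng):
    return 1.0 - abs(threshold - 0.7) + rng.uniform(-0.05, 0.05)
```

Contrast this with options A and B, which describe unsupervised and supervised learning respectively: neither involves a reward signal.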
Question 7: A (Machine learning operations (MLOps))
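
MLOps pipelines catch drift like this through automated monitoring, which then triggers retraining (option C) as one step of a managed lifecycle. A minimal, statistics-only sketch of one such check; the three-standard-deviation threshold is an illustrative choice:

```python
from statistics import mean, stdev

def drift_score(baseline, live):
    """Standardized mean shift between a training-time feature sample and
    a live window: |mean_live - mean_base| / std_base."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def needs_retraining(baseline, live, threshold=3.0):
    """Flag drift when the live mean sits more than `threshold` baseline
    standard deviations away from the training-time mean."""
    return drift_score(baseline, live) > threshold
```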
Question 8: C (Adversarial training)
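
Option C can be illustrated with a toy FGSM-style loop on logistic regression (NumPy only; the data, model, and epsilon are illustrative, not a production recipe): each epoch augments the batch with worst-case perturbed copies of the inputs so the model learns to classify those too.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, epsilon):
    """FGSM-style attack for logistic regression: move each input a step
    of size epsilon along the sign of the loss gradient w.r.t. x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # predicted P(y=1)
    grad_x = (p - y)[:, None] * w[None, :]    # d(log loss)/dx
    return x + epsilon * np.sign(grad_x)

def train(x, y, epochs=200, lr=0.5, epsilon=None):
    """Logistic regression via gradient descent; if epsilon is set, each
    epoch also trains on adversarially perturbed copies of the batch."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        xs, ys = x, y
        if epsilon is not None:
            xs = np.vstack([x, fgsm_perturb(x, y, w, b, epsilon)])
            ys = np.concatenate([y, y])
        p = 1.0 / (1.0 + np.exp(-(xs @ w + b)))
        w -= lr * (xs.T @ (p - ys)) / len(ys)
        b -= lr * np.mean(p - ys)
    return w, b

def accuracy(x, y, w, b):
    return float(np.mean(((x @ w + b) > 0) == (y == 1)))
```

The hardened model keeps classifying correctly even when the inputs are perturbed at the same attack strength it trained against.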
Question 9: B (Data security and privacy)
Question 10: D (Explainability)