Domain 5 Quiz: Identity and Access Management (IAM)
100 Questions · Authentication · Access Control · PAM · Identity Lifecycle
100 Questions · 5 Topic Areas · Difficulty Range: Easy–Hard
Authentication Factors & MFA
Which authentication factor category does a fingerprint scan belong to?
Correct Answer: C
Biometrics (fingerprint, iris, retina, facial recognition, voice) are Type 3 — "something you are." Type 1 = passwords/PINs, Type 2 = tokens/smart cards/OTPs, Type 4 (location) is sometimes cited but is not a classic CISSP factor category.
CISSP mindset: The three primary factor types are the foundation. Memorize which category each credential type falls into before tackling MFA questions.
Partner E Lhuillier's mobile app requires the user to enter an MPIN (Mobile PIN) and then scan their fingerprint. A security auditor claims this is true Multi-Factor Authentication. Is the auditor correct?
Correct Answer: B
MPIN is Type 1 (something you know) and fingerprint is Type 3 (something you are). Because they belong to two different factor categories, this IS true MFA. The distractor in answer C is wrong — factor type is not determined by storage location. Answer D is wrong because two-step verification using the same factor type (e.g., password + security question) is NOT MFA.
FinTech Company X context: The Partner E MPIN + biometric combination is genuine MFA precisely because the two factors are from different categories. Contrast with SMS OTP + registered phone number, which are both Type 2.
A bank requires users to provide a password AND answer a secret security question before logging in. Which statement best describes this authentication scheme?
Correct Answer: C
Both a password and a security question answer fall under Type 1 — something you know. Requiring two Type 1 factors is two-step verification, not multi-factor authentication. True MFA requires factors from at least two different categories. NIST SP 800-63B has deprecated security questions as a valid authenticator, but the primary CISSP reason this fails as MFA is the same factor type.
Key trap: "multi-step" does not equal "multi-factor." CISSP always tests this distinction.
A hospital is selecting a biometric system for access to its data center. The system must minimize the risk of unauthorized entry above all other concerns. Which biometric error rate should be minimized?
Correct Answer: B
FAR (False Acceptance Rate) measures how often the system incorrectly grants access to an unauthorized user. For a high-security environment like a hospital data center, minimizing FAR (unauthorized entry) is paramount. A lower FAR typically increases FRR, meaning legitimate users may be rejected more often — an acceptable trade-off for security. CER/EER is used to compare biometric systems on overall accuracy, not to optimize for security specifically.
CISSP mindset: When the question says "minimize unauthorized access," think FAR. When it says "minimize inconvenience to legitimate users," think FRR. CER is for comparison only.
System A has a CER of 3% and System B has a CER of 7%. Which system is more accurate, and why?
Correct Answer: B
The Crossover Error Rate (CER), also called Equal Error Rate (EER), is the point where FAR equals FRR. A lower CER means the system achieves the crossover point with fewer total errors, indicating a more accurate and better-performing biometric system. CER is the primary metric for comparing biometric systems — lower is always better overall.
Memory trick: CER = quality benchmark. Lower CER = better biometric accuracy. It does not mean FAR is lower at all sensitivity settings — only at the specific crossover point.
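The FAR/FRR trade-off and the crossover point can be made concrete with a small computation. This is an illustrative sketch with made-up match scores, not data from any real biometric sensor:

```python
# Sketch: computing FAR and FRR from match scores at a chosen threshold.
# Scores and thresholds below are illustrative only.

def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR = fraction of impostor attempts accepted (score >= threshold).
       FRR = fraction of genuine attempts rejected (score < threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

impostors = [0.1, 0.2, 0.4, 0.6, 0.3]   # unauthorized users' match scores
genuines  = [0.5, 0.7, 0.8, 0.9, 0.6]   # legitimate users' match scores

# Raising the threshold lowers FAR but raises FRR -- the trade-off in the text.
assert far_frr(impostors, genuines, 0.50) == (0.2, 0.0)  # lenient: some impostors pass
assert far_frr(impostors, genuines, 0.75) == (0.0, 0.6)  # strict: legit users rejected
```

Sweeping the threshold and finding where FAR equals FRR gives the CER for the system being compared.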
An attacker prints a high-resolution photograph of a registered user and holds it in front of the facial recognition camera to gain access. What control would specifically prevent this attack?
Correct Answer: B
The described attack is a "presentation attack" or "spoofing attack" where a static artifact (photo, silicone mask) is used to fool the biometric sensor. Liveness detection — techniques that verify the biometric comes from a living person (e.g., blink detection, 3D depth mapping, random gesture prompts) — is the specific control that defeats presentation attacks. Higher resolution or FAR thresholds do not distinguish between a live face and a photo.
FinTech Company X context: Liveness detection is critical for Platform C's facial recognition during loan onboarding. Without it, a photo of the customer could be used to fraudulently complete eKYC.
According to NIST SP 800-63B, which password policy practice is considered outdated and should be DISCONTINUED?
Correct Answer: C
NIST SP 800-63B revised guidance states that organizations should NOT require arbitrary periodic password changes unless there is evidence of compromise. Forced rotation leads to predictable patterns (Password1! → Password2!) and user frustration. NIST recommends: minimum 8 chars (A is correct practice), screening against breach databases (B is correct), and allowing all printable ASCII including spaces (D is correct per NIST).
CISSP mindset: NIST 800-63B is the modern standard. Exam questions may present old "best practices" (90-day rotation, complexity rules) as correct — know that NIST has deprecated them.
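The NIST 800-63B guidance above translates to a very simple validator: enforce a length floor, screen against known-breached values, and impose no composition or expiry rules. A minimal sketch, with a tiny illustrative blocklist standing in for a real breach corpus:

```python
# Sketch of a NIST SP 800-63B-style password check. The blocklist is a
# tiny illustrative set, not a real breach-database lookup.

BREACHED = {"password1!", "qwerty123", "letmein"}

def acceptable(password: str) -> bool:
    if len(password) < 8:                 # NIST minimum length
        return False
    if password.lower() in BREACHED:      # screen against known-breached values
        return False
    return True                           # no complexity rules, no forced rotation

assert not acceptable("short")
assert not acceptable("Password1!")                    # breached value rejected
assert acceptable("correct horse battery staple")      # spaces allowed per NIST
```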
A developer is implementing passwordless authentication using FIDO2/WebAuthn. Which statement correctly describes how FIDO2 protects against phishing?
Correct Answer: B
FIDO2/WebAuthn is inherently phishing-resistant because the credential (public/private key pair) is bound to the specific relying party origin (e.g., bank.com). A phishing site (e.g., b4nk.com) would present a different origin, and the authenticator would refuse to sign — the credential literally cannot be used on a different domain. This is fundamentally different from passwords or OTPs, which can be relayed in real time by a phishing proxy.
Key differentiator: FIDO2 eliminates shared secrets entirely. The private key never leaves the authenticator device, and origin binding prevents cross-site credential reuse.
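The origin-binding check can be sketched as follows. This is a simplified illustration of the relying party's comparison of the origin inside the authenticator-returned `clientDataJSON` against its own origin; the domain names are hypothetical, and a real server would use a maintained WebAuthn library and also verify the signature and challenge:

```python
# Sketch: the WebAuthn origin check that defeats phishing. Origins are
# illustrative; signature/challenge verification is omitted for brevity.
import json

EXPECTED_ORIGIN = "https://bank.example"   # assumed relying-party origin

def origin_ok(client_data_json: bytes) -> bool:
    data = json.loads(client_data_json)
    return data.get("origin") == EXPECTED_ORIGIN

legit = json.dumps({"type": "webauthn.get", "origin": "https://bank.example"}).encode()
phish = json.dumps({"type": "webauthn.get", "origin": "https://b4nk.example"}).encode()
assert origin_ok(legit)
assert not origin_ok(phish)   # credential created for bank.example fails on the phish origin
```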
Which combination represents TRUE multi-factor authentication using three different factor categories?
Correct Answer: B
B uses all three factor types: Smart card (Type 2 — something you have) + Password (Type 1 — something you know) + Fingerprint (Type 3 — something you are). A uses three Type 1 factors. C uses three Type 2 factors (all are possession-based OTPs). D also uses three Type 2 factors (all token variants).
Always classify each factor by category, not by the number of steps. Three steps of the same type = multi-step, not multi-factor.
Platform C's OTP service returns the same "Invalid OTP" error message whether the phone number does not exist in the system or the OTP code is incorrect. From an IAM perspective, why is this the correct behavior?
Correct Answer: B
Returning different error messages for "account not found" vs. "wrong password/OTP" leaks information that lets attackers enumerate valid accounts. By returning the same generic error for both conditions, the system prevents user/account enumeration attacks. This is a standard OWASP and NIST 800-63B recommendation. The other answers are wrong: GDPR data minimization applies to data collection, not error messages; and generic errors actually inconvenience users rather than simplifying the flow.
FinTech Company X context: Platform C's anti-enumeration error response is a deliberate security design. This pattern is required for any public-facing authentication endpoint.
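The anti-enumeration pattern is simply "one failure path, one message." A minimal sketch, with an in-memory dict standing in for the real user store (phone numbers and OTPs are invented):

```python
# Sketch of the anti-enumeration pattern: unknown account and wrong OTP
# return the identical response. In production, response timing should be
# uniform as well, to avoid a timing side channel.

USERS = {"+84900000001": "123456"}           # phone -> current OTP (illustrative)
GENERIC_ERROR = {"error": "Invalid OTP"}     # same body for every failure mode

def verify_otp(phone, otp):
    expected = USERS.get(phone)
    if expected is None or otp != expected:  # unknown user OR wrong code
        return GENERIC_ERROR                 # indistinguishable to the caller
    return {"status": "ok"}

assert verify_otp("+84900000001", "000000") == GENERIC_ERROR  # wrong OTP
assert verify_otp("+84999999999", "123456") == GENERIC_ERROR  # unknown number
assert verify_otp("+84900000001", "123456") == {"status": "ok"}
```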
An organization wants to deploy biometric authentication at its main entrance. They need the system to accommodate employees who may have minor cuts or injuries that could affect readings. Which biometric modality is MOST tolerant of minor physical changes to the capture surface?
Correct Answer: C
Iris recognition is highly stable and consistent — the iris pattern does not change with age and is not affected by minor injuries to hands or face. Fingerprints can be affected by cuts, burns, or dirt (high FRR risk). Retina scanning requires close proximity to the reader and is affected by certain medical conditions. Facial recognition can be disrupted by injuries, masks, or lighting. Iris recognition provides an excellent balance of accuracy, stability, and hygiene.
Biometric selection: Match the modality to the operational context. Injuries to hands → avoid fingerprints. Close-contact devices unacceptable → avoid retina. Lighting variability → avoid facial recognition.
Platform C enforces single-device binding — a user can only have one active device registered at a time. If the user registers a new device, the old device is automatically deregistered. What primary security goal does this control achieve?
Correct Answer: C
Single-device enforcement ensures that only one trusted device is bound to each account at any time. This prevents credential sharing (giving access to a friend/family member), reduces the attack surface by limiting the number of valid authenticators, and ensures that if a device is compromised, reregistering immediately invalidates the old device. The control is primarily about account exclusivity and preventing parallel unauthorized sessions, not SIM-swapping or FIDO2 compliance.
FinTech Company X context: For micro-lending apps like Platform C, single-device enforcement also provides a forensic trail — all actions can be attributed to the known device, supporting non-repudiation.
A security badge with an embedded chip that must be physically tapped against a reader belongs to which authentication factor category?
Correct Answer: B
A physical badge, smart card, hardware token, or key fob is a possession-based factor — Type 2, something you have. The key characteristic: the authenticator is a physical object that can be lost, stolen, or transferred.
Type 2 items can be transferred (unlike Type 3 biometrics) — this is why MFA typically combines Type 2 with another factor.
A user authenticates to their bank by entering their mobile phone number and then entering an OTP sent via SMS to that same phone number. A security consultant argues this is NOT true MFA. Which is the best justification for this position?
Correct Answer: B
The phone number (which identifies the account) and the SMS OTP (delivered to that phone) both depend on the same possession — the mobile device. Both are Type 2 factors. True MFA requires factors from different categories. NIST 800-63B does NOT fully deprecate SMS OTP — it restricts it to specific assurance levels but it remains usable. The consultant's argument is about factor category, not SMS security.
Classic CISSP trap: Phone number + SMS OTP looks like two steps but is actually one factor category (Type 2). Know this pattern cold.
According to NIST SP 800-63B, when SHOULD an organization require a user to change their password?
Correct Answer: C
NIST SP 800-63B (Section 5.1.1) explicitly states that verifiers SHOULD NOT require password changes except when there is evidence of compromise. Arbitrary periodic rotation (90 days, 180 days, annual) is specifically called out as counterproductive by NIST, as users respond by making predictable minor variations. Force a change only when compromise is suspected or confirmed.
This is a frequently tested NIST reversal. The exam may present the old "90-day rule" as the correct answer — trust NIST 800-63B over older practices.
The MOST important security consideration during the biometric enrollment (registration) process is:
Correct Answer: B
The biometric system is only as trustworthy as the identity established at enrollment. If an impostor enrolls their biometric under another person's identity, the system will correctly authenticate the impostor as the victim forever. Identity proofing during enrollment — verifying government ID, in-person verification, etc. — is the foundational security requirement. Technical controls (encryption, resolution, multiple samples) are important but secondary to the identity assurance of who is being enrolled.
CISSP mindset: Garbage in, garbage out. The best biometric sensor is useless if the wrong person enrolled under the target identity.
A user normally logs in from Vietnam during business hours. One morning, a login attempt comes from an IP address in Eastern Europe at 3 AM. The system flags this for additional verification. This is an example of:
Correct Answer: B
Adaptive or risk-based authentication dynamically adjusts the authentication requirements based on contextual risk signals — unusual location, time of day, device fingerprint, velocity of logins. When risk is elevated (anomalous login pattern), the system steps up to require additional authentication. This is different from always-on MFA (two-factor authentication), MAC policy enforcement, or continuous authentication (monitoring throughout a session).
Adaptive authentication is closely related to ABAC and context-dependent access — the system uses context attributes to make real-time decisions.
In the FIDO2/WebAuthn framework, what is the role of the "authenticator attestation" during registration?
Correct Answer: B
Authenticator attestation in FIDO2 allows the relying party (e.g., a bank's server) to cryptographically verify the make, model, and certification level of the authenticator being registered (e.g., YubiKey 5, Apple Face ID, etc.). This lets high-security relying parties enforce policies — e.g., "only FIPS 140-2 certified hardware authenticators are accepted." Biometric templates never leave the authenticator device in FIDO2; they are never sent to the server.
FIDO2 key principle: biometrics stay on the device. The server only ever stores the public key. Attestation is about device trust, not user identity proofing.
Which password storage technique provides the STRONGEST protection against offline password cracking attacks if the database is stolen?
Correct Answer: C
Salted adaptive hashing (bcrypt, scrypt, Argon2) is the current best practice for password storage. "Adaptive" means the cost factor can be increased as hardware improves. The salt prevents rainbow table attacks. Argon2 (winner of the Password Hashing Competition) was designed specifically to be memory-hard, resisting GPU/ASIC cracking. AES encryption (A) is reversible — if the key is stolen, all passwords are exposed. MD5 (B) and SHA-256 (D) are fast hashes with no built-in work factor, making them highly susceptible to GPU-accelerated brute force.
Key: Encryption is reversible. Fast hashes (MD5/SHA) are crackable at billions of guesses/second. Adaptive hashing (bcrypt/Argon2) is designed to be computationally expensive by design.
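Salted adaptive hashing can be demonstrated with Python's standard-library scrypt (bcrypt and Argon2 require third-party packages). The cost parameters below are illustrative and should be tuned upward for production:

```python
# Sketch: salted adaptive password hashing with stdlib scrypt.
# n/r/p are the tunable ("adaptive") work factors; values are illustrative.
import hashlib, hmac, os

PARAMS = dict(n=2**14, r=8, p=1, maxmem=2**26)

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                       # unique salt defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, **PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, **PARAMS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("Tr0ub4dor&3", salt, digest)
```

Note that verification recomputes the expensive hash; that cost is exactly what slows an offline attacker to a crawl.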
The Platform C platform uses JWTs signed with RS256 (RSA signature with SHA-256). Which key does the server use to SIGN the JWT, and which key does the relying party (e.g., a lender API) use to VERIFY it?
Correct Answer: B
In asymmetric cryptography used for JWT signatures (RS256), the issuer (Platform C authorization server) signs the token with its RSA PRIVATE key. Any relying party can verify the token's authenticity using Platform C's RSA PUBLIC key (obtained from Platform C's JWKS endpoint). This ensures only Platform C can issue valid tokens (private key is secret), but anyone with the public key can verify them. This is the opposite of encryption (where you encrypt with the recipient's public key).
FinTech Company X context: Platform C publishes its public key at a JWKS (JSON Web Key Set) endpoint. Lender systems fetch this to validate tokens without needing Platform C's private key — enabling secure federated trust.
SSO & Federation
In Kerberos authentication, what is the role of the Key Distribution Center (KDC)?
Correct Answer: B
The Kerberos KDC consists of two logical components: the Authentication Server (AS), which issues the Ticket Granting Ticket (TGT) after verifying the user's credentials; and the Ticket Granting Server (TGS), which issues service tickets in exchange for a valid TGT. The KDC is a trusted third party — it never passes plaintext passwords on the network, and it does not encrypt all traffic or manage ACLs.
Kerberos flow: Client → AS (get TGT) → TGS (exchange TGT for service ticket) → Resource Server (present service ticket). No passwords on the wire after initial authentication.
What is the PRIMARY architectural weakness of a Kerberos deployment with only a single KDC?
Correct Answer: B
Because all Kerberos authentication flows through the KDC, a single KDC is a single point of failure (SPOF). If it goes down, nobody can log in or access services. Best practice is to deploy multiple KDCs (a primary and one or more replicas/slaves) for high availability. Ticket replay (A) is mitigated by timestamps and authenticators. Symmetric encryption (C) is not inherently weaker — its appropriateness depends on key management. Ticket lifetime (D) is configurable and not a fundamental weakness.
CISSP mindset: Whenever you see centralized authentication infrastructure, think SPOF and redundancy. The KDC is also a high-value target — if compromised, the attacker can forge tickets (Golden Ticket attack).
A company wants to allow users to log in to their SaaS application using their corporate identity provider credentials (SSO), and the protocol must carry the user's identity assertion (who they are). Which protocol is MOST appropriate?
Correct Answer: B
For SSO with identity assertion (authentication), SAML 2.0 and OIDC are the correct protocols. SAML 2.0 is the enterprise-standard XML-based protocol for browser SSO. OIDC is a modern identity layer built on top of OAuth 2.0 that adds authentication (ID tokens). OAuth 2.0 alone (A) is an authorization framework — it grants access to resources but does NOT inherently identify the user (no identity token). RADIUS (C) is for network access authentication. LDAP (D) is a directory protocol, not a federation/SSO protocol.
Critical distinction: OAuth 2.0 = authorization (what you can do). OIDC = authentication (who you are) built on OAuth. SAML = authentication assertion (enterprise SSO). This is heavily tested.
A mobile app wants to access a user's Google Calendar data without requiring the user to share their Google password with the app. Which protocol enables this pattern?
Correct Answer: B
OAuth 2.0 is designed specifically for delegated authorization — allowing a third-party application to access a resource (Google Calendar) on behalf of a user without the user revealing their credentials to the third party. The user authorizes the app at Google, Google issues an access token, and the app uses that token to access the API. SAML (A) is primarily for browser-based SSO authentication flows, not API authorization. Kerberos (C) and LDAP (D) are not designed for this delegated authorization pattern.
OAuth 2.0 scenario keywords: "access data on behalf of," "without sharing password," "delegated access," "access token." These all point to OAuth.
A JWT token header contains {"alg": "none", "typ": "JWT"}. What is the security implication?
Correct Answer: B
The "alg: none" JWT attack is a critical vulnerability. JWTs consist of header.payload.signature. If a server naively trusts the "alg" field in the header and accepts "none" as a valid algorithm, it skips signature verification entirely. An attacker can craft a JWT with any claims (e.g., "role": "admin"), set alg=none, and submit it with an empty signature — and the vulnerable server will accept it as authentic. Servers MUST enforce a whitelist of acceptable algorithms and reject "none" outright.
FinTech Company X context: Platform C's JWT validation must explicitly whitelist "RS256" and reject any other algorithm including "none." Library-level fixes exist but must be configured correctly.
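The allowlist defense can be sketched as a header check that runs before any signature logic. The parsing here is hand-rolled for illustration; production code should use a maintained JWT library pinned to an explicit algorithm list:

```python
# Sketch: rejecting the "alg: none" attack by enforcing an algorithm
# allowlist before signature verification. Hand-rolled for illustration only.
import base64, json

ALLOWED_ALGS = {"RS256"}

def b64url(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

def header_acceptable(token: str) -> bool:
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)           # restore stripped padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    return header.get("alg") in ALLOWED_ALGS             # "none" fails here

forged = b64url({"alg": "none", "typ": "JWT"}) + "." + b64url({"role": "admin"}) + "."
honest = b64url({"alg": "RS256", "typ": "JWT"}) + "." + b64url({"sub": "u1"}) + ".sig"
assert not header_acceptable(forged)   # empty-signature admin token rejected outright
assert header_acceptable(honest)       # proceeds to real RS256 signature verification
```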
A developer claims that because JWT tokens are Base64-encoded, the payload data is secure and cannot be read by unauthorized parties. Is this correct?
Correct Answer: B
Base64 is an encoding scheme, not encryption — it is trivially reversible. A standard signed JWT (JWS — JSON Web Signature) has its payload readable by anyone who holds the token; the signature only proves authenticity, not confidentiality. Sensitive data (PII, secrets) should NEVER be placed in a JWT payload unless it is an encrypted JWT (JWE — JSON Web Encryption). TLS (C) protects data in transit only — the token can be read if stored or logged.
FinTech Company X context: Platform C JWTs should never contain sensitive PII like full loan amounts or personal identifiers in the payload. Use opaque identifiers and fetch data server-side if needed.
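That a signed JWT's payload is readable by anyone holding the token can be shown in a few lines of stdlib Python. The token below is built inline with invented claims purely for illustration:

```python
# Sketch: a signed JWT's payload is merely Base64url-encoded -- no key is
# needed to read it. Claims are invented for illustration.
import base64, json

def b64url_encode(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

token = ".".join([b64url_encode({"alg": "RS256", "typ": "JWT"}),
                  b64url_encode({"sub": "user-42", "loan_amount": 5000}),
                  "signature-goes-here"])

# Any party holding the token can trivially recover the payload:
payload_b64 = token.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)     # restore stripped padding
payload = json.loads(base64.urlsafe_b64decode(payload_b64))
assert payload == {"sub": "user-42", "loan_amount": 5000}   # PII exposed
```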
An organization uses SAML-based SSO. When a new employee first accesses a SaaS application via SSO, an account is automatically created in the SaaS application based on the SAML assertion attributes. This process is called:
Correct Answer: B
Just-In-Time (JIT) provisioning automatically creates a user account in the service provider (SaaS application) the first time a user authenticates via SSO, using attributes from the identity provider's assertion (name, email, role, department). This eliminates the need for pre-provisioning accounts before first use. Identity federation (C) is the broader concept of trusting cross-domain identities; JIT provisioning is one implementation pattern within federation.
JIT provisioning is common in cloud environments — no manual account creation required. However, deprovisioning must still be handled (when the SSO is revoked, the SaaS account should also be deactivated).
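The JIT provisioning flow reduces to "on first authenticated visit, create the account from assertion attributes." A minimal sketch, with invented attribute names standing in for a real SAML attribute mapping and a dict standing in for the SaaS user store:

```python
# Sketch of JIT provisioning: the SP creates a local account the first time
# a federated user arrives. Attribute names are illustrative, not a real
# SAML schema.

accounts: dict[str, dict] = {}   # stand-in for the SaaS application's user store

def on_sso_login(assertion_attrs: dict) -> dict:
    email = assertion_attrs["email"]
    if email not in accounts:                     # first visit: provision just in time
        accounts[email] = {"name": assertion_attrs["name"],
                           "role": assertion_attrs.get("role", "member")}
    return accounts[email]                        # later visits reuse the account

user = on_sso_login({"email": "an.nguyen@example.com", "name": "An Nguyen", "role": "analyst"})
assert accounts["an.nguyen@example.com"]["role"] == "analyst"
assert on_sso_login({"email": "an.nguyen@example.com", "name": "An Nguyen"}) is user  # no duplicate
```

Note what the sketch does not do: deprovision. As the text warns, revoking SSO at the IdP must be paired with deactivating the JIT-created SaaS account.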
An organization is evaluating cloud-based Identity-as-a-Service (IDaaS) solutions (e.g., Okta, Azure AD, Google Identity). Which is the PRIMARY security benefit of a centralized IDaaS for enterprise SSO?
Correct Answer: B
The primary security benefit of a centralized IDaaS is the unified control plane — a single place to enforce MFA policies, conditional access rules, password policies, and capture authentication audit logs across all connected applications. This means enabling or disabling access for an employee affects all apps simultaneously (critical for offboarding). It does not eliminate all on-premises infrastructure (A), encrypt application data (C), or fully eliminate phishing since account takeover at the IdP is still possible (D).
IDaaS trade-off: Centralized = single control point = single point of failure and single point of compromise. The IDaaS becomes a crown jewel target.
An attacker who has compromised a low-privilege domain account requests Kerberos service tickets for high-value service accounts (e.g., SQL Server service account) and then takes those tickets offline to crack. What attack is described, and what is the primary mitigation?
Correct Answer: B
Kerberoasting: any authenticated domain user can request service tickets for any SPN-registered service account. The ticket is encrypted with the service account's password hash. The attacker extracts the ticket and performs offline brute-force/dictionary attacks — no lockout, no detection by default. Mitigations: (1) Use long random passwords (25+ chars) for service accounts — cracking becomes computationally infeasible. (2) Use Group Managed Service Accounts (gMSAs) — 240-byte random passwords rotated automatically by AD. (3) Monitor for unusual TGS requests in SIEM.
Kerberoasting is devastating because it requires no special privileges — any domain user can do it. The fix is about password complexity on the service account, not network-level controls.
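The "long random password" mitigation is easy to operationalize. A sketch using the stdlib `secrets` module (length and alphabet are illustrative; gMSAs automate this entirely and remain the preferred fix):

```python
# Sketch: generating a Kerberoasting-resistant service-account password.
# Length/alphabet are illustrative choices.
import secrets, string

def service_account_password(length: int = 32) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = service_account_password()
assert len(pw) == 32
# ~6.5 bits of entropy per character, so 32 chars is far beyond offline-cracking
# feasibility even at GPU speeds against the ticket's encryption.
```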
Which statement CORRECTLY differentiates SAML 2.0 from OpenID Connect (OIDC)?
Correct Answer: A
SAML 2.0 uses XML-based assertions and is the dominant enterprise SSO protocol, especially for browser-based applications and legacy systems. OIDC is a modern, JSON/JWT-based identity layer built on top of OAuth 2.0, designed for web apps, mobile apps, and APIs. OIDC is generally easier to implement for developers and better suited to mobile contexts. Both are authentication protocols (B is wrong). OIDC IS built on OAuth 2.0 (C) but SAML is not "self-contained" — it also requires IdP/SP trust configuration. They are NOT interchangeable in all scenarios (D).
Enterprise = SAML 2.0. Modern/mobile = OIDC. Both achieve SSO authentication — the choice is about ecosystem and format.
In a federated identity system, Company A (the Identity Provider) and Company B (the Service Provider) establish a trust relationship. What does this trust relationship enable?
Correct Answer: B
Federated identity enables cross-domain authentication — Company A's authenticated identity is trusted by Company B, allowing users to access B's services with their A credentials. The SP (Company B) does not have the user's credentials; it trusts the IdP's (Company A's) assertion. The IdP authenticates but does not make authorization decisions inside the SP (C is wrong). There is no shared credential database (D). The SP cannot modify IdP user accounts (A).
Federation = identity portability across organizational boundaries without password sharing. The key word is "trust" — the SP trusts the IdP's assertion.
Platform C issues per-lender scoped JWT tokens where each token contains a lender_id claim and the token is only valid for API calls to that specific lender's endpoints. Which access control model does this token scoping implement?
Correct Answer: C
Per-lender scoped tokens implement ABAC — the access decision is based on the lender_id attribute embedded in the token being evaluated against the policy "this token may only call endpoints for lender_id=X." ABAC policies evaluate subject attributes (token claims), resource attributes (which lender endpoint), and environmental context to make fine-grained decisions. While roles could also be used (RBAC), the dynamic attribute-based scoping per token is the defining characteristic of ABAC here.
FinTech Company X context: Platform C's per-lender token scoping is a real-world ABAC implementation. The token attribute (lender_id) drives the authorization decision at the API gateway level.
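At its core, the gateway's ABAC decision is a comparison of a subject attribute (the token claim) against a resource attribute (the endpoint's owner). A minimal sketch, with claim and parameter names invented for illustration rather than taken from Platform C's real schema:

```python
# Sketch of the per-lender ABAC check at an API gateway. Claim names
# are illustrative.

def authorize(token_claims: dict, requested_lender_id: str) -> bool:
    # Subject attribute (token claim) vs. resource attribute (endpoint owner)
    return token_claims.get("lender_id") == requested_lender_id

token = {"sub": "partner-api", "lender_id": "lender-07"}
assert authorize(token, "lender-07")        # own endpoints: allowed
assert not authorize(token, "lender-12")    # another lender's endpoints: denied
```

A real policy engine would also evaluate environmental attributes (time, source network, risk score), but the attribute-driven decision shown here is the defining ABAC characteristic.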
What is the PRIMARY security risk introduced by Single Sign-On (SSO)?
Correct Answer: C
The fundamental SSO trade-off: while SSO reduces the attack surface from password reuse and simplifies user experience, it creates a high-value single point of compromise. A stolen SSO session token or compromised IdP credential provides access to every connected application. This is why SSO MUST be protected with strong MFA, short session lifetimes, and anomalous behavior detection. Answers A and B describe SSO's benefits being lost (wrong direction). D is incorrect — conditional access policies allow different requirements per application.
SSO security principle: Protect the IdP with the strongest available controls because it becomes the master key to all connected services.
In OAuth 2.0, what do "scopes" represent?
Correct Answer: B
OAuth 2.0 scopes define the granular permissions being requested and granted. For example, scope=calendar:read authorizes read-only access to the calendar; scope=calendar:write adds write access. The principle of least privilege applies — request only the scopes actually needed. Scopes implement fine-grained authorization delegation and are presented to the user during the consent screen. They are not related to geography, time, or cryptography.
FinTech Company X context: Platform C's per-lender tokens should use narrow scopes — e.g., only the data fields a specific lender is authorized to access, not blanket API access.
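Scope enforcement at the resource server is a subset check: every scope the request needs must appear in the space-delimited list that was actually granted. A minimal sketch with invented scope strings:

```python
# Sketch: least-privilege scope enforcement at a resource server.
# Scope names are illustrative.

def has_scopes(granted: str, required: set[str]) -> bool:
    return required.issubset(granted.split())   # OAuth scopes are space-delimited

granted = "calendar:read profile:read"
assert has_scopes(granted, {"calendar:read"})
assert not has_scopes(granted, {"calendar:write"})   # write was never consented to
```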
In Kerberos, after the user obtains a Ticket Granting Ticket (TGT), what is the purpose of presenting the TGT to the Ticket Granting Server (TGS)?
Correct Answer: B
The TGT is a "proof of prior authentication" — it tells the TGS "I already authenticated with the AS, so I don't need to re-enter my password." The user presents the TGT to the TGS to request a service ticket for a specific resource. The TGS issues the service ticket encrypted with the target service's secret key. The user then presents the service ticket directly to the target service for access. This allows seamless access to multiple services (SSO) with a single initial login.
Kerberos enables SSO precisely because the TGT eliminates re-authentication for each service. Protecting TGT theft (Golden Ticket, Pass-the-Ticket) is critical.
In SAML SSO, which flow has higher security risk and why?
Correct Answer: B
In SP-Initiated SSO, the SP generates an AuthnRequest with a unique ID, and the IdP's assertion must reference that ID (InResponseTo). This binding prevents an attacker from replaying a stolen assertion to a different SP. In IdP-Initiated SSO, no AuthnRequest is generated — the IdP pushes an unsolicited assertion. Without the InResponseTo check, SP implementations may be vulnerable to assertion injection if they accept any valid assertion regardless of whether they requested authentication. Many SPs refuse IdP-initiated SSO for this reason.
Prefer SP-Initiated SAML for stronger replay protection. If IdP-Initiated is required, ensure the SP implements audience restriction and assertion replay checks.
Platform C's lender API tokens have a short expiry (15 minutes) with a refresh token (valid 24 hours) that can be exchanged for a new access token. What security benefit does the SHORT access token lifetime provide?
Correct Answer: B
Short-lived access tokens limit the blast radius of token theft. If an attacker intercepts or steals an access token, they can only use it until it expires — 15 minutes provides a very narrow window. The longer-lived refresh token is stored more securely (e.g., HTTP-only secure cookie, not in localStorage) and is used less frequently — only to obtain new access tokens — reducing its exposure. This separation of concerns (short access token + long refresh token) is the OAuth 2.0 best practice for balancing security and usability.
CISSP principle: Time-limit all credentials. Short token lifetime = limited exposure window. The refresh token is the higher-value credential to protect.
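The exposure-window argument can be sketched with the standard `exp` claim (RFC 7519) and epoch seconds; the 15-minute TTL mirrors the scenario above:

```python
# Sketch: enforcing a short access-token lifetime via the exp claim,
# as a resource server would. Claim names follow RFC 7519.
import time

ACCESS_TOKEN_TTL = 15 * 60   # seconds

def issue_claims(now: float) -> dict:
    return {"iat": int(now), "exp": int(now) + ACCESS_TOKEN_TTL}

def is_valid(claims: dict, now: float) -> bool:
    return now < claims["exp"]           # reject once the window closes

t0 = time.time()
claims = issue_claims(t0)
assert is_valid(claims, t0 + 60)         # one minute in: still accepted
assert not is_valid(claims, t0 + 16*60)  # token stolen and replayed later: dead
```

Refreshing after expiry goes through the refresh-token exchange, which is where revocation and device checks can be applied.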
In a federated identity system using SAML, what is the "circle of trust"?
Correct Answer: B
The "circle of trust" in federated identity (particularly in SAML/Liberty Alliance terminology) refers to the collection of identity providers and service providers that have established mutual trust agreements, share federation metadata, and agree to honor each other's authentication assertions within defined policies. For example, a consortium of universities (eduGAIN) or healthcare organizations may form a circle of trust where each institution's credentials are accepted by all member services.
Trust in federation is explicit and contractual — technical trust (metadata exchange) must be backed by legal/business agreements defining responsibilities and liability.
In OpenID Connect, what is the purpose of the ID Token versus the Access Token?
Correct Answer: B
In OIDC: The ID Token is a JWT containing claims about the authenticated user (sub, name, email, etc.) — it is consumed by the client application to know who the user is (authentication). The Access Token is used to call APIs on behalf of the user — the resource server validates it to decide whether to fulfill the request (authorization). The client should NOT send the ID token to APIs; that is an anti-pattern. The Access Token should NOT be parsed by clients for identity information — use the ID Token for that.
OIDC design: ID Token = authentication artifact for the client. Access Token = authorization artifact for the resource server. Mixing them up is a common implementation mistake.
AS-REP Roasting targets Active Directory accounts where Kerberos pre-authentication is disabled. What does an attacker gain by targeting such accounts, and what is the mitigation?
Correct Answer: B
When Kerberos pre-authentication is disabled for an account (DONT_REQ_PREAUTH flag), the KDC will respond to authentication requests for that account without requiring the requestor to prove knowledge of the password first. The AS-REP response contains encrypted data using the account's password-derived key — an attacker can request this for any such account and then attempt to crack it offline (similar to Kerberoasting but targeting user accounts). Mitigation: ensure Kerberos pre-authentication is required for ALL accounts unless there is a specific technical reason (e.g., legacy Kerberos clients).
Distinguish: Kerberoasting = requesting service tickets for service accounts (requires domain auth). AS-REP Roasting = requesting AS-REP for accounts with pre-auth disabled (requires NO authentication at all).
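A defender can enumerate roastable accounts by testing the DONT_REQ_PREAUTH bit (0x400000) in each account's userAccountControl value; in LDAP this is the bitwise filter `(userAccountControl:1.2.840.113556.1.4.803:=4194304)`. A minimal sketch of the bit test, with a hypothetical directory dump in place of a real LDAP query:

```python
# userAccountControl flag: "Do not require Kerberos preauthentication"
UF_DONT_REQUIRE_PREAUTH = 0x400000  # 4194304

def asrep_roastable(accounts):
    """Return account names whose userAccountControl has pre-auth
    disabled. `accounts` stands in for (sAMAccountName, UAC) pairs
    that a real audit would pull from AD via LDAP."""
    return [name for name, uac in accounts
            if uac & UF_DONT_REQUIRE_PREAUTH]

# Hypothetical directory dump: 512 = NORMAL_ACCOUNT
sample = [("svc_legacy", 512 | UF_DONT_REQUIRE_PREAUTH),
          ("alice", 512),
          ("bob", 512)]
assert asrep_roastable(sample) == ["svc_legacy"]
```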
Access Control Models
A file system where the file owner can grant or revoke read/write/execute permissions to other users at their discretion is an example of which access control model?
Correct Answer: C
Discretionary Access Control (DAC) allows the owner/creator of a resource to determine who has access to it and what level of access they have. Unix/Linux file permissions and Windows NTFS permissions are classic examples. The key word is "discretion" — the owner decides. In MAC, a central authority (system policy/labels) makes access decisions — owners cannot override them. RBAC assigns permissions to roles, not to individual owners. ABAC evaluates attributes.
DAC weakness: if the owner grants too much access (or is social-engineered), data leaks. There is no system-enforced ceiling. This is why MAC is used in high-security environments.
A government intelligence system classifies all data with security labels (Top Secret, Secret, Confidential, Unclassified) and all users have clearance levels. Access is granted only when the user's clearance level is equal to or greater than the data's classification label, and users cannot change access permissions. This is which access control model?
Correct Answer: B
MAC is characterized by: (1) system-enforced labels/classifications on data, (2) clearances assigned to subjects, (3) access decisions made by the system based on label comparison — NOT by the resource owner. Users cannot grant access that exceeds the system policy. This is the Bell-LaPadula model in practice. MAC is used in government and military environments where information flow control is critical. Owners have no discretion over access — hence "mandatory."
MAC keyword checklist: labels + clearances + system enforced + no user discretion = MAC. If the owner can change permissions, it is DAC.
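The label-comparison rule (Bell-LaPadula's simple security property, "no read up") reduces to a single dominance check. A minimal sketch, with an assumed four-level lattice:

```python
# Ordered classification lattice (low -> high)
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_can_read(subject_clearance: str, object_label: str) -> bool:
    """Bell-LaPadula simple security property ("no read up"):
    reading is allowed only if the subject's clearance dominates the
    object's label. The SYSTEM makes this decision, never the owner."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

assert mac_can_read("Secret", "Confidential")      # read down: allowed
assert not mac_can_read("Confidential", "Secret")  # read up: denied
```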
FinTech Company X's internal admin portal assigns permissions based on job functions: "Loan Underwriter," "Risk Analyst," "Compliance Officer," and "Data Engineer" — each with different access rights. Adding a new employee involves assigning them to the appropriate function group, which automatically grants the right permissions. This is which access control model?
Correct Answer: C
Role-Based Access Control (RBAC) maps permissions to roles that represent job functions or responsibilities. Users are assigned to roles — they inherit permissions from the role, not from individual grants. This is exactly what the admin portal uses. Adding a user to "Loan Underwriter" grants all permissions that role has, without needing to specify each permission individually. RBAC is the dominant model for enterprise applications because it scales well, simplifies administration, and enforces least privilege through role design.
FinTech Company X context: RBAC in the admin portal supports audit and access review — you review roles, not individual user-permission pairs. This makes access recertification manageable.
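The role-inheritance mechanic is simple to model. A sketch with illustrative role and permission names (not FinTech Company X's actual role catalog):

```python
# Role -> permission mapping; users inherit permissions only via roles.
ROLE_PERMISSIONS = {
    "Loan Underwriter":   {"view_application", "score_application"},
    "Risk Analyst":       {"view_application", "view_risk_reports"},
    "Compliance Officer": {"view_audit_log", "export_reports"},
}

# Hypothetical assignment: adding a user means adding one role, not
# granting each permission individually.
USER_ROLES = {"maria": {"Loan Underwriter"}}

def rbac_permitted(user: str, permission: str) -> bool:
    """A user holds a permission iff one of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

assert rbac_permitted("maria", "score_application")
assert not rbac_permitted("maria", "view_audit_log")
```

Access review then operates on `ROLE_PERMISSIONS` (a handful of rows) rather than on every user-permission pair.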
A healthcare system grants doctors access to patient records only during their scheduled shift hours, only for patients assigned to their department, and only when accessing from a hospital-approved device. This access control scenario is BEST described as:
Correct Answer: C
ABAC is the correct model when access decisions depend on multiple dynamic attributes evaluated together: subject attributes (doctor, department), resource attributes (patient, record type), action attributes (read vs. write), and environmental/contextual attributes (time, device, location). Pure RBAC would grant access based on role alone — it cannot enforce time-of-day or device restrictions without becoming ABAC. The combination of multiple attribute conditions in a policy is the hallmark of ABAC.
ABAC keyword: "only when [condition1] AND [condition2] AND [condition3]." Multiple simultaneous conditions = ABAC. Single condition on a role = RBAC.
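The "all conditions must hold" pattern is the essence of an ABAC policy. A sketch of the hospital scenario with illustrative attribute names (not any real policy engine's schema):

```python
from datetime import time

def abac_permit(subject: dict, resource: dict, env: dict) -> bool:
    """Every condition must hold simultaneously -- the hallmark of ABAC.
    Subject, resource, and environmental attributes are all evaluated."""
    return (subject["role"] == "doctor"
            and subject["department"] == resource["department"]
            and env["shift_start"] <= env["now"] <= env["shift_end"]
            and env["device_approved"])

decision = abac_permit(
    {"role": "doctor", "department": "cardiology"},
    {"department": "cardiology", "type": "patient_record"},
    {"now": time(10, 0), "shift_start": time(8, 0),
     "shift_end": time(16, 0), "device_approved": True},
)
assert decision  # same doctor, off-shift or on an unapproved device -> deny
```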
An employee has worked in three different departments over five years. Each time they transferred, they received the new department's access rights but never had the previous department's access removed. This is an example of:
Correct Answer: B
Privilege creep (also called access creep or access accumulation) occurs when a user accumulates access rights over time — through role changes, project assignments, or department transfers — without having unnecessary access removed. The result is a user with far more access than their current role requires, violating least privilege. This is detected and corrected through periodic access reviews/recertification. The JML (Joiner-Mover-Leaver) process should specifically revoke old access when an employee moves (Mover).
Privilege creep is one of the most common real-world IAM failures. The fix is not just preventing new grants — it is actively removing old ones when roles change.
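The Mover step reduces to a set difference: whatever the user currently holds minus whatever the new role requires is the revocation list. A sketch with hypothetical entitlement names:

```python
def mover_revocations(current_access: set, new_role_access: set) -> set:
    """JML 'Mover' step: anything the user holds that the new role does
    not require must be revoked, or it accumulates as privilege creep."""
    return current_access - new_role_access

# Hypothetical transfer: Finance -> Marketing
finance_access   = {"erp_post_journal", "erp_view_gl", "shared_drive_fin"}
marketing_access = {"cms_publish", "analytics_view"}

# What creep looks like: new access granted, old access never removed.
held_after_transfer = finance_access | marketing_access

to_revoke = mover_revocations(held_after_transfer, marketing_access)
assert to_revoke == finance_access  # the entire old department's access
```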
Which statement CORRECTLY distinguishes "need-to-know" from "least privilege"?
Correct Answer: A
Need-to-know is about data/information access — a user should only access information they genuinely need to perform their duties, even if their clearance level would technically permit access. Least privilege is about system and functional permissions — accounts, processes, and services should have only the minimum rights required for their function. In practice, they are complementary: need-to-know restricts information access horizontally, while least privilege restricts functional capabilities. MAC systems enforce both through labels and clearances.
Example: A Secret-cleared analyst has clearance to access all Secret data (clearance level satisfied), but need-to-know restricts them to only the specific compartments relevant to their assignment.
An Access Control List (ACL) is attached to a resource and lists which subjects can access it. A capability list (or capability ticket) is held by the subject and lists which resources they can access. What is the PRIMARY practical difference in management?
Correct Answer: B
ACLs (resource-centric): to find out who can access a resource, look at the ACL on that resource. To revoke all access for a user, you must scan every ACL in the system and remove the user's entry — difficult at scale. Capability lists (subject-centric): to see what a user can access, look at their capability list. Revoking a user is easy (delete their list). But finding all users who can access a specific resource requires scanning all capability lists. Neither is inherently more secure (A is wrong). Both can implement DAC or MAC depending on how they're used (C is wrong).
Real-world systems often use ACLs (NTFS, S3 bucket policies) because the question "who can access this file?" is more operationally common than "what can this user access?" (handled by RBAC/ABAC roles).
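The asymmetric revocation cost is easy to demonstrate. A sketch with toy data structures (real systems store these very differently, but the management trade-off is the same):

```python
# Resource-centric ACLs: resource -> set of allowed subjects
acls = {"fileA": {"alice", "bob"}, "fileB": {"alice"}, "fileC": {"carol"}}

def revoke_user_acl(acls: dict, user: str) -> None:
    """Revoking a user under ACLs means scanning EVERY resource's list."""
    for subjects in acls.values():
        subjects.discard(user)

# Subject-centric capability lists: subject -> set of reachable resources
caps = {"alice": {"fileA", "fileB"}, "bob": {"fileA"}, "carol": {"fileC"}}

def revoke_user_caps(caps: dict, user: str) -> None:
    """Under capability lists, revoking a user is a single deletion."""
    caps.pop(user, None)

revoke_user_acl(acls, "alice")   # cost grows with the number of resources
revoke_user_caps(caps, "alice")  # constant cost
assert "alice" not in acls["fileA"] and "alice" not in caps
```

The inverse question flips the cost: "who can read fileA?" is one lookup under ACLs but a full scan under capability lists.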
A large hospital needs access control for its EHR system. Doctors, nurses, and administrators have well-defined, stable job roles that rarely change, and the primary access differentiation is by job function. Which access control model is MOST appropriate?
Correct Answer: C
When roles are stable, well-defined, and map cleanly to job functions, RBAC is the most appropriate and manageable model. The hospital's three roles (Doctor, Nurse, Administrator) each have distinct, predictable access needs. RBAC scales well, simplifies provisioning, and makes audit straightforward. ABAC would be unnecessarily complex for this scenario — use ABAC when multiple dynamic attributes are needed beyond role. The question doesn't indicate time/device/location constraints that would necessitate ABAC.
Selection rule: If "role alone" determines access in a stable organizational structure, use RBAC. If multiple conditions (time, location, data sensitivity, department) all combine to determine access, use ABAC.
XACML (eXtensible Access Control Markup Language) is typically associated with which access control model, and what role does it play?
Correct Answer: B
XACML is the OASIS standard policy language for ABAC implementations. It provides: (1) a request/response protocol (subject, resource, action, environment attributes), (2) a policy language for expressing fine-grained rules, (3) a Policy Decision Point (PDP) architecture that evaluates requests against policies. XACML enables complex, attribute-rich access policies that RBAC alone cannot express. It is the standard reference architecture for enterprise ABAC implementations.
XACML architecture: PAP (Policy Administration Point) creates policies. PDP (Policy Decision Point) evaluates requests. PEP (Policy Enforcement Point) enforces the decision. PIP (Policy Information Point) provides attribute data.
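The component split can be sketched as plain functions. This is a drastic simplification: the names follow the standard, but the "policy" is a hypothetical Python stand-in for XACML's XML policy language.

```python
def pip_lookup(subject):
    """PIP: supplies attribute data the PDP does not already have."""
    return {"clearance": {"alice": "Researcher-PII"}.get(subject)}

def pdp_evaluate(request):
    """PDP: evaluates the request against policy and returns a decision."""
    attrs = pip_lookup(request["subject"])
    if request["action"] == "read" and attrs["clearance"] == "Researcher-PII":
        return "Permit"
    return "Deny"  # no applicable rule -> implicit deny

def pep_enforce(request):
    """PEP: sits in the request path and enforces the PDP's decision."""
    return pdp_evaluate(request) == "Permit"

assert pep_enforce({"subject": "alice", "resource": "loan_data",
                    "action": "read"})
assert not pep_enforce({"subject": "mallory", "resource": "loan_data",
                        "action": "read"})
```

The PAP is the missing fourth piece here: it is wherever an administrator authors and deploys the policy that `pdp_evaluate` hard-codes.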
Which statement CORRECTLY distinguishes content-dependent access control from context-dependent access control?
Correct Answer: A
Content-dependent access control restricts access based on the actual DATA content of a resource — e.g., a database view that only returns records matching the user's department, or a policy that restricts access to records where salary > threshold. Context-dependent access control restricts access based on environmental or situational attributes — time of day, source IP, device health, geolocation. Both are forms of ABAC. Example: a nurse can access patient records (role = content-dependent by patient assignment) only during their shift (time = context-dependent).
Content = WHAT the data contains. Context = WHERE/WHEN/HOW the request is made. ABAC combines both for fine-grained access decisions.
A financial services firm needs a system where access to trade execution is allowed only if: the user has the "Trader" role AND the trade value is below their approved limit AND the market is currently open AND the request originates from a corporate network. Which model best fits?
Correct Answer: C
This scenario has four simultaneous conditions that ALL must be satisfied: role attribute (Trader), resource attribute (trade value vs. limit), environmental attribute (market hours), and network attribute (corporate network). RBAC alone can only evaluate the role; it cannot natively enforce trade value limits, market hours, or network origin restrictions. ABAC policies combine all four attributes in a single policy statement. This is a classic ABAC use case in high-compliance financial systems.
If you see "AND [condition]" chains beyond just role, ABAC is the answer. RBAC is necessary but not sufficient when multiple non-role conditions must all be satisfied.
In an RBAC system, "separation of duties" can be enforced through which mechanism?
Correct Answer: B
In RBAC, separation of duties is enforced through Static Separation of Duty (SSOD) constraints that declare certain roles mutually exclusive — a user cannot hold both simultaneously. For example, a user cannot be assigned both the "Accounts Payable Clerk" role and the "Payment Approver" role — holding both would allow them to create and approve their own payments (a classic fraud scenario). RBAC systems (like those in modern ERP systems) encode these constraints in role definitions.
RBAC SoD implementation: define conflict pairs during role design, not at individual user assignment time. This enforces the control systematically, not ad-hoc.
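Declaring conflict pairs at role-design time can be sketched as a guard on role assignment. Role names are illustrative:

```python
# Mutually exclusive role pairs declared during role design (SSOD).
CONFLICTS = {frozenset({"Accounts Payable Clerk", "Payment Approver"})}

def can_assign(user_roles: set, new_role: str) -> bool:
    """Reject any assignment that would give one user both halves
    of a declared conflict pair -- enforced systematically, not ad hoc."""
    return not any(frozenset({held, new_role}) in CONFLICTS
                   for held in user_roles)

assert can_assign({"Accounts Payable Clerk"}, "Risk Analyst")
assert not can_assign({"Accounts Payable Clerk"}, "Payment Approver")
```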
A startup with 20 employees has three access levels: Admin, Developer, and Read-Only. Which access control model is most practical?
Correct Answer: C
For a small organization with three simple, stable roles, RBAC is both sufficient and most practical. The administrative overhead of ABAC (defining attribute schemas, policy rules, attribute data stores) is disproportionate for this scale. ABAC's value emerges at enterprise scale with complex, dynamic access requirements. MAC is impractical without a formal classification scheme. DAC (owner-managed permissions) tends toward inconsistency and security gaps as the organization grows. RBAC is the right-sized solution here.
CISSP mindset: Match the access control model to the organizational complexity. RBAC is the pragmatic default; ABAC for complex enterprises; MAC for high-security government/military.
FinTech Company X's data platform enforces: "A data scientist may access customer loan data only if they are assigned to the project, the data is anonymized unless their clearance is 'Researcher-PII', and the access occurs within the Philippines data boundary." How many distinct attribute categories are being evaluated?
Correct Answer: C
The policy evaluates three ABAC attribute categories: (1) Subject attributes — project assignment AND clearance level (Researcher-PII); (2) Resource attributes — whether the data is anonymized or contains PII; (3) Environmental attributes — data residency boundary (Philippines). While XACML formally includes "action" as a fourth attribute category, this specific policy does not include an action condition, making C the most accurate answer. This is a real-world ABAC policy for sensitive financial data with regulatory data localization requirements.
FinTech Company X context: Data boundary enforcement (Philippine data stays in the Philippines) is a regulatory ABAC requirement under BSP and NPC (National Privacy Commission) guidelines for financial data.
Which control is the MOST effective at detecting and remediating privilege creep on an ongoing basis?
Correct Answer: B
Periodic access reviews (recertification campaigns) are specifically designed to detect and remediate privilege creep. Managers review each user's current access and certify that it is still needed and appropriate. Users with excess access from previous roles are identified and their access is revoked. RBAC (A) prevents creep if implemented correctly from the start, but does not catch historical accumulation. Password changes (C) do not affect access rights at all. SoD (D) prevents conflicting roles but does not address accumulated excess access.
Access recertification frequency: a common standard is quarterly for privileged accounts and semi-annual for standard accounts. The manager, not IT, certifies, since the manager knows whether the access is still needed for the job.
An e-commerce platform wants to grant customer service representatives access to customer order information, but only for customers who have contacted support in the last 30 days and only during their assigned shift hours. Which model is MOST appropriate?
Correct Answer: B
This scenario requires three simultaneous attribute conditions: (1) User role = Customer Service Rep (subject attribute), (2) Customer last contacted support within 30 days (resource/data attribute — content-dependent), and (3) Current time is within the rep's shift hours (environmental attribute — context-dependent). RBAC alone would either grant access to ALL orders (too broad) or require individual ACL entries per customer (unscalable). ABAC policies elegantly express these multi-dimensional constraints.
Practical ABAC: role provides the baseline ("is a CSR") while additional attributes narrow the scope ("only relevant customers, only during shift"). This is least-privilege through fine-grained attribute control.
Zero Trust Architecture (ZTA) is often described as "never trust, always verify." Which access control principle most closely aligns with Zero Trust?
Correct Answer: C
Zero Trust eliminates the concept of implicit trust based on network location. Instead, every access request is evaluated based on: verified identity (strong authentication), device health/compliance status, behavioral analytics, least-privilege access, and micro-segmentation. This is ABAC applied continuously — every API call, every file access, every session is evaluated against current context. RBAC provides the role-based foundation, but ZTA requires the dynamic, context-aware evaluation of ABAC on top of it.
ZTA is ABAC in practice at internet scale. NIST SP 800-207 defines Zero Trust Architecture. Key principle: assume breach — verify explicitly, use least privilege, assume all networks are hostile.
The principle of "implicit deny" (or default deny) in access control means:
Correct Answer: C
Implicit deny (also called default deny or deny-all-then-permit) is a foundational security principle: unless an explicit allow rule grants access to a specific subject/resource/action combination, access is denied. This is safer than the alternative (allow all by default, explicitly deny threats) because unknown or new resources are automatically protected. Firewall rules, IAM policies (AWS, Azure), and ACLs use this principle — if no rule matches, the default is deny.
Security principle: always start with deny-all, then add minimal permissions. Never start with allow-all and try to restrict — you will inevitably miss something.
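First-match rule evaluation with a deny fall-through captures the principle. A sketch loosely modeled on firewall/IAM semantics (rule fields are illustrative):

```python
# Ordered allow/deny rules; anything unmatched falls through to deny.
RULES = [
    {"subject": "alice", "resource": "reports", "action": "read",
     "effect": "allow"},
    {"subject": "bob", "resource": "reports", "action": "read",
     "effect": "deny"},
]

def evaluate(subject: str, resource: str, action: str) -> str:
    for rule in RULES:
        if (rule["subject"], rule["resource"], rule["action"]) == \
                (subject, resource, action):
            return rule["effect"]
    return "deny"  # implicit deny: no matching rule means no access

assert evaluate("alice", "reports", "read") == "allow"
assert evaluate("carol", "reports", "read") == "deny"  # never mentioned
```

"carol" appears in no rule at all, yet is denied: new or unknown subjects and resources are protected by default.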
A government agency needs to enforce access based on: clearance level, project compartment classification, citizenship status, need-to-know determination, and current security alert level. Which model is MOST appropriate?
Correct Answer: C
Five simultaneous conditions all affecting the access decision = ABAC. Pure MAC handles clearance + classification labels but not compartments + citizenship + need-to-know + dynamic alert level in a unified policy. Pure RBAC cannot express attribute combinations without creating an explosion of roles (one per combination). ABAC (implemented through XACML or similar) can express a single policy: "Grant access IF clearance ≥ classification AND compartment matches AND citizenship = US AND need-to-know certified AND alert level ≤ YELLOW." Modern government systems implement MAC + ABAC hybrid models.
Real-world: US DoD systems use MAC for classification labels AND ABAC for compartments, citizenship, and context. They are complementary, not mutually exclusive.
Platform C's admin portal has a "Loan Officer" role. However, loan officers at Partner Lender A should only view their own lender's applications, while loan officers at Partner Lender B should only view Lender B's applications — the role alone is insufficient to determine access. What hybrid approach is most appropriate?
Correct Answer: B
The RBAC + ABAC hybrid is the correct and scalable approach. RBAC defines the role (Loan Officer = can view loan applications), while ABAC adds the data scoping attribute (lender_id in the token must match the application's lender_id). Option A creates role explosion — as lenders grow to 10, 20, 50, you have 50 roles. Option B keeps one "Loan Officer" role and adds a dynamic attribute filter. This is exactly how Platform C's per-lender scoped tokens work — the JWT carries the lender_id attribute that the authorization layer enforces.
FinTech Company X context: RBAC + ABAC hybrid is the production pattern in Platform C. Role = coarse authorization, attribute = fine-grained data isolation. This prevents cross-lender data leakage at the access control layer.
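The hybrid check is two conditions: a coarse RBAC gate on the role claim, then a fine-grained ABAC match on the lender scope. Claim names here are illustrative, not Platform C's actual token schema:

```python
def authorize_view(token_claims: dict, application: dict) -> bool:
    """Hybrid authorization: RBAC gate (role) plus ABAC data scoping
    (the token's lender_id must match the application's lender_id)."""
    return (token_claims.get("role") == "Loan Officer"        # coarse RBAC
            and token_claims.get("lender_id")
                == application["lender_id"])                  # fine ABAC

lender_a_officer = {"role": "Loan Officer", "lender_id": "lender-A"}
app_a = {"id": "app-001", "lender_id": "lender-A"}
app_b = {"id": "app-002", "lender_id": "lender-B"}

assert authorize_view(lender_a_officer, app_a)
assert not authorize_view(lender_a_officer, app_b)  # cross-lender blocked
```

One "Loan Officer" role serves every partner lender; the attribute filter, not a per-lender role, provides the data isolation.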
Privileged Access Management (PAM)
A team of five system administrators shares a single "root" account with a known, shared password to access production servers. What is the PRIMARY security problem with this practice?
Correct Answer: B
The PRIMARY problem with shared privileged accounts is the destruction of accountability and non-repudiation. When five people share one account, the audit log shows "root did X" — but which of the five? Forensic investigation, insider threat detection, and compliance reporting all become impossible. Each administrator should have their own named account with full audit trails, using PAM tools to check out privileged credentials or use sudo elevation from their personal account. Least privilege (C) is also violated but is secondary to the accountability failure.
CISSP principle: No shared accounts for privileged access, ever. Individual accountability requires individual accounts. PAM tools enable this at scale.
What is "Just-In-Time (JIT) privileged access" in the context of PAM?
Correct Answer: B
JIT privileged access (also called zero-standing-privilege) eliminates always-on privileged accounts. Instead, a user requests elevated access for a specific purpose (e.g., "patching server X for 2 hours"), the PAM system approves it, grants the elevated access, and automatically revokes it when the time expires or the task is complete. This dramatically reduces the attack surface — there are no standing privileged credentials to steal. PAM tools like CyberArk, BeyondTrust, and HashiCorp Vault enable JIT access workflows.
JIT privilege = zero standing privilege = minimize time-at-risk. If privileged access only exists for 2 hours, an attacker has only a 2-hour window to exploit it.
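The time-boxing mechanic can be sketched as a grant object that carries its own expiry. This is a stand-in for what a PAM product automates (request, approval, revocation), not any real product's API:

```python
from datetime import datetime, timedelta

class JitGrant:
    """Time-boxed elevation: the privilege exists only inside the
    approved window and disappears without manual deprovisioning."""
    def __init__(self, user: str, privilege: str, hours: int):
        self.user = user
        self.privilege = privilege
        self.expires = datetime.utcnow() + timedelta(hours=hours)

    def is_active(self, now=None) -> bool:
        return (now or datetime.utcnow()) < self.expires

# "Patching server X for 2 hours" -- hypothetical approved request.
grant = JitGrant("maria", "sudo on server-X", hours=2)
assert grant.is_active()  # valid within the window
assert not grant.is_active(datetime.utcnow() + timedelta(hours=3))
```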
A "break-glass" (emergency access) account is used when normal PAM access workflows are unavailable. Which is the MOST important security control for break-glass accounts?
Correct Answer: B
Break-glass accounts are necessary for business continuity when PAM tools are unavailable, but they carry extreme risk — they are typically the highest-privileged accounts in the environment. The critical controls are: (1) Comprehensive, immutable logging of every use, (2) Immediate automated alerts to CISO/security leadership when the account is accessed, (3) Mandatory post-use review to verify the emergency was legitimate. The password should be very complex (A is wrong) — stored in a physical safe or sealed envelope. Usage should be extremely rare and always investigated.
Break-glass = emergency override of normal controls. The compensating control is extreme monitoring and accountability. Any unplanned use is a potential incident indicator.
A PAM solution records full video and keystroke logs of all privileged sessions. What security objectives does this control primarily serve?
Correct Answer: B
Privileged session recording serves three primary security functions: (1) Accountability — every action is attributable to a specific named user; (2) Forensics — recordings provide irrefutable evidence for incident investigations ("what exactly did the insider do?"); (3) Deterrence — administrators who know their sessions are recorded are less likely to perform unauthorized actions. This is similar to CCTV footage for physical security. The recording does not prevent commands (it is detective, not preventive) — for preventive control, use command filtering/blocking.
Detective control: Session recording detects and documents. Preventive control: Command filtering/whitelisting prevents. Use both in high-security PAM deployments.
What is a Privileged Access Workstation (PAW), and what threat does it mitigate?
Correct Answer: B
A Privileged Access Workstation (PAW) — also called a Secure Admin Workstation (SAW) — is a dedicated, locked-down device used exclusively for administrative work. By separating privileged sessions from general-use computing (email, web browsing), PAWs prevent privileged credential theft through malware on general-use workstations. Without PAWs, an admin who uses the same laptop to browse the web AND administer servers risks their admin credentials being stolen by malware. PAWs have no email, restricted network access, and are monitored aggressively.
PAW principle: Don't administer high-value systems from potentially compromised endpoints. Separate "clean" admin workspace from "dirty" general-use workspace physically or virtually.
FinTech Company X uses HashiCorp Vault to manage service credentials (database passwords, API keys, TLS certificates). Vault is configured to automatically rotate database credentials after each use. What security problem does auto-rotation solve that manual credential management cannot?
Correct Answer: B
Manual credential management relies on scheduled rotations (e.g., every 90 days) — creating a long window of exposure if credentials are stolen between rotations. Vault's dynamic secrets and post-use rotation mean: (1) credentials are unique per session/use, (2) old credentials are revoked immediately after use or on schedule, (3) stolen credentials become invalid quickly — minimizing the attacker's opportunity window. This effectively implements the principle of least privilege for time (credentials exist only as long as needed). Manual rotation is slow, error-prone, and infrequent.
FinTech Company X context: Vault dynamic secrets are a critical control for Platform C's database and API security. No long-lived static passwords = dramatically reduced credential theft risk.
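The behavior (not the Vault/hvac API itself) can be modeled as a broker that mints a unique credential per checkout and invalidates it on revocation or lease expiry. A toy sketch:

```python
import secrets

class DynamicSecretBroker:
    """Toy model of Vault-style dynamic secrets: each checkout mints a
    unique credential; revocation (post-use or at lease expiry) makes
    any stolen copy worthless. NOT the real Vault API."""
    def __init__(self):
        self.active = set()

    def checkout(self) -> str:
        cred = secrets.token_hex(8)  # unique per session/use
        self.active.add(cred)
        return cred

    def revoke(self, cred: str) -> None:
        self.active.discard(cred)

    def is_valid(self, cred: str) -> bool:
        return cred in self.active

broker = DynamicSecretBroker()
cred = broker.checkout()
assert broker.is_valid(cred)      # usable during the lease
broker.revoke(cred)               # post-use rotation/revocation
assert not broker.is_valid(cred)  # an exfiltrated copy is now useless
```

Contrast with a 90-day static password: there, `is_valid` would return True for up to 90 days after theft.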
An organization's SOC detects an unusual spike in TGS (Ticket Granting Service) requests from a single low-privilege domain account requesting service tickets for dozens of service accounts in a short period. What attack is MOST likely occurring?
Correct Answer: B
The described pattern is the signature of Kerberoasting: a single account making many TGS requests for service accounts in a short time. Normal users request service tickets only for services they actually use. An attacker Kerberoasting will request all service tickets available, then crack them offline. The anomaly is the volume and pattern of TGS requests — high volume, many different service accounts, from a single source account. SOC detection: monitor for TGS request volumes that are statistically anomalous, and alert on accounts requesting tickets for services they have never accessed before.
Kerberoasting detection: SIEM alert on TGS request anomalies. MITRE ATT&CK T1558.003. Prevention: long, random service account passwords or gMSAs with automatic rotation. Detection: behavioral baselining of TGS requests.
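A simple detection heuristic is counting distinct service principals requested per account. A sketch where the event tuples mimic fields parsed from Windows event 4769, and the threshold is illustrative rather than a standard:

```python
def kerberoast_suspects(tgs_events, threshold=10):
    """Flag accounts requesting tickets for an anomalous number of
    DISTINCT SPNs. `tgs_events` stands in for (requesting_account,
    target_spn) pairs parsed from TGS-request logs (event ID 4769)."""
    spns_per_account = {}
    for account, spn in tgs_events:
        spns_per_account.setdefault(account, set()).add(spn)
    return [acct for acct, spns in spns_per_account.items()
            if len(spns) >= threshold]

# One low-privilege account sweeping 25 SQL service SPNs vs normal use.
events = [("lowpriv01", f"MSSQLSvc/db{i}.corp:1433") for i in range(25)]
events += [("alice", "HTTP/intranet.corp"), ("alice", "cifs/files.corp")]
assert kerberoast_suspects(events) == ["lowpriv01"]
```

In production this would be a rolling per-account baseline in the SIEM rather than a fixed threshold.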
Which PAM control directly implements the principle of least privilege for privileged accounts?
Correct Answer: C
Least privilege for privileged access means no "all or nothing" root/Domain Admin grants. Instead, administrators receive only the specific elevated permissions their role requires. A network administrator needs network device access, not database admin access. A DBA needs database privileges, not directory admin rights. PAM tools implement this through fine-grained privilege delegation. Strong passwords (A) are important but don't address scope of access. Logging (D) is detective, not preventive. Permanent blanket access (B) is the opposite of least privilege.
Principle: Even among privileged users, apply least privilege. "Admin" is not one thing — there are network admins, database admins, system admins, each with scoped privileges.
A PAM solution "vaults" privileged account credentials. What does this mean in practice?
Correct Answer: B
Credential vaulting in PAM solutions (CyberArk, BeyondTrust, Thycotic) means privileged credentials are stored in an encrypted, highly-secured central repository. When an administrator needs access, they authenticate to the PAM system (not the target system directly), the PAM system retrieves the credential and either: (1) injects it automatically into the session without the user ever seeing the password, or (2) provides a time-limited checkout with automatic rotation after return. This eliminates the ability of admins to misuse or share credentials they never actually know.
The most secure PAM implementations use "credential injection" — the admin never sees the password. This prevents screenshot theft, credential sharing, and post-session misuse.
A disgruntled database administrator copies the entire customer database to an external drive. Which PAM control would have MOST effectively prevented this?
Correct Answer: C
Prevention is always preferred over detection for high-impact incidents. Preventive controls: (1) Command-level restrictions preventing bulk data export commands (a privileged DBA who can only run specific maintenance queries cannot run "SELECT * FROM customers" into a file); (2) Data Loss Prevention (DLP) blocking data transfer to removable media; (3) USB port blocking on admin workstations. Session recording (A) detects but does not prevent — by the time you review the log, the data is already gone. MFA (B) verifies identity but does not restrict what the authenticated user can do. Awareness training (D) is important but not a technical control.
Insider threat: the hardest to prevent because the threat actor is authorized. Minimum necessary permissions (command-level scoping) and DLP are the primary preventive controls.
An attacker compromises the Domain Controller and extracts the KRBTGT account's password hash. The attacker uses this hash to forge TGTs for any user. What attack is this, and what is the primary mitigation?
Correct Answer: B
A Golden Ticket attack leverages the KRBTGT account's hash to forge arbitrary TGTs — the attacker can create TGTs for any user, including Domain Admins, with arbitrary expiry dates (including future dates far beyond normal lifetime). The attacker can persist for years. Mitigation post-compromise: rotate the KRBTGT password TWICE (because Kerberos maintains both the current and previous password for continuity — two rotations ensure no previously issued TGT remains valid). Critically, protect the DC first — if the attacker still has DC access, password rotation is futile.
Golden Ticket = the ultimate Kerberos compromise. The KRBTGT account is the crown jewel of Active Directory. Rotate it twice, in sequence, with appropriate planning (some services briefly fail during rotation).
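Why twice, not once? Because the KDC honors tickets encrypted under the current or the previous KRBTGT key, a single rotation leaves the attacker's stolen key in the "previous" slot. A toy model of that acceptance logic:

```python
class ToyKdc:
    """Toy model of KRBTGT key handling: the KDC accepts tickets
    encrypted under the current OR the previous key (for continuity
    of already-issued tickets)."""
    def __init__(self, key: str):
        self.current, self.previous = key, None

    def rotate(self, new_key: str) -> None:
        self.current, self.previous = new_key, self.current

    def accepts(self, ticket_key: str) -> bool:
        return ticket_key in (self.current, self.previous)

kdc = ToyKdc("krbtgt-key-v1")
stolen_key = "krbtgt-key-v1"        # attacker extracted this hash

kdc.rotate("krbtgt-key-v2")
assert kdc.accepts(stolen_key)      # ONE rotation: forgeries still work

kdc.rotate("krbtgt-key-v3")
assert not kdc.accepts(stolen_key)  # SECOND rotation invalidates them
```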
A microservice in Platform C's backend uses HashiCorp Vault's AppRole authentication to retrieve its database credentials at startup. The AppRole RoleID is stored in the application configuration, and the SecretID is injected at deployment time via CI/CD. Why is this approach more secure than hardcoding the database password in the application configuration file?
Correct Answer: B
By using Vault AppRole, the config file contains the RoleID (non-secret, like a username) but NOT the actual database password. The real credential is: (1) fetched dynamically from Vault at runtime, (2) time-limited (lease duration), (3) rotated automatically. If an attacker exfiltrates the config file, they get the RoleID — which is useless without the SecretID and a valid Vault authentication. They do NOT get the database password. Contrast with hardcoded passwords: config file leak = immediate database access. Vault separates secret access from secret knowledge.
FinTech Company X context: Vault AppRole is the recommended pattern for CI/CD and containerized services. Never commit credentials to source code or config files — even encrypted ones should use secrets management.
How frequently should privileged account access reviews (recertification) be conducted, according to security best practice?
Correct Answer: C
Privileged accounts represent the highest risk in any environment — their compromise can lead to complete system takeover. Security standards (ISO 27001, NIST, SOC 2) typically recommend quarterly reviews for privileged accounts as a minimum. Standard user accounts may be reviewed semi-annually or annually. Some high-security environments review privileged access monthly or continuously (real-time UEBA). Annual reviews (A) leave too large a window where stale, excessive, or orphaned privileged accounts remain active. Reviews at role change (D) are necessary but not sufficient — dormant privileged access also needs regular review.
Risk-based review frequency: higher risk = more frequent review. Privileged = quarterly minimum. Standard = semi-annual/annual. Critical systems = monthly or continuous behavioral monitoring.
What is the key difference between a Golden Ticket and a Silver Ticket attack in Kerberos environments?
Correct Answer: B
Golden Ticket: uses the KRBTGT account hash → forges TGTs → attacker can access any service in the domain (unlimited scope). Silver Ticket: uses a specific service account hash → forges service tickets for THAT specific service only (e.g., SQL Server, file share). Silver Tickets are stealthier because they do NOT require communication with the KDC (TGT exchange) — the attacker goes directly to the service. However, Silver Tickets are scoped to one service. Detection: Silver Tickets are harder to detect because KDC logs show nothing (no TGT request).
Defensive implication: Protecting the KRBTGT prevents Golden Tickets. Protecting service account passwords (gMSAs with auto-rotation) prevents Silver Tickets for those services.
A third-party vendor needs temporary access to production servers to perform maintenance. Which is the MOST secure approach to granting this access?
Correct Answer: C
Third-party/vendor access carries significant supply chain risk. Best practice: (1) Time-limited access — account expires at the end of the maintenance window, (2) Scoped permissions — only the specific systems/functions needed for the maintenance, not broad admin access, (3) Full session recording for accountability, (4) MFA required — vendor must authenticate through your PAM system, (5) Automatic expiry — access disappears without any manual intervention required. Never share root passwords (A). Permanent accounts are forgotten and become orphaned (B). Using vendor credentials means you can't revoke access from your side (D).
Third-party access combines insider-level reach with outsider-level accountability — treat it as one of the highest-risk access categories. Apply maximum PAM controls: time-limit, scope, record, MFA, auto-expire.
Identity Lifecycle & Protocols
An employee is called into HR at 2 PM on a Tuesday and informed they are being terminated effective immediately. The IT department is notified at 4 PM after the HR meeting concludes, and the employee's accounts are disabled the following morning. What is the PRIMARY security concern with this timeline?
Correct Answer: B
The CISSP standard: accounts for terminated employees should be disabled IMMEDIATELY upon the termination decision — ideally, IT disables the account before or simultaneously with the employee being notified. A terminated employee who knows they are fired but still has active access for hours or days is the highest-risk insider threat scenario. They may copy data, delete files, sabotage systems, or share credentials. The 18-hour window (4 PM to next morning) is unacceptable. Even a 2-hour window (2 PM to 4 PM) is risky. Best practice: HR notifies IT before or the moment the termination meeting begins.
Classic CISSP trap: "immediately upon decision" for leavers. The moment HR decides to terminate = the moment IT disables access. Not "after the meeting," not "before they leave," not "next business day."
Which statement BEST reflects the CISSP-recommended timing for disabling access when an employee's termination is announced?
Correct Answer: C
CISSP consistently tests this: for involuntary terminations (firing, layoffs), access must be disabled immediately — the moment the decision is made. Waiting until the end of the day (A), 24 hours (B), or after handover (D) all leave dangerous windows of exposure. For voluntary resignations (employee gives notice), access may be managed differently — accounts may remain active during the notice period under increased monitoring, with access scoped down and fully revoked on the final day. The scenario of involuntary termination requires immediate, concurrent action between HR and IT.
Involuntary termination = immediate disable. Voluntary resignation = managed wind-down with monitoring. Know the difference for exam scenarios.
An employee transfers from the Finance department to the Engineering department. The IT provisioning system correctly adds Engineering access but does NOT automatically remove Finance access. Which JML lifecycle event is this, and what principle is violated?
Correct Answer: C
When an employee changes roles (Mover), the correct process is: provision new access AND de-provision old access simultaneously. Failing to remove Finance access when an engineer no longer needs it violates: (1) Least privilege — they have more access than their current role requires; (2) Need-to-know — they can access Finance data they no longer have a business need to see. This is the root cause of privilege creep. Many organizations fail at the "remove old access" step because IT only gets notified to add new access, not to remove old access. The access review process should catch this.
JML provisioning must always be bidirectional for Movers: ADD new role access AND REMOVE old role access. Systems that only provision additions create privilege creep structurally.
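The bidirectional Mover step can be expressed as a simple set difference: grant what the new role has and the old role lacks, revoke what the old role has and the new role lacks. A minimal sketch, with illustrative role profiles:

```python
# Bidirectional Mover provisioning: compute what to ADD and what to REMOVE
# when an employee changes roles, so they end with exactly the new role's access.
ROLE_PROFILES = {
    "finance": {"erp_gl_read", "erp_ap_write", "finance_share"},
    "engineering": {"git_repo", "ci_cd", "dev_db_write"},
}

def mover_delta(old_role, new_role, profiles=ROLE_PROFILES):
    """Return (grants, revocations) for a role change."""
    old, new = profiles[old_role], profiles[new_role]
    return new - old, old - new   # ADD new-only permissions, REMOVE old-only permissions

grants, revocations = mover_delta("finance", "engineering")
```

A provisioning system that computes only `grants` and ignores `revocations` is exactly the structural privilege-creep failure described above.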
What are the default TCP ports for LDAP and LDAPS, and why should LDAPS be preferred over LDAP?
Correct Answer: A
LDAP (Lightweight Directory Access Protocol) uses TCP port 389 and transmits data in plaintext — including directory queries, user attributes, and authentication credentials (unless using SASL mechanisms). LDAPS is LDAP over TLS and uses TCP port 636, encrypting all communication. Organizations should use LDAPS (port 636) or LDAP with STARTTLS to prevent credential interception and directory data disclosure. Note: STARTTLS (D) upgrades an LDAP connection on port 389 to TLS — it is a valid alternative, but it must be configured to reject connections that fail to upgrade, or clients can silently fall back to plaintext.
Memorize: LDAP = 389 plaintext. LDAPS = 636 encrypted. Same importance as knowing HTTP=80 vs HTTPS=443. Always use the encrypted version for directory authentication.
Which statement CORRECTLY differentiates RADIUS from TACACS+ regarding encryption and protocol characteristics?
Correct Answer: B
Key RADIUS vs. TACACS+ distinctions: RADIUS: UDP ports 1812/1813, encrypts only the password field (the rest of the packet is cleartext — username and attributes are visible), combines Authentication and Authorization in one response. TACACS+: TCP port 49, encrypts the entire packet body (not just the password), and clearly separates Authentication, Authorization, and Accounting into independent transactions (allowing more granular control). TACACS+ is generally preferred for network device administration (routers, switches) where command-level authorization logging is needed. RADIUS is preferred for network access control (VPN, 802.1X). RADIUS is an IETF standard; TACACS+ originated as a Cisco-proprietary protocol (now documented informationally in RFC 8907) — answer D has this reversed.
Key differences table: RADIUS = UDP + password-only encryption + combined Auth/Authz. TACACS+ = TCP + full packet encryption + separated AAA. This is heavily tested — memorize it.
During an access recertification campaign, who should be primarily responsible for certifying that each user's access is still appropriate?
Correct Answer: C
Access recertification requires a business decision: "Does this person STILL NEED this access for their current job?" That is a business question, not a technical one. The user's manager is in the best position to know the employee's current responsibilities and whether specific access is still justified. IT security administers the process (generates the reports, processes removals) but does not make the business judgment. Users certifying their own access (B) creates a conflict of interest — they will always say yes. HR knows role changes but not system-level access justifications.
Recertification governance: Business manager certifies (business decision). IT security facilitates and enforces. The data owner may also certify access to their specific data resources in larger enterprises.
When a new employee joins an organization (Joiner), what access provisioning approach best implements the principle of least privilege from day one?
Correct Answer: B
Role-based provisioning (assigning access based on a defined role profile for the new employee's position) is the correct Joiner process. This implements least privilege from day one — the employee starts with only what their role requires. Option A (grant everything, revoke later) is the opposite of least privilege and the "later" revocation often never happens. Option C (copy from a peer) risks copying that peer's accumulated excess access (privilege creep). Option D (read-only everything) is impractical — an engineer needs write access to development systems from day one.
Joiner best practice: role profiles define the standard access set for each job function. New employees are provisioned from the profile, not individually — this is scalable and auditable.
An organization deploys 802.1X network access control on all wired and wireless ports, using RADIUS for authentication. What does this control prevent?
Correct Answer: B
802.1X is a port-based network access control (PNAC) standard. When a device connects to a switch port or wireless access point, it must authenticate to the RADIUS server before being granted network access. An unauthenticated device (e.g., an attacker plugging into a vacant Ethernet port) is placed in a quarantine VLAN or blocked entirely. 802.1X does not encrypt traffic (C) — that requires additional protocols like MACsec or WPA2/3. It does not filter web content (A) or inspect for malware (D).
802.1X = network-level authentication before network access. RADIUS handles the backend authentication. Together they implement "authenticated network access" — you must prove identity before the switch lets any traffic through.
An audit finds 47 active user accounts in a system for employees who left the company 6-18 months ago. What is the MOST serious risk these "orphaned accounts" pose?
Correct Answer: B
Orphaned accounts (accounts of former employees that were not disabled/deleted) are a critical IAM vulnerability. They represent: (1) Active access pathways for former employees who may still know the passwords; (2) Easy targets for credential stuffing attacks — breached passwords from other sites may work since no one is monitoring the account; (3) Attack persistence — malware can maintain access via orphaned accounts without triggering alerts. They are also an easy finding for attackers doing OSINT (former employees listed on LinkedIn = potential orphaned account targets). Immediate disable on termination prevents this.
Orphaned account = unlocked door to an empty house. The former occupant (and anyone who got a copy of their key) can walk in anytime. Disable immediately on separation.
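Orphan detection is fundamentally a reconciliation between the HR source of truth and each system's account list. A minimal sketch, with illustrative usernames:

```python
# Flag enabled accounts whose owner is absent from the HR active roster —
# the reconciliation an orphaned-account audit performs.
hr_active = {"alice", "bob"}                  # source of truth: current employees
system_accounts = {
    "alice": {"enabled": True},
    "bob":   {"enabled": True},
    "carol": {"enabled": True},               # left the company: orphaned
    "dave":  {"enabled": False},              # already disabled: not a finding
}

def find_orphans(accounts, active_roster):
    """Enabled accounts with no matching active employee."""
    return sorted(
        user for user, meta in accounts.items()
        if meta["enabled"] and user not in active_roster
    )

orphans = find_orphans(system_accounts, hr_active)
```

Running this reconciliation on a schedule (and auto-disabling the results) is what prevents the 47-account finding in the scenario.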
A network operations team uses TACACS+ for device administration. A senior engineer argues that TACACS+ is superior to RADIUS for this use case because of command authorization. What does TACACS+ command authorization enable that RADIUS cannot provide natively?
Correct Answer: B
TACACS+ separates Authentication, Authorization, and Accounting (AAA) into independent transactions. This separation enables per-command authorization: when an admin logs into a Cisco router and types a command, the router can query the TACACS+ server with "User X wants to run command Y — is this permitted?" The TACACS+ server evaluates the command against the user's authorization policy and responds permit or deny. This enables very fine-grained control over what privileged operations each administrator can perform. RADIUS combines auth and authorization in a single access-accept/reject, making per-command authorization impractical without additional architecture.
TACACS+ for network device administration: think command-level authorization. RADIUS for network access control: think "can this device connect to the network?" Different tools for different purposes.
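The per-command authorization decision can be sketched as an allow-list lookup, which is roughly what the TACACS+ server evaluates on each command request. Role names and command patterns below are illustrative; real deployments define these in the AAA server's policy configuration.

```python
import fnmatch

# Illustrative per-command authorization policy: the device asks the AAA
# server "may user with role R run command C?" for every command entered.
POLICY = {
    "junior_netops": ["show *", "ping *"],                   # read-only diagnostics
    "senior_netops": ["show *", "ping *", "configure *"],    # may change config
}

def authorize_command(role, command, policy=POLICY):
    """Permit only commands matching the role's allow-list; implicit deny otherwise."""
    return any(fnmatch.fnmatch(command, pattern) for pattern in policy.get(role, []))
```

RADIUS's single access-accept/reject at login time has no natural hook for this per-command round trip, which is the distinction the senior engineer is making.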
A FinTech Company X data engineer is granted access to the production analytics database for a 3-month data quality project. The project ends but the access is not revoked. Six months later, the engineer is found to have been querying production customer PII data for non-work purposes. Which IAM control failure is most directly responsible?
Correct Answer: B
Two IAM failures occurred: (1) Project-specific access should have been time-bounded — access provisioned for a defined project should automatically expire when the project ends. (2) Access recertification failed to catch that the access was no longer justified. The initial provisioning for a specific project with a defined end date is a clear candidate for time-limited access (Vault-style leases for long-term projects). If not time-limited, quarterly access reviews should have detected that the data quality project ended and revoked the access. Least privilege (A) may have been satisfied initially — the problem is the ONGOING access after need ended.
Project-based access should always be time-bounded. Tools: PAM JIT access with defined expiry, ticket-based temporary access workflows, or recertification tied to project closure milestones.
An organization uses SCIM (System for Cross-domain Identity Management) to automatically synchronize user accounts from their HR system to cloud applications. What is the PRIMARY security benefit of automating provisioning through SCIM?
Correct Answer: B
SCIM (RFC 7642-7644) provides a standardized API for automating identity lifecycle management across systems. The primary security benefit is eliminating manual provisioning delays: (1) Joiner: when HR onboards a new employee, SCIM automatically creates accounts in all connected systems on their start date — no waiting for IT tickets; (2) Leaver: when HR terminates an employee, SCIM automatically disables accounts across ALL connected cloud apps within seconds — no missed applications. Manual processes are slow, inconsistent, and error-prone. SCIM automation removes the human delay that creates orphaned access windows.
SCIM = IAM automation protocol. Think of it as the plumbing that connects HR (the source of truth for identity) to all downstream systems. IGA (Identity Governance and Administration) tools use SCIM as the integration standard.
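The Joiner and Leaver events translate into standard SCIM payloads. A minimal sketch of the two messages — a user-creation body (RFC 7643 core schema) and the PatchOp a Leaver event would send to every connected app; the user attributes are illustrative:

```python
# Minimal SCIM 2.0 payloads for the Joiner and Leaver lifecycle events.
def scim_create_user(user_name, given, family, email):
    """SCIM user-creation body (RFC 7643 core User schema) — the Joiner event."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

def scim_deactivate_patch():
    """SCIM PatchOp (RFC 7644) setting active=false — the Leaver event."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "path": "active", "value": False}],
    }

new_user = scim_create_user("jdoe", "Jane", "Doe", "jdoe@example.com")
leaver_patch = scim_deactivate_patch()
```

In practice the IGA platform POSTs the first body to each app's `/Users` endpoint on start date and PATCHes with the second on termination — the automation that closes the orphaned-access window.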
Which transport protocol does RADIUS use, and what security implication does this have for packet delivery?
Correct Answer: B
RADIUS uses UDP (User Datagram Protocol) on ports 1812 (authentication/authorization) and 1813 (accounting). UDP is connectionless — no guaranteed delivery, no retransmission by the transport layer. RADIUS implements its own retransmission and timeout logic at the application layer. TACACS+ uses TCP port 49 — connection-oriented, guaranteed delivery. In networks with high latency or packet loss, RADIUS UDP behavior can cause authentication failures; TACACS+ TCP handles this more reliably. This is one operational reason some teams prefer TACACS+ for reliable device administration authentication.
RADIUS = UDP (1812/1813). TACACS+ = TCP (49). The protocol transport affects reliability and NAT traversal behavior — TCP is more reliable in complex network environments.
A security policy requires that any account that has not been used for 90 consecutive days should be automatically disabled. Which principle does this policy implement?
Correct Answer: C
Disabling dormant (inactive) accounts is a lifecycle management control that reduces the attack surface. An account unused for 90 days likely belongs to: a former employee whose offboarding was missed, a service account no longer in use, a contractor whose engagement ended, or a user on extended leave. All are unnecessary access pathways. Attackers target dormant accounts because they attract less monitoring attention. Disabling them proactively is consistent with least privilege (inactive = not needed = disable) and with NIST 800-53 AC-2 account management controls.
Dormant account policy: automate detection and disable; require explicit reactivation. CIS Controls and NIST recommend 30-90 day dormancy thresholds depending on the environment's risk level.
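The 90-day dormancy policy reduces to a last-login comparison. A minimal sketch with illustrative accounts and dates:

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)

def dormant_accounts(last_logins, now):
    """Usernames whose last login is 90+ days ago — candidates for auto-disable."""
    return sorted(
        user for user, last in last_logins.items()
        if now - last >= DORMANCY_THRESHOLD
    )

now = datetime(2024, 6, 1)
last_logins = {
    "alice": datetime(2024, 5, 20),   # 12 days ago: active
    "bob":   datetime(2024, 1, 15),   # 138 days ago: dormant
    "carol": datetime(2024, 3, 4),    # 89 days ago: not yet dormant
}
to_disable = dormant_accounts(last_logins, now=now)
```

Real deployments would pull `last_logins` from the directory or SIEM and feed `to_disable` into an automated disable-and-notify workflow rather than acting manually.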
An application authenticates users by binding to an LDAP directory using the user's credentials in cleartext over port 389. A security engineer performs a network capture during a login and can read the username and password in the captured packets. What remediation is REQUIRED?
Correct Answer: B
LDAP on port 389 without TLS transmits bind (authentication) operations in cleartext — anyone with network access (e.g., ARP spoofing on the same LAN) can capture and read usernames and passwords. The correct remediation is to implement LDAPS (LDAP over TLS, port 636) which encrypts all communication including authentication. Alternatively, STARTTLS upgrades an existing LDAP connection to TLS. Port 443/HTTPS (A) is for HTTP traffic, not LDAP. Switching to database auth (C) solves the problem differently but may not be feasible. VPN (D) is a compensating control, not a fix for the underlying protocol vulnerability.
Never allow LDAP bind operations over port 389 in production — require LDAPS or STARTTLS. Active Directory Certificate Services can be used to deploy LDAPS certificates in Windows environments.
Which of the following BEST describes the function of Identity Governance and Administration (IGA) tools in an enterprise IAM program?
Correct Answer: B
IGA tools (SailPoint, Saviynt, Oracle Identity Governance) provide the governance layer on top of identity infrastructure: (1) Access request and approval workflows, (2) Role management and role mining (discovering natural RBAC groupings from existing access patterns), (3) Automated provisioning/deprovisioning via SCIM/connectors, (4) Access certification campaigns (who certifies access), (5) Separation of duties policy enforcement, (6) Reporting for audit and compliance. IGA complements directory services (A) — they work with AD/LDAP rather than replacing them. IGA is primarily for employees/internal users, while CIAM (C) is for external customers.
IGA = the control plane for enterprise identity governance. Think of it as the audit and compliance layer that sits above your identity providers and applications.
A service account (non-human identity) used by an application to connect to a database has been granted full database administrator rights "just in case." Which security principle is most severely violated?
Correct Answer: B
Granting "full DBA" rights to a service account that only needs to SELECT from specific tables and INSERT into specific tables is a severe least privilege violation. If the application is compromised (e.g., SQL injection), the attacker inherits the service account's permissions. With full DBA rights, this means complete database compromise — read all data, delete all data, create backdoor accounts. With properly scoped service account permissions, the attacker would only access what the application legitimately can. Service accounts should have the minimum permissions required for their specific function, no more.
Service account least privilege: enumerate what the application actually does (SELECT from X, INSERT to Y), grant exactly those permissions. "Just in case" is never a valid justification for excess privilege.
FinTech Company X is preparing for a SOC 2 Type II audit. The auditor asks for evidence of access recertification for the past 12 months. Which artifact would BEST satisfy this requirement?
Correct Answer: B
SOC 2 CC6.3 requires evidence of access recertification — that the organization periodically reviews user access rights and removes inappropriate access. The evidence needed is a record of the review process, not just the current state. A simple user list (A) shows who has access NOW but not that reviews occurred. The auditor needs: (1) Completed review reports for each cycle (e.g., quarterly = 4 reports over 12 months), (2) Evidence of manager/certifier sign-off, (3) Evidence that identified excess access was actually removed (remediation tracking). Without recertification records, the control cannot be proven to be operating effectively.
Audit evidence = proof that the control operated during the period, not just that it exists. Recertification evidence: completed reports, certifier names, dates, and remediation tickets for any access removed.
A consultant's contract expires on March 31st. Their system access is not revoked until April 15th, 15 days after contract expiration. Which control failure best explains how this happened?
Correct Answer: B
For contractors and temporary workers, the ideal control is automatic account expiry tied to the contract end date — the account should be configured with an expiry date when created, or the IGA system should be integrated with the vendor management/contract management system to trigger deprovisioning automatically on contract expiry. Relying on manual processes (IT running reports, consultants self-reporting, annual reviews) creates the gaps that result in 15-day overrun windows. The root cause is the lack of automated integration between contract lifecycle and identity lifecycle management.
Contractor accounts should be created with an expiry date matching contract end. If the contract is extended, the expiry date is updated. No extension = automatic disable. Automation eliminates the reliance on humans remembering to act.
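The expiry-tied-to-contract control is a date comparison plus an extension path. A minimal sketch, with illustrative dates:

```python
from datetime import date

def account_enabled(expiry_date, today):
    """Contractor account is usable only through its contract end date."""
    return today <= expiry_date

def extend_contract(expiry_date, new_end):
    """A contract extension moves the expiry forward; no extension = automatic disable."""
    return max(expiry_date, new_end)

contract_end = date(2024, 3, 31)
enabled_before_expiry = account_enabled(contract_end, date(2024, 3, 30))
enabled_after_expiry = account_enabled(contract_end, date(2024, 4, 1))
extended_end = extend_contract(contract_end, date(2024, 6, 30))
```

With the expiry set at account creation, the April 15th overrun in the scenario cannot occur — the account is dead on April 1st regardless of whether any human remembers to act.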
NIST SP 800-63A defines three Identity Assurance Levels (IAL1, IAL2, IAL3). A digital lending platform verifying customer identity for high-value loan applications requires strong identity assurance. Which IAL is MOST appropriate, and what does it require?
Correct Answer: B
NIST 800-63A Identity Assurance Levels: IAL1 = self-asserted only, no validation (acceptable for low-risk services). IAL2 = remote or in-person proofing with verification of real-world identity via government-issued documents + liveness checks; suitable for moderate-to-high value transactions including financial services. IAL3 = in-person or supervised remote proofing with biometric binding to a physical credential; required for highest assurance (e.g., federal government PIV cards). Most digital lending platforms (including Platform C) operate at IAL2 — remote eKYC with government ID + liveness detection. IAL3 would require in-person visits, which is impractical for digital-first lending.
FinTech Company X context: Platform C's eKYC (government ID + selfie liveness check) maps to IAL2. This provides sufficient assurance for micro-lending. Higher loan amounts may justify IAL3 in-person verification for select cases.
An attacker attempts to brute-force Platform C customer accounts by trying common passwords. The system has no account lockout policy. Which controls would MOST effectively mitigate this attack without locking out legitimate users?
Correct Answer: B
The challenge: traditional account lockout (C) stops brute force but enables a new attack — an attacker who knows valid usernames can lock out ALL users (DoS attack). Modern approach for mobile apps: (1) Rate limiting per IP and per account (slow down but don't lock); (2) Progressive delays (exponential backoff after each failure); (3) CAPTCHA after N failures; (4) Device fingerprinting to block suspicious patterns; (5) Credential stuffing detection to identify accounts where the exact breached password was tried. This stops automated attacks without enabling DoS via lockout. Password length (A) helps long-term but doesn't stop stuffing attacks using previously breached passwords.
FinTech Company X context: Platform C's OTP-based auth reduces traditional password brute force risk, but credential stuffing (trying known breached phone+password combos) still applies to any identifier-based authentication flow.
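Two of the listed mitigations — progressive delays and CAPTCHA step-up — are simple functions of the failure count. A minimal sketch (the base delay, cap, and CAPTCHA threshold are illustrative tuning values):

```python
def progressive_delay(failures, base=1.0, cap=60.0):
    """Exponential backoff after each failed login: 1s, 2s, 4s, ... capped at 60s.
    Slows automated guessing to a crawl without ever locking the account."""
    return min(base * (2 ** failures), cap)

def require_captcha(failures, threshold=5):
    """Step up to CAPTCHA after N failures instead of locking — avoids the
    lockout-as-DoS attack that fixed lockout policies enable."""
    return failures >= threshold
```

At 10 failures the attacker waits the 60-second cap per attempt while a legitimate user who mistypes once waits a single second — friction proportional to risk.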
Which combination of IAM controls BEST supports non-repudiation for high-value transactions?
Correct Answer: B
Non-repudiation requires that a user cannot deny having performed an action. This requires: (1) Individual named accounts — no shared accounts (shared accounts make attribution impossible); (2) Strong MFA — strong confidence that the authenticated user is who they claim to be; (3) Tamper-evident audit logs — logs that cannot be altered after the fact (integrity); (4) Digital signatures — mathematical proof that the specific authenticated entity performed the action. Group accounts (A) and shared service accounts (D) fundamentally destroy non-repudiation. RBAC (C) controls access but does not address the attribution chain needed for non-repudiation.
Non-repudiation chain: strong authentication (who) + individual accounts (which person) + tamper-evident logs (what they did, when) + digital signatures (mathematical proof). All four are required.
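The tamper-evident log component can be illustrated with hash chaining: each entry commits to the previous entry's hash, so editing any past record breaks every later hash. This is a simplified sketch (a production system would add timestamps, signing keys, and external anchoring); the actors and actions are illustrative.

```python
import hashlib
import json

def _entry_hash(actor, action, prev_hash):
    """Deterministic hash over the entry's content plus the previous entry's hash."""
    payload = json.dumps({"actor": actor, "action": action, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, actor, action):
    """Append a hash-chained record; each record binds to its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    log.append({"actor": actor, "action": action, "prev": prev_hash,
                "hash": _entry_hash(actor, action, prev_hash)})
    return log

def verify_chain(log):
    """Recompute every hash in order; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for rec in log:
        if rec["prev"] != prev or rec["hash"] != _entry_hash(rec["actor"], rec["action"], prev):
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, "alice", "approve_transfer_12345")
append_entry(log, "bob", "release_funds_12345")
intact = verify_chain(log)

# A retroactive edit to the first record is detectable:
tampered = [dict(r) for r in log]
tampered[0]["action"] = "approve_transfer_99999"
tamper_detected = not verify_chain(tampered)
```

Hash chaining provides integrity (tamper evidence); pairing it with digital signatures over each entry is what adds the "mathematical proof of who" needed for full non-repudiation.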
Platform C's customer-facing identity system (CIAM) serves 5 million borrowers across 3 countries. Which identity management concern is MOST unique to CIAM compared to enterprise IAM (managing internal employees)?
Correct Answer: B
CIAM and enterprise IAM have fundamentally different scale and trust models: Enterprise IAM — known, vetted employees; HR-driven provisioning; high trust; internal systems; standardized environments. CIAM — millions of unknown external users; self-registration; low initial trust; public internet; diverse devices; must minimize UX friction (abandonment hurts revenue) while preventing fraud; must comply with local regulations (BSP in Philippines, OJK in Indonesia, etc.) for eKYC, data residency, and customer privacy. The multi-jurisdiction regulatory complexity and the scale/UX balance are uniquely CIAM challenges. Both types require MFA (A is wrong), both require audit logging (D is wrong).
CIAM design tension: security friction vs. conversion rate. Each step-up authentication step loses users. IAM architects must calibrate risk-based authentication to apply friction only when risk justifies it.
An organization with 10,000 employees wants to implement RBAC but has no existing role definitions. Which approach should be used to define roles, and what is this process called?
Correct Answer: C
Role mining (also called role discovery) uses data analysis to examine existing user-permission assignments and identify natural clusters where many users hold the same combination of permissions — these clusters are candidates for formal roles. Bottom-up RBAC derives roles from existing access reality. Top-down RBAC (the alternative) starts with organizational charts and job descriptions to define roles. Hybrid approaches use both. Role mining is the practical starting point for large organizations because it works from access as it actually exists, rather than an idealized model. After mining, role engineers rationalize and formalize the discovered roles, pruning outliers and excess entitlements.
Role mining is an IGA tool capability — tools like SailPoint, Saviynt use clustering algorithms to identify role candidates from access data. After mining, role owners validate and approve the proposed roles.
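At its simplest, role mining groups users who hold identical permission sets; commercial tools use richer clustering, but the core idea can be sketched in a few lines. Usernames and permissions below are illustrative.

```python
from collections import defaultdict

def mine_roles(user_permissions, min_users=2):
    """Group users by identical permission sets; clusters with at least
    min_users members become role candidates for human review."""
    clusters = defaultdict(list)
    for user, perms in user_permissions.items():
        clusters[frozenset(perms)].append(user)
    return [
        {"permissions": sorted(perms), "members": sorted(users)}
        for perms, users in clusters.items()
        if len(users) >= min_users
    ]

access_data = {
    "ana": {"crm_read", "billing_read"},
    "ben": {"crm_read", "billing_read"},
    "cho": {"crm_read", "billing_read"},
    "dia": {"git_repo", "ci_cd"},
    "eli": {"git_repo", "ci_cd"},
    "fox": {"crm_read"},                 # singleton: no role candidate
}
candidates = mine_roles(access_data)
```

The two clusters here would be surfaced to role owners as draft "customer service" and "engineering" roles; the singleton (`fox`) is flagged for individual review rather than forced into a role.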
FinTech Company X is designing a comprehensive IAM architecture for Platform C that must satisfy: (1) eKYC identity proofing for new borrowers, (2) OTP-based authentication with anti-enumeration, (3) Per-lender JWT scoping (ABAC), (4) Vault-managed service credentials with auto-rotation, (5) RBAC in the admin portal with quarterly access reviews, and (6) Immediate account disable upon borrower fraud detection. Which IAM principle BEST describes the overarching design philosophy of this architecture?
Correct Answer: A
The architecture described applies IAM defense in depth — multiple independent, complementary controls operating at different layers: Identity proofing (who are you before you get an account?), Authentication (prove identity at login), Authorization (ABAC scoping — what can you do after authentication?), Secrets management (Vault — protecting non-human credentials), Governance (RBAC + quarterly reviews — ensuring access remains appropriate over time), and Lifecycle management (immediate disable on fraud — the Leaver process for high-risk events). Each layer addresses different threat vectors. If OTP is bypassed, per-lender scoping still limits blast radius. If an access token is stolen, Vault auto-rotation limits credential exposure. Multiple layers = defense in depth.
FinTech Company X context: This six-component IAM architecture represents the full IAM program for Platform C — from the moment a new customer registers (identity proofing) through their active use (authentication + authorization) to eventual account closure or fraud response (lifecycle). Defense in depth is the meta-principle that makes this holistic program resilient.
Domain 5 Complete!
100 questions · Authentication · SSO & Federation · Access Control Models · PAM · Identity Lifecycle