Study Smarter for the ISC2 SSCP Exam with Our Free and Reliable SSCP Exam Questions – Updated for 2025.
At Cert Empire, we are focused on delivering the most accurate and up-to-date exam questions for students preparing for the SSCP Exam. To make preparation easier, we've made parts of our ISC2 SSCP exam resources free for everyone. You can practice as much as you like with the ISC2 SSCP Practice Test.
A one-way hash is a cryptographic function that meets all the criteria described. It takes an input of arbitrary length (a string of characters) and produces a fixed-length string, known as a hash value or message digest. The core properties of a cryptographic hash function are that it is deterministic (the same input always produces the same output), fast to compute, and, crucially, "one-way." This one-way property, also known as preimage resistance, makes it computationally infeasible to determine the original input string from its hash value, meaning the transformation cannot be reversed.
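As a rough illustration of these properties, the short Python sketch below (using only the standard-library hashlib module; the input strings are arbitrary examples) shows that inputs of any length map to the same fixed-size, deterministic digest:

```python
import hashlib

# Inputs of very different lengths all produce a 256-bit (64 hex digit) digest.
for message in (b"A", b"A much longer input string of arbitrary length..."):
    digest = hashlib.sha256(message).hexdigest()
    print(f"{len(message):>2}-byte input -> {len(digest) * 4}-bit digest: {digest}")

# Deterministic: hashing the same input again yields the identical digest,
# yet there is no practical way to recover the input from the digest alone.
assert hashlib.sha256(b"A").hexdigest() == hashlib.sha256(b"A").hexdigest()
```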
B. DES: This is a symmetric-key encryption algorithm. It is a two-way, reversible process used for confidentiality, not a one-way transformation.
C. Transposition: A classical encryption cipher that rearranges the order of characters. It is reversible and does not produce a fixed-length output.
D. Substitution: A classical encryption cipher that replaces characters with other characters. It is also reversible and does not produce a fixed-length output.
1. National Institute of Standards and Technology (NIST). (2015). FIPS PUB 180-4, Secure Hash Standard (SHS). Section 1, "Introduction," states, "A hash algorithm is used to compute a condensed representation of a message or a data file... For a given algorithm, the message digests are of a fixed length..."
2. Katz, J., & Lindell, Y. (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press. In Chapter 5, Section 5.1, a hash function is defined as a function that maps arbitrary-length strings to a fixed-length output. The property of being "one-way" is formally defined as preimage resistance.
3. Stallings, W. (2017). Cryptography and Network Security: Principles and Practice (7th ed.). Pearson. Chapter 11, Section 11.1, "Secure Hash Algorithms," describes the fundamental requirements for a cryptographic hash function, including the one-way property (preimage resistance) and the production of a fixed-length hash value.
A compromised hypervisor represents a complete failure of the virtualization security boundary, granting the attacker potential control over all hosted virtual machines (VMs) and a powerful pivot point for lateral movement. The most critical immediate step, according to established incident response principles, is containment. Disconnecting the hypervisor from the network immediately severs the attacker's command and control (C2) channel and prevents them from exfiltrating data or attacking other systems on the network. This action isolates the threat, limiting the scope of the damage, which is the primary objective in the initial phase of handling a critical security incident.
A. Increase the monitoring frequency of virtual machine logs.
This is a detective, not a mitigative, action. It does not stop the active attack, and the attacker could potentially alter the logs from the compromised hypervisor.
B. Restart all virtual machines hosted by the hypervisor.
This is ineffective because the compromise is at the hypervisor level. The attacker would retain control of the hypervisor and could simply re-compromise the VMs upon restart.
D. Apply the latest hypervisor patches and updates.
Patching is a preventative and recovery measure, not an immediate containment step for an active breach. It will not eject an attacker who is already in the system.
1. NIST Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 3.3.2, "Containment," states that a major decision is "how much to disconnect the affected systems from the network." For a severe compromise, complete disconnection is a primary strategy to prevent the attacker from causing further damage.
2. NIST Special Publication 800-125, Guide to Security for Full Virtualization Technologies. Section 5.3, "Hypervisor," describes the hypervisor as the most critical component. A compromise at this level is catastrophic, implicitly requiring immediate and decisive containment actions like network isolation to prevent the compromise from spreading.
3. Souppaya, M., & Scarfone, K. (2011). NIST Special Publication 800-146, Cloud Computing Synopsis and Recommendations. Section 5.2.2, "Network Security," discusses the risk of a compromised hypervisor allowing an attacker to "gain access to the underlying physical infrastructure and from there to other tenants." This highlights the urgency of preventing lateral movement, which is best achieved by network isolation.
4. Garfinkel, T., & Rosenblum, M. (2005). When Virtual is Harder than Real: Security Challenges in Virtual Machine Based Computing. Proceedings of the 10th Conference on Hot Topics in Operating Systems, Vol. 10. This foundational academic paper discusses the severe implications of a Virtual Machine Monitor (hypervisor) compromise, noting that it "subverts the security of every guest OS running above it," reinforcing the need for immediate, drastic containment measures.
The primary role of a smartcard within a Public Key Infrastructure (PKI) is to provide a secure, hardware-based environment for the user's private key. Smartcards are designed to be tamper-resistant, preventing unauthorized physical or logical access to the key. Critically, the private key is generated on and never leaves the card. All cryptographic operations requiring the private key, such as digital signing or decrypting a session key, are performed by the processor on the card itself. This protects the key from being compromised by malware on the host computer and provides strong non-repudiation, as the user must physically possess the card and typically provide a PIN to authorize its use.
A. Key renewal is a PKI management process; the smartcard is a secure endpoint for this process, not the primary enabler of renewal itself.
B. Public certificates are distributed by Certificate Authorities (CAs) and directories, not primarily by users exchanging smartcards.
C. While smartcards perform cryptographic operations, they are not designed for high-speed bulk data encryption; dedicated Hardware Security Modules (HSMs) serve that role.
1. National Institute of Standards and Technology (NIST) FIPS 201-3, Personal Identity Verification (PIV) of Federal Employees and Contractors, January 2022. Section 6.2, "PIV Cryptographic Keys," states, "Private keys are generated on the PIV card and are not exportable." This highlights the card's role as a secure, non-exportable container for private keys.
2. National Institute of Standards and Technology (NIST) SP 800-73-4, Interfaces for Personal Identity Verification - Part 1: PIV Card Application Namespace, Data Model, and Representation, January 2015. Section 3.1.1, "PIV Card," specifies, "The PIV Card is used to store PIV identity credentials and to perform cryptographic computations." This directly supports the role of secure storage and application of keys.
3. Stallings, W., & Brown, L. (2018). Computer Security: Principles and Practice (4th ed.). Pearson. Chapter 22, "Public-Key Cryptography and Message Authentication," discusses the critical need to protect private keys, stating that hardware tokens like smartcards "provide tamper-resistant storage of private keys."
4. Microsoft Documentation, Smart Card Architecture. The documentation explains that a smart card's Cryptographic Service Provider (CSP) or Key Storage Provider (KSP) ensures that "authentication and other private key operations are performed on the smart card and not on the host computer," reinforcing the principle of secure application and storage.
The Digital Signature Standard (DSS), specified in FIPS 186-4, defines algorithms for generating and verifying digital signatures. The primary security services provided by a digital signature are authentication (verifying the sender's identity), data integrity (ensuring the message has not been altered), and non-repudiation (preventing the sender from denying having sent the message). DSS is not designed to provide confidentiality. While the underlying algorithms like RSA can be used for encryption, the standard itself is exclusively for creating signatures, not for encrypting data to keep it secret.
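A brief sketch of these services, using the third-party Python cryptography package and ECDSA, one of the DSS-approved algorithms (the message text is an arbitrary example). Note that the signed message itself is never hidden, only authenticated:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # signer's key pair
public_key = private_key.public_key()

message = b"transfer 100 to account 42"                 # sent in the clear
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verification proves origin (authentication) and detects tampering (integrity).
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("unmodified message verified")

try:
    public_key.verify(signature, message + b"0", ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("altered message rejected")
```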
B. Integrity: DSS provides integrity by using a secure hash function on the message before signing. Any change to the message results in a different hash, causing signature verification to fail.
C. Digital signature: This is the core function of the standard. DSS explicitly defines the methods and algorithms (DSA, RSA, ECDSA) for creating and verifying digital signatures.
D. Authentication: DSS authenticates the origin of a message. Since the signature is created with the signer's private key, successful verification with the public key proves the message came from the claimed sender.
1. National Institute of Standards and Technology (NIST). (2013, July). FIPS PUB 186-4: Digital Signature Standard (DSS). U.S. Department of Commerce. In Section 1, "Specification," the document states its purpose is for applications requiring a digital signature to "detect unauthorized modifications to data and to authenticate the identity of the signatory." It makes no mention of providing encryption or confidentiality. (Page 1, Section 1).
2. Purdue University. (n.d.). CS 555: Introduction to Cryptography - Lecture 20: Digital Signatures. "The main goal of digital signatures is to provide authenticity, including data integrity and origin authentication. It also provides non-repudiation... Digital signatures do not provide confidentiality." (Slide 3).
3. Katz, J., & Lindell, Y. (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press. In Chapter 12, the text clearly distinguishes the security goals of digital signatures (integrity and authentication) from those of encryption schemes (confidentiality). DSS is presented as a standard for the former. (Chapter 12, Section 12.1, "Definition of Secure Signatures").
Statistical multiplexing, also known as statistical time-division multiplexing (STDM), is a communication channel sharing method that allocates bandwidth dynamically. Unlike traditional time-division multiplexing (TDM) which uses fixed time slots, statistical multiplexing allocates time slots on an as-needed basis to users who have data to transmit. This approach is highly efficient for bursty data traffic from variable bit-rate sources, as it leverages the statistical probability that not all users will transmit simultaneously, thereby accommodating more users on a shared channel. This directly matches the question's description of dynamic allocation for an arbitrary number of variable bit-rate streams.
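The efficiency argument can be made concrete with a small simulation (a sketch under assumed traffic parameters: ten sources, each bursting at 1 Mb/s about 20% of the time). Fixed-slot TDM must reserve the sum of the peak rates, while a statistical multiplexer only needs capacity close to the typical aggregate demand:

```python
import random

random.seed(1)

SOURCES = 10
PEAK_RATE = 1.0      # Mb/s when a source is bursting
ACTIVE_PROB = 0.2    # fraction of time a source has data to send
SAMPLES = 10_000

tdm_capacity = SOURCES * PEAK_RATE  # fixed slots sized for every source's peak

# Sample the instantaneous aggregate demand many times.
demands = []
for _ in range(SAMPLES):
    active = sum(1 for _ in range(SOURCES) if random.random() < ACTIVE_PROB)
    demands.append(active * PEAK_RATE)

avg_demand = sum(demands) / SAMPLES
p99_demand = sorted(demands)[int(0.99 * SAMPLES)]

print(f"TDM must reserve:         {tdm_capacity:.1f} Mb/s")
print(f"Average aggregate demand: {avg_demand:.1f} Mb/s")
print(f"99th-percentile demand:   {p99_demand:.1f} Mb/s")
```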
A. Time-division multiplexing: This method allocates a fixed, pre-determined time slot to each channel in a round-robin fashion, regardless of whether the channel has data to send. It is a static, not dynamic, allocation method.
B. Asynchronous time-division multiplexing: While often used synonymously with statistical multiplexing, "statistical multiplexing" is the more precise and fundamental term describing the method based on traffic statistics. ATDM is a specific implementation of this principle.
D. Frequency division multiplexing: This method divides the channel's bandwidth into distinct, non-overlapping frequency bands, with each channel assigned a dedicated band. This allocation is static and based on frequency, not time.
1. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson.
Page 33, Section 1.3.2: "Packet switching also makes use of statistical multiplexing... In a TDM link, each host gets a dedicated rate of R/N bps during every time frame... With statistical multiplexing... there is no a priori reservation of a link's capacity." This source distinguishes statistical multiplexing's dynamic, on-demand nature from TDM's fixed allocation.
2. Tanenbaum, A. S., & Wetherall, D. J. (2011). Computer Networks (5th ed.). Prentice Hall.
Page 135, Section 2.5.2: "A variation of TDM is statistical TDM, in which slots are allocated to lines dynamically... The statistical TDM scans the input lines and collects data until a frame is full, and then it sends the frame." This directly describes the dynamic allocation method for channels with data.
3. Stallings, W. (2014). Data and Computer Communications (10th ed.). Pearson.
Page 243, Section 8.2: "In statistical time-division multiplexing, time slots are allocated dynamically on the basis of need... The statistical multiplexer can absorb peak rates of a number of the attached devices, providing a better response time than a synchronous TDM system." This confirms the dynamic, on-demand allocation for variable traffic.
Reverse Address Resolution Protocol (RARP) is a network protocol used by a client device on a Local Area Network (LAN) to request its Internet Protocol (IP) address from a server. The client broadcasts a RARP request packet, which contains its own unique Media Access Control (MAC) address. A designated RARP server on the network receives this request, looks up the MAC address in its configuration table, and replies with the corresponding IP address. This mechanism was historically used by diskless workstations or other devices at boot time to discover their IP address when they only knew their hardware address.
B. Address resolution protocol (ARP): ARP performs the opposite function; it resolves a known IP address to an unknown MAC address (IP -> MAC).
C. Data link layer: This is Layer 2 of the OSI model. It is a conceptual layer, not a specific protocol that resolves a MAC address to an IP address.
D. Network address translation (NAT): NAT is a method for remapping IP addresses, typically between a private LAN and a public network (like the Internet), not for local address discovery.
1. Finlayson, R., Mann, T., Mogul, J., & Theimer, M. (1984). RFC 903: A Reverse Address Resolution Protocol. IETF. The abstract states, "This RFC describes a protocol for allowing a host to discover its Internet address when it knows only its hardware address."
2. Comer, D. E. (2018). Internetworking with TCP/IP Volume 1: Principles, Protocols, and Architecture (6th ed.). Pearson. In Chapter 6, "Mapping Internet Addresses To Physical Addresses (ARP)," the text contrasts ARP with RARP, explicitly stating RARP's purpose: "A host uses RARP to find its IP address when it knows its hardware address" (Section 6.10, RARP).
3. Tanenbaum, A. S., & Wetherall, D. J. (2011). Computer Networks (5th ed.). Prentice Hall. In Chapter 5, "The Network Layer," Section 5.6.3, the text describes ARP and its inverse, RARP, noting that RARP allows a host to broadcast its Ethernet address and ask for someone to tell it the corresponding IP address.
The foundational step in any information classification program is to establish the framework upon which all other activities will be based. This involves specifying the criteria that define the classification levels (e.g., Public, Internal, Confidential, Restricted). These criteria determine how data is categorized based on its sensitivity, criticality, and impact if disclosed. All subsequent steps, such as assigning security controls, appointing custodians, and establishing review procedures, are dependent on this initial, fundamental definition. Without clear criteria, the classification process would be arbitrary, inconsistent, and ultimately ineffective.
A. Establishing review procedures is a governance and maintenance step that occurs after the initial classification program has been defined and implemented.
B. Specifying security controls for each level can only be done logically after the classification levels and their corresponding criteria have been established.
C. Identifying a data custodian is an operational step to assign responsibility, which follows the creation of the classification framework that the custodian will manage.
---
1. Official (ISC)² Guide to the SSCP CBK, 5th Edition. Chapter 2, "Security Operations and Administration," in the section "Implement and Support Data-Labeling Policies," explains that the creation of a data classification policy begins with defining the objectives and criteria for classification before assigning roles or controls.
2. NIST Special Publication 800-60 Vol. 1 Rev. 1, Guide for Mapping Types of Information and Information Systems to Security Categories. Section 2.1, "The Security Categorization Process," outlines the initial step as identifying the types of information to be protected. This act of identification and understanding is integral to defining the criteria for how they will be classified based on impact.
3. University of California, Berkeley, Data Classification Standard. This official university document exemplifies the correct process. The standard begins by defining the "Protection Levels" (PLs) and "Availability Levels" (ALs), which constitute the criteria for classification, before it proceeds to detail any roles, responsibilities, or specific controls. (See Section III, "UC Berkeley Data Classification," and Section IV, "UC Berkeley Data Availability Classification").
A full mesh topology offers the highest availability and fault tolerance. In this configuration, every node is directly connected to every other node in the network. This design creates the maximum number of redundant paths for data to travel between any two points. If a link or a node fails, traffic can be immediately rerouted through numerous alternative paths, ensuring that network communication remains uninterrupted. While costly and complex to implement, its inherent redundancy makes it the most resilient and available LAN topology.
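A quick way to see why the redundancy (and the cost) grows so fast: a full mesh of n nodes needs n(n-1)/2 links and gives every node n-1 directly attached neighbours to reroute through. The small sketch below simply evaluates that formula for a few example sizes:

```python
# Link counts for the topologies discussed above. A full mesh of n nodes needs
# n*(n-1)/2 links, which is what buys its path redundancy.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 8, 16):
    print(f"{n:>2} nodes: bus ~1 shared cable, full mesh {full_mesh_links(n)} links, "
          f"each node has {n - 1} directly connected neighbours")
```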
A. Bus topology: This topology has a single point of failure; a break in the shared central cable will cause the entire network to fail.
B. Tree topology: A failure of a central hub or the main backbone cable can disconnect entire segments or all of the network.
D. Partial mesh topology: It provides redundancy but less than a full mesh, as not all nodes are connected to each other, creating fewer alternative paths.
1. Stallings, W. (2016). Data and Computer Communications (10th ed.). Pearson. In Chapter 11.2, "Topologies," it is stated, "In a mesh topology, each station is connected to every other station... The primary advantage of the mesh topology is that it is very reliable or robust." The text implicitly supports that a full mesh is more reliable than a partial mesh.
2. Kurose, J. F., & Ross, K. W. (2021). Computer Networking: A Top-Down Approach (8th ed.). Pearson. Chapter 1, Section 1.3, "The Network Edge," discusses various access network types. The principles described illustrate that direct, multiple paths (a characteristic of full mesh) increase fault tolerance, a key component of availability.
3. Tanenbaum, A. S., & Wetherall, D. J. (2011). Computer Networks (5th ed.). Prentice Hall. Chapter 1, Section 1.3.2, "Local Area Networks," describes mesh networks as having high reliability due to the existence of multiple paths between nodes. It notes, "If one link becomes unusable, the network can often find a second path and work around it."
The RSA (Rivest-Shamir-Adleman) algorithm is a widely used asymmetric cryptosystem. Its security is fundamentally based on the computational difficulty of the integer factorization problem. The public key consists of a modulus n (the product of two large, secret prime numbers) and a public exponent e. To derive the corresponding private key, an attacker would need to determine the original prime factors of n. For sufficiently large numbers, factoring n is computationally infeasible with current technology, which ensures the security of the private key.
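The relationship between the public values and the factoring problem can be seen in a toy example with the classic textbook primes 61 and 53 (far too small to be secure, purely illustrative):

```python
# Toy RSA with tiny primes: once n is factored, the private exponent d
# follows immediately from phi = (p-1)*(q-1).
p, q = 61, 53
n = p * q                      # 3233 -- the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent; requires knowing p and q

message = 65
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(n, ciphertext, recovered)   # 3233 2790 65

# An attacker who factors n = 61 * 53 rebuilds phi and d the same way; for a
# 2048-bit n, that factoring step is what is computationally infeasible.
```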
A. El Gamal: Its security is based on the difficulty of solving the Discrete Logarithm Problem (DLP) over a finite field.
B. Elliptic Curve Cryptosystems (ECCs): Security relies on the Elliptic Curve Discrete Logarithm Problem (ECDLP), a more complex variant of the DLP.
D. International Data Encryption Algorithm (IDEA): This is a symmetric-key block cipher and does not use the principles of asymmetric cryptography like factoring or discrete logarithms.
1. Paar, C., & Pelzl, J. (2010). Understanding Cryptography: A Textbook for Students and Practitioners. Springer. In Chapter 6, Section 6.2, "The RSA Cryptosystem," it is stated: "The security of RSA relies on the fact that it is difficult to factor large integers." (p. 161).
2. Katz, J., & Lindell, Y. (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press. Chapter 11, Section 11.3, "The RSA Assumption," directly connects the security of the RSA cryptosystem to the hardness of the factoring problem. (p. 378).
3. National Institute of Standards and Technology (NIST). (2013). FIPS PUB 186-4: Digital Signature Standard (DSS). Section 5.1.1, "RSA Key Pair Generation," details the process where the modulus n is the product of two secret primes, p and q, establishing that the security relies on the secrecy of these factors. (DOI: https://doi.org/10.6028/NIST.FIPS.186-4).
4. Boneh, D. (1999). Twenty Years of Attacks on the RSA Cryptosystem. Notices of the American Mathematical Society, 46(2), 203-213. The paper's introduction states, "The security of the RSA system is based on the assumption that factoring a large number is difficult." (p. 203).
The Data Encryption Standard (DES) algorithm specifies a key of 64 bits in length. However, within this 64-bit key, every eighth bit (bits 8, 16, 24, 32, 40, 48, 56, and 64) is designated as a parity bit for error detection. These parity bits are discarded before the key-scheduling process begins. Consequently, only 56 of the 64 bits are actually used to generate the subkeys for the encryption rounds. This makes the effective key size 56 bits, which defines the algorithm's cryptographic strength against brute-force attacks.
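A minimal sketch of the parity-bit arithmetic (the 64-bit key shown is an arbitrary example value): dropping every eighth bit leaves exactly 56 bits for the key schedule.

```python
# Strip the 8 parity bits (every eighth bit) from a 64-bit DES key string
# and count what is left for the key schedule.
key_64 = "0001001100110100010101110111100110011011101111001101111111110001"
effective_bits = [b for i, b in enumerate(key_64, start=1) if i % 8 != 0]
print(len(key_64), "nominal bits ->", len(effective_bits), "effective bits")  # 64 -> 56
```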
B. 64 bits: This is the nominal key size, including the 8 parity bits, not the effective key size used in the cryptographic operations.
C. 128 bits: This is a common key size for modern symmetric algorithms like the Advanced Encryption Standard (AES), not for the legacy DES algorithm.
D. 1024 bits: This key length is characteristic of asymmetric cryptographic algorithms, such as RSA, not symmetric block ciphers like DES.
1. National Institute of Standards and Technology (NIST). (1999). FIPS PUB 46-3, Data Encryption Standard (DES). U.S. Department of Commerce. In Section 3, "THE ALGORITHM," it states, "The 64 bits of the key are denoted by K1, K2, ..., K64. The bits K8, K16, ..., K64 are for error detection... The 56 bits used in the algorithm are selected from the 64-bit key." (Page 4).
2. Boneh, D. (n.d.). CS255 Introduction to Cryptography, Lecture 5: DES. Stanford University. The lecture notes state, "DES uses a 64-bit key, but 8 of these bits are parity bits. So the effective key length is 56 bits." (Slide 10, "DES: The Data Encryption Standard").
3. Katz, J., & Lindell, Y. (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press. In Chapter 6, "The Data Encryption Standard (DES)," Section 6.2, "A High-Level Description of DES," it is explained that the initial 64-bit key is subjected to a permutation (PC-1) that discards the parity bits, resulting in a 56-bit key for the key-scheduling algorithm. (Page 178).
The MD5 (Message-Digest Algorithm 5) is a cryptographic hash function designed to produce a 128-bit hash value, also known as a message digest. Regardless of the size of the input data, the MD5 algorithm processes it and generates a fixed-size output of 128 bits. This output is typically represented as a 32-digit hexadecimal number. Although MD5 is now considered cryptographically broken and unsuitable for security applications like digital signatures, its output size remains a fundamental characteristic.
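A short illustration with Python's standard hashlib (for demonstration only, since MD5 should not be used for security): inputs of any size produce a 16-byte, i.e. 128-bit, digest rendered as 32 hex digits.

```python
import hashlib

# Regardless of input size, MD5 emits a 128-bit digest (32 hex characters).
for data in (b"", b"abc", b"x" * 1_000_000):
    d = hashlib.md5(data)
    print(len(data), "bytes in ->", d.digest_size * 8, "bits out:", d.hexdigest())
```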
B. 160 bits: This is the output size of the Secure Hash Algorithm 1 (SHA-1), a different and also deprecated hashing algorithm.
C. 256 bits: This is the output size for the SHA-256 algorithm, which is part of the more secure SHA-2 family of hash functions.
D. 128 bytes: This is incorrect as it equates to 1024 bits (128 bytes × 8 bits/byte), which is not the standard output size for MD5.
1. Rivest, R. (1992). The MD5 Message-Digest Algorithm. RFC 1321. Internet Engineering Task Force (IETF). In Section 1, "MD5 Algorithm Description," it states, "The algorithm takes as input a message of arbitrary length and produces as output a 128-bit 'fingerprint' or 'message digest' of the input." Available at: https://doi.org/10.17487/RFC1321
2. National Institute of Standards and Technology (NIST). (2023). Computer Security Resource Center (CSRC) Glossary: Message Digest 5 (MD5). The definition explicitly states, "A hash algorithm that produces a 128-bit hash value." Available at: https://csrc.nist.gov/glossary/term/messagedigest5
3. Katz, J., & Lindell, Y. (2020). Introduction to Modern Cryptography (3rd ed.). CRC Press. In Chapter 5, "Hash Functions and Applications," the text describes MD5 as a function that "outputs a 128-bit digest." (Specific reference: Section 5.1.1, "Constructions of Hash Functions").
4. Rivest, R. (2014). Lecture 9: Hash Functions. MIT OpenCourseWare, 6.857 Network and Computer Security, Fall 2014. The lecture notes specify the output sizes for various hash functions, listing MD5 with a 128-bit output. Available at: https://ocw.mit.edu/courses/6-857-network-and-computer-security-fall-2014/resources/mit6857f14lec9/
Wireless Transport Layer Security (WTLS) is the security layer of the Wireless Application Protocol (WAP) stack. It is specifically designed to provide security services for wireless environments, which are characterized by low bandwidth and high latency. WTLS ensures confidentiality through encryption, data integrity through message authentication codes (MACs), and authentication through digital certificates. It is functionally analogous to the Transport Layer Security (TLS) protocol used in the standard internet protocol suite but is optimized for constrained mobile devices and networks. Its primary goal is to secure the connection between a mobile client and a WAP gateway.
A. S-WAP is not a standard protocol within the WAP architecture; it is a distractor. Security in WAP is handled by a specific layer, not a generic "Secure-WAP" protocol.
C. WSP (Wireless Session Protocol) operates at the session layer, managing the establishment and termination of sessions. It does not provide cryptographic security services like encryption or integrity.
D. WDP (Wireless Datagram Protocol) is the transport layer of the WAP stack, analogous to UDP. It provides a datagram service but lacks any inherent security mechanisms.
1. Schulzrinne, H. (2002). WAP - Wireless Application Protocol. Columbia University, Department of Computer Science. CSEE 4119, Network Protocols and Applications. Slide 21 describes the WAP protocol stack, identifying WTLS as the security layer responsible for "authentication, privacy, integrity". Retrieved from https://www.cs.columbia.edu/~hgs/teaching/4119/f02/lect/wap.pdf
2. WAP Forum. (2001, April 6). Wireless Transport Layer Security Specification, Version 06-Apr-2001 (WAP-261-WTLS-20010406-a). Open Mobile Alliance. Section 5, "Goals of the WTLS Layer," p. 13, states, "The WTLS protocol is intended to provide privacy, data integrity and authentication between two communicating applications."
3. Penttinen, J. T. (2015). The Telecommunications Handbook: Engineering Guidelines for Fixed, Mobile and Satellite Systems. John Wiley & Sons. Chapter 10.2.2, "The WAP Protocol Stack," p. 418, explicitly states, "The Wireless Transport Layer Security (WTLS) provides security functions similar to TLS... It provides data integrity, privacy, and authentication..."
A standard 48-bit Media Access Control (MAC) address is divided into two equal parts. The first 24 bits (or 3 bytes) constitute the Organizationally Unique Identifier (OUI). The Institute of Electrical and Electronics Engineers (IEEE) Registration Authority assigns this unique 24-bit block to a specific hardware vendor. The vendor then assigns the remaining 24 bits to each network interface card (NIC) they produce, ensuring a globally unique hardware address for the device.
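The split can be illustrated with a few lines of Python (the MAC address shown is a made-up example, not a registered OUI):

```python
# Split a 48-bit MAC address into its 24-bit OUI (vendor) prefix and the
# 24-bit vendor-assigned device portion.
mac = "00:1A:2B:3C:4D:5E"          # hypothetical address
octets = mac.split(":")
oui, device_id = octets[:3], octets[3:]
print("OUI (24 bits):      ", "-".join(oui))        # 00-1A-2B
print("Device ID (24 bits):", "-".join(device_id))  # 3C-4D-5E
```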
A. 6 bits: This is an incorrect length and is insufficient to uniquely identify the vast number of hardware manufacturers globally.
B. 12 bits: This represents only half of the actual OUI length and does not align with the IEEE standard.
C. 16 bits: This value does not correspond to the standard byte-based (3 bytes x 8 bits) structure of the OUI.
1. IEEE Standards Association. "IEEE SA - OUI FAQ." The IEEE Registration Authority explicitly states, "An OUI is a 24-bit globally unique assigned number referenced by various standards." This is the vendor-identifying portion of a MAC address. (Accessed from the official IEEE-SA website under Registration Authority FAQs).
2. Kurose, J. F., & Ross, K. W. (2021). Computer Networking: A Top-Down Approach (8th ed.). Pearson. In Chapter 6, Section 6.2, "Link-Layer Addressing and ARP," the text explains that a MAC address consists of 6 bytes, with the first 3 bytes (24 bits) identifying the manufacturer, a value purchased from the IEEE.
3. Stallings, W. (2017). Data and Computer Communications (10th ed.). Pearson. Chapter 15, "Local Area Network Overview," describes the format of the 48-bit MAC address, specifying that the first 24 bits are the OUI that identifies the vendor of the network adapter.
4. Comer, D. E. (2018). Computer Networks and Internets (6th ed.). Pearson. In Chapter 13, "LAN Wiring, Physical Topology, and Interface Hardware," it is detailed that the IEEE assigns a unique 24-bit prefix, known as the OUI, to each manufacturer for use in the MAC addresses of their products.
A message digest is generated by a cryptographic hash function that processes the entire input file or message, not just a specific portion. The statement that the digest is calculated using "at least 128 bytes of the file" is incorrect. This confuses the size of the input data with the size of the output digest. A fundamental property of a hash function is to take a variable-length input and produce a fixed-length output. The entire input must be processed to ensure that any change to the original file, no matter how small, results in a different message digest, which is essential for integrity verification.
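A small sketch of why the whole input matters: flip a single bit anywhere in a (simulated) file and the fixed-size digest changes completely, which is exactly what makes the digest useful for integrity verification.

```python
import hashlib

# The entire input is hashed, so flipping one bit anywhere in a large "file"
# changes the fixed-size digest completely.
data = bytearray(b"A" * 10_000)                     # stand-in for a whole file
original = hashlib.sha256(bytes(data)).hexdigest()

data[9_999] ^= 0x01                                 # flip one bit of the last byte
modified = hashlib.sha256(bytes(data)).hexdigest()

print(original)
print(modified)
print("digests identical?", original == modified)   # False
```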
A. The original file cannot be created from the message digest.
This is a correct statement describing the one-way (pre-image resistance) property, a fundamental security requirement for any cryptographic hash function.
B. Two different files should not have the same message digest.
This is a correct statement describing the collision resistance property. It should be computationally infeasible to find two different inputs that produce the same hash output.
D. Messages digests are usually of fixed size.
This is a correct statement. A defining characteristic of a hash function is that it produces a fixed-length output (e.g., 256 bits for SHA-256) regardless of the input's size.
1. National Institute of Standards and Technology (NIST). (2015). FIPS PUB 180-4, Secure Hash Standard (SHS).
Section 5.1, "Padding the Message": This section details the process of taking the message of length l bits and padding it so that it can be processed. This confirms that the entire message is used as input, directly refuting option C.
Section 4, "PROPERTIES OF SECURE HASH ALGORITHMS": This section states, "For a given algorithm, the hash function is a one-way function; it is computationally infeasible to find a message that corresponds to a given message digest." This supports the correctness of option A. It also describes the goal of collision resistance, supporting option B.
Section 1, "INTRODUCTION": This section states that the secure hash algorithms "produce a condensed representation of a message called a message digest" and that this digest is a "fixed-size bit string." This supports the correctness of option D.
2. Katz, J., & Lindell, Y. (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press.
Chapter 5, Section 5.1, "Definitions": A hash function is formally defined as a function H that maps arbitrary-length inputs to a fixed-length output. This supports option D and refutes option C, which implies a partial or minimum input. The chapter further defines pre-image resistance (supporting A) and collision resistance (supporting B) as essential security properties.
3. Rivest, R. (1992). RFC 1321: The MD5 Message-Digest Algorithm. IETF.
Section 3, "MD5 Algorithm Description": The overview describes the algorithm operating on an "arbitrary-length message" to produce a "128-bit 'fingerprint' or 'message digest'." Step 1, "Append Padding Bits," explicitly details how the original message is padded to a specific length for processing, confirming the entire message is the input. This supports options D and A, while refuting C.
A circuit-level proxy operates at the session layer (OSI Layer 5). It validates the TCP handshake and establishes a "circuit" for the session, then relays traffic between the two endpoints without inspecting the application-layer data payload. In contrast, an application-level proxy (Layer 7) acts as a full intermediary, terminating the client connection, deeply inspecting the application protocol's content (e.g., HTTP commands), and then initiating a separate connection to the server. This deep inspection requires significantly more computational resources, resulting in higher processing overhead and greater latency compared to the more lightweight circuit-level proxy.
B. more difficult to maintain.
Application-level proxies are more complex as they require specific proxy software and configuration for each application protocol, making them more difficult to maintain.
C. more secure.
Application-level proxies are more secure because they can inspect and filter application-layer content, preventing protocol-specific attacks that circuit-level proxies cannot see.
D. slower.
Due to its lower processing overhead from not inspecting application data, a circuit-level proxy is generally faster than an application-level proxy.
1. Stallings, W., & Brown, L. (2018). Computer Security: Principles and Practice (4th ed.). Pearson. In Chapter 21.2, "Firewall Characteristics," the text explains that application-level gateways (proxies) add more processing overhead than circuit-level gateways.
2. National Institute of Standards and Technology (NIST). (2009). Special Publication 800-41 Revision 1: Guidelines on Firewalls and Firewall Policy. In Section 3.2, "Firewall Technologies," it is noted that application-layer proxies "can be a performance bottleneck" due to the overhead of examining and forwarding all packets. In contrast, circuit-level proxies are described as simply relaying traffic without content examination, implying lower overhead.
3. Kurose, J. F., & Ross, K. W. (2021). Computer Networking: A Top-Down Approach (8th ed.). Pearson. In Chapter 8, Section 8.7.2, "Firewalls," the text contrasts application gateways with packet filters, noting that the deep inspection performed by application gateways "incurs a performance penalty." This principle directly applies to the comparison with circuit-level proxies, which perform less inspection and are therefore faster.
The Diffie-Hellman (DH) algorithm is a foundational cryptographic protocol used for key exchange or key agreement. Its primary function is to allow two parties, with no prior shared secret, to jointly establish a shared secret key over an insecure communication channel. This generated key can then be used for a symmetric encryption algorithm (like AES) to secure subsequent communications. The DH protocol itself does not perform encryption, integrity checks, or provide non-repudiation; its sole purpose is the secure establishment of a shared secret.
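The agreement mechanism is easy to see with toy numbers (a sketch only; the prime p = 23 and the private values below are illustrative, and real deployments use 2048-bit or larger groups, or elliptic curves):

```python
# Toy Diffie-Hellman: both sides end with the same secret without ever
# transmitting it. Only A and B travel over the insecure channel.
p, g = 23, 5                      # public parameters
a, b = 6, 15                      # each party's private value

A = pow(g, a, p)                  # Alice sends A = 8
B = pow(g, b, p)                  # Bob sends   B = 19

alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
print(alice_secret, bob_secret, alice_secret == bob_secret)   # 2 2 True
```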
A. Confidentiality: DH enables confidentiality by creating a key for symmetric ciphers, but it does not provide confidentiality directly.
C. Integrity: DH offers no mechanism to verify that data has not been altered. This is the function of hashing algorithms like SHA-256.
D. Non-repudiation: DH does not provide proof of origin. This is achieved with digital signature algorithms like RSA or ECDSA.
1. National Institute of Standards and Technology (NIST) Special Publication 800-56A Revision 3, Recommendation for Pair-wise Key-Establishment Schemes Using Discrete Logarithm Cryptography, April 2018.
Page 1, Section 1 (Introduction): "This Recommendation specifies key-establishment schemes... Such a scheme is called a Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH) key-agreement scheme, and the process is called key agreement."
2. Internet Engineering Task Force (IETF) RFC 2631, Diffie-Hellman Key Agreement Method, June 1999.
Page 1, Section 1 (Introduction): "The Diffie-Hellman method allows two parties to agree upon a shared secret value in a manner that is secure against eavesdroppers. This value can then be converted into cryptographic keying material."
3. Rivest, R. L., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2), 120-126.
Page 124, Section VI (Public-Key Schemes): The paper, while introducing RSA, references the work of Diffie and Hellman, stating, "Diffie and Hellman have proposed a scheme... in which user A can send a message to user B so that only B can read it." It clarifies this is achieved by first establishing a key, describing the DH protocol as a "public-key distribution system." DOI: https://doi.org/10.1145/359340.359342
4. Katz, J., & Lindell, Y. (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press.
Page 356, Section 10.3 (The Diffie-Hellman Protocols): "The Diffie-Hellman key-exchange protocol is a method by which two parties can compute a shared key... The protocol is secure against an eavesdropper who observes the entire interaction."
The ISO/OSI model is a conceptual framework that standardizes the functions of a network into seven logical layers. Its primary purpose is to guide product development and foster interoperability between different vendors and network technologies. It is a reference model, not an implementation or a protocol used for active network management. The task of querying network devices for operational statistics like packet counts and routing tables is performed by a network management protocol, such as the Simple Network Management Protocol (SNMP), which itself operates at the Application Layer (Layer 7) of the OSI model.
A. The OSI model is a fundamental standard reference model for network communications, designed to ensure interoperability.
C. A core objective of the OSI model is to provide a standardized framework that allows dissimilar systems to communicate effectively.
D. The model is explicitly defined by its structure of seven distinct layers, often referred to as the OSI protocol stack.
---
1. ISO/IEC 7498-1:1994, Information technology – Open Systems Interconnection – Basic Reference Model: The Basic Model. Section 1, "Scope," states, "This Reference Model provides a common basis for the coordination of standards development for the purpose of systems interconnection..." It defines the model's purpose as a standard for interoperability, not an active management tool.
2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. In Chapter 1, Section 1.5, "Network Layers," the text describes the OSI model as a conceptual and architectural framework with seven layers, contrasting it with the five-layer Internet protocol stack. It does not describe it as a tool for querying devices.
3. Stallings, W. (2014). Data and Computer Communications (10th ed.). Pearson. Chapter 2, "Protocol Architecture, TCP/IP, and Internet-Based Applications," describes the OSI model as a "structured set of protocols in layers" and a "model for a computer network architecture." This confirms its role as a standard model (A, D) for enabling communication (C).
4. Case, J., Mundy, R., Partain, D., & Stewart, B. (2002). RFC 3411: An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks. IETF. The abstract clearly defines SNMP's purpose: "An SNMP management system contains... an agent, which has access to management instrumentation." This instrumentation includes data like packet counts, which is distinct from the OSI model's function.
Simple Network Management Protocol (SNMP) is designed for managing and monitoring devices on an internal network. Exposing SNMP to the internet presents a significant security risk, as it can reveal sensitive network configuration, device status, and performance data. Attackers can exploit this information for network reconnaissance or launch denial-of-service attacks. Best security practices dictate that SNMP traffic should be blocked at the perimeter firewall and only permitted on trusted, internal management networks.
B. SMTP: Simple Mail Transfer Protocol is essential for sending and receiving email from the internet. It is normally allowed through a firewall to a designated mail server.
C. HTTP: Hypertext Transfer Protocol is the primary protocol for web browsing. Outbound HTTP is required for users to access websites, and inbound is needed to host a public web server.
D. SSH: Secure Shell provides encrypted remote administration. While it must be carefully secured, it is often permitted through a firewall for authorized administrators to manage systems remotely.
1. National Institute of Standards and Technology (NIST). (2009). Guidelines on Firewalls and Firewall Policy (Special Publication 800-41 Revision 1).
Page 29, Section 4.3.2, "Service-Specific Issues": "SNMP is another protocol that organizations should usually block at their firewalls. SNMP is used to remotely manage network devices. Attackers can use SNMP to gain extensive information on a network's configuration... Because of these risks, organizations should block SNMP at their firewalls."
2. Stallings, W., & Brown, L. (2018). Computer Security: Principles and Practice (4th ed.). Pearson.
Chapter 21, "Firewalls and Intrusion Prevention Systems," Section 21.2, "Firewall Characteristics": This section discusses firewall filtering policies. It emphasizes that services intended only for local use, such as SNMP, should not be exposed externally. The principle is to deny all traffic by default and only permit services that are explicitly required for business with the external world.
3. Carnegie Mellon University, Software Engineering Institute. (2002). State of the Practice of Firewall Deployment and Management (CMU/SEI-2002-TN-013).
Page 16, Section 3.2.2, "Filtering Rules": The document advises filtering protocols that provide information about the internal network to outsiders. It lists SNMP as a protocol that is "often filtered" at the firewall boundary due to the valuable network information it can provide to an attacker.
The Address Resolution Protocol (ARP) is a communication protocol used for discovering the link-layer address, such as a MAC address, associated with a given internet-layer address, typically an IPv4 address. When a host needs to send a packet to another host on the same local network, it knows the destination IP address but not the hardware (MAC) address. ARP sends a broadcast request packet to all devices on the local network asking which device is using that specific IP address. The device with the corresponding IP address replies with its MAC address, allowing the original host to encapsulate the IP packet in a Layer 2 frame and send it directly.
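Functionally, ARP behaves like the lookup sketched below (a simplification: the real protocol uses a Layer 2 broadcast request and a unicast reply, and the addresses here are hypothetical), including the cache that avoids re-broadcasting for every packet:

```python
arp_cache = {}                                   # IP -> MAC, learned from replies

def resolve(ip, network_hosts):
    if ip in arp_cache:                          # cache hit: no broadcast needed
        return arp_cache[ip]
    # "Who has <ip>? Tell me." -- broadcast; the owner answers with its MAC.
    for host_ip, host_mac in network_hosts.items():
        if host_ip == ip:
            arp_cache[ip] = host_mac             # cache the answer for next time
            return host_mac
    return None                                  # no reply: host absent or off-link

lan = {"192.168.1.20": "00:1A:2B:3C:4D:5E", "192.168.1.30": "00:AA:BB:CC:DD:EE"}
print(resolve("192.168.1.20", lan))              # 00:1A:2B:3C:4D:5E (via "broadcast")
print(resolve("192.168.1.20", lan))              # same answer, served from the cache
```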
A. Routing tables: These are used by routers to determine the path and next-hop IP address for a packet, not to resolve a specific IP to a hardware address.
C. Reverse address resolution protocol (RARP): This protocol performs the opposite function of ARP; it maps a known hardware address to an IP address.
D. Internet Control Message Protocol (ICMP): This protocol is used for network diagnostics and to report errors in IP packet processing (e.g., ping, destination unreachable).
1. Plummer, D. C. (November 1982). RFC 826: An Ethernet Address Resolution Protocol. Internet Engineering Task Force (IETF). "Abstract: A protocol for mapping dynamically between a 32-bit Internet Address and a 48-bit Ethernet address is presented." Retrieved from https://datatracker.ietf.org/doc/html/rfc826
2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. In Chapter 5, Section 5.4.1, "The Link Layer: Links, Access Networks, and LANs," the text states, "the Address Resolution Protocol (ARP)... The role of ARP is to translate IP addresses to link-layer addresses."
3. MIT OpenCourseWare. (Spring 2018). 6.033 Computer System Engineering, Lecture 15: The Network Layer. Massachusetts Institute of Technology. The lecture notes describe ARP's function: "How does a host A find out the Ethernet address for a host B on the same physical network, given B's IP address? Address Resolution Protocol (ARP)." (p. 5).
A packet-filtering firewall evaluates the IP and transport-layer (TCP/UDP/ICMP) headers of each packet and applies rules that "permit" or "deny" traffic based on source/destination addresses, protocol type, and, critically, TCP or UDP port (service) numbers. Administrators therefore create rules that open only the port numbers explicitly authorized for business-necessary applications and block all others. By design, the firewall does not knowingly allow unauthorized or undefined ports, nor does it deal with vague "ex-service" or "integer" concepts; thus option A precisely states its selective-allow capability.
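A minimal sketch of that rule evaluation (the ruleset is hypothetical): each packet's protocol and destination port are checked against an ordered list of permit/deny rules, with an implicit default deny for anything not explicitly authorized.

```python
RULES = [
    {"proto": "tcp", "dport": 443, "action": "permit"},   # HTTPS
    {"proto": "tcp", "dport": 25,  "action": "permit"},   # SMTP to mail relay
    {"proto": "udp", "dport": 161, "action": "deny"},     # SNMP blocked at the edge
]

def filter_packet(proto, dport):
    for rule in RULES:                                     # first match wins
        if rule["proto"] == proto and rule["dport"] == dport:
            return rule["action"]
    return "deny"                                          # implicit default deny

print(filter_packet("tcp", 443))   # permit
print(filter_packet("tcp", 23))    # deny (telnet not authorized)
```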
B. "Unauthorized" ports are explicitly blocked, not enabled, by packet-filtering policy.
C. "Ex-service numbers" is not a recognized networking term; statement is imprecise.
D. Although ports are integers, "service integers" is non-standard phrasing; answer is vague and less accurate than A.
1. NIST Special Publication 800-41 Rev. 1, "Guidelines on Firewalls and Firewall Policy," Section 3.1.2, Packet-Filtering Firewalls, pp. 19-20.
2. Cheswick, W. R., Bellovin, S. M., & Rubin, A. D. (2003). Firewalls and Internet Security (2nd ed.). Addison-Wesley. Chapter 6, pp. 181-182.
3. Cisco Systems. (2021). "IP Access List Overview," Cisco IOS 15 Configuration Guide, section "Standard vs. Extended ACLs," para. 2.
4. Stanford University CS244E. (2020). "Firewalls and Packet Filtering" lecture notes, Slide 8.
Confidentiality in electronic communication is achieved by ensuring that only the intended recipient can read the message. In an asymmetric (public-key) cryptographic system, this is accomplished when the sender encrypts the message using the recipient's publicly available key. The resulting ciphertext can only be decrypted by the corresponding private key, which is held exclusively by the recipient. This process ensures that even if the message is intercepted, its contents remain secret from any unauthorized party. This is a fundamental principle of public-key infrastructure (PKI) used in secure email standards like S/MIME and PGP.
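A brief sketch with the third-party Python cryptography package (the plaintext is an arbitrary example; real email standards such as S/MIME and PGP encrypt a one-time session key this way rather than the message itself): encryption uses the recipient's public key, and only the matching private key can decrypt.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates the key pair; only the recipient ever holds private_key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # freely published

# The sender encrypts with the recipient's PUBLIC key...
ciphertext = public_key.encrypt(
    b"meet at 09:00",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...and only the recipient's PRIVATE key can recover the plaintext.
plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(plaintext)   # b'meet at 09:00'
```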
A. The sender encrypting it with its private key.
This action creates a digital signature, which provides authentication, integrity, and non-repudiation, not confidentiality. Anyone with the sender's public key can decrypt it.
B. The sender encrypting it with its public key.
Encrypting with one's own public key is not useful for communication, as only the sender (who holds the private key) could decrypt it.
D. The sender encrypting it with the receiver's private key.
The sender should never have access to the receiver's private key. A private key must remain secret to its owner to maintain the security of the system.
---
1. National Institute of Standards and Technology (NIST) Special Publication 800-32, Introduction to Public Key Technology and the Federal PKI Infrastructure.
Section 2.2, "Public Key Cryptography," Paragraph 3: "To provide confidentiality for a message, the sender encrypts the message with the public key of the intended recipient. The recipient then uses his/her private key to decrypt the message. Only the recipient has the private key that corresponds to the public key and is therefore the only person who can decrypt the message."
2. Internet Engineering Task Force (IETF) RFC 4880, OpenPGP Message Format.
Section 2.1, "Public-Key-Encrypted Messages": This section details the process where a one-time session key is generated, used to encrypt the message data, and then this session key itself is encrypted with the recipient's public key. This ensures that only the holder of the corresponding private key can decrypt the session key and, subsequently, the message.
3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education.
Chapter 2, "Cryptography," Section 2.3, "Public Key Encryption": The text explains, "To send a secure message to [a recipient], you fetch a copy of [their] public key... You then encrypt your message using that public key... When [the recipient] receives the ciphertext, [they] decrypt it with [their] private key." This academic text confirms the standard procedure for ensuring confidentiality.
A dynamic packet-filtering firewall, also known as a stateful inspection firewall, operates by maintaining a state table that tracks active network connections. When an internal client initiates an outgoing request, the firewall adds an entry to this table with details like source/destination IP addresses and port numbers. It then dynamically creates a temporary rule allowing the expected incoming reply packets that match the state table entry. Once the session terminates, the entry and its associated temporary rule are removed. This mechanism is precisely what is described in the question.
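A minimal sketch of the state-table idea (hypothetical addresses and ports): an outbound connection creates an entry, matching reply traffic is permitted dynamically, and unsolicited inbound packets fall through to a deny.

```python
state_table = set()                                     # tracked connection tuples

def outbound(src, sport, dst, dport, proto="tcp"):
    state_table.add((proto, src, sport, dst, dport))    # dynamic "rule" created
    return "permit"

def inbound(src, sport, dst, dport, proto="tcp"):
    # A reply has the original source and destination pairs swapped.
    if (proto, dst, dport, src, sport) in state_table:
        return "permit"
    return "deny"                                       # unsolicited -> dropped

outbound("10.0.0.5", 51000, "203.0.113.7", 443)
print(inbound("203.0.113.7", 443, "10.0.0.5", 51000))   # permit (tracked reply)
print(inbound("198.51.100.9", 443, "10.0.0.5", 51000))  # deny (no state entry)
```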
A. packet filtering: This is a stateless firewall that evaluates each packet in isolation against a static access control list (ACL) and does not track connection states.
B. Circuit level proxy: This firewall validates TCP handshakes at the session layer but does not typically inspect individual packets or create dynamic packet-level ACLs in this manner.
D. Application level proxy: This firewall acts as an intermediary for specific applications (e.g., HTTP), inspecting content at the application layer, which is a different and more complex mechanism.
1. National Institute of Standards and Technology (NIST) Special Publication 800-41 Revision 1, Guidelines on Firewalls and Firewall Policy. Section 2.1.2, "Stateful Inspection Firewalls," states: "Stateful inspection firewalls... are able to determine if a packet is part of an existing, valid connection... When a connection is initiated, the firewall adds the connection to its state table. From that point forward, any packets that are part of that connection are allowed to pass without being re-evaluated against the rule set." This describes the dynamic allowance of return traffic.
2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. In Chapter 8.6.1, "Firewalls," the text describes stateful packet filters: "A stateful filter tracks the state of TCP connections... The filter can then determine whether an arriving TCP segment is a legitimate part of an established connection... or is a bogus TCP segment." This tracking and dynamic allowance based on connection state is the core of the question.
3. Stallings, W., & Brown, L. (2018). Computer Security: Principles and Practice (4th ed.). Pearson. In Chapter 21.1, "Firewall Characteristics," the text differentiates between static packet filters and stateful inspection firewalls, explaining that the latter "maintains a directory of outbound TCP connections... An entry is made for each established connection. The packet filter will now allow incoming traffic to high-numbered ports only for those packets that fit the profile of one of the entries in the directory." This directly supports the chosen answer.
Layer 4 of the Open Systems Interconnection (OSI) model is the Transport Layer. This layer is responsible for providing reliable, end-to-end communication services for applications. It segments data from the upper layers, establishes and terminates connections, and handles error control and flow control to ensure data is transferred completely and in the correct order. The two most common protocols at this layer are the Transmission Control Protocol (TCP), which is connection-oriented and reliable, and the User Datagram Protocol (UDP), which is connectionless and offers faster but less reliable data transfer.
A. The data link layer is Layer 2, which manages node-to-node data transfer between two directly connected nodes and handles physical addressing.
C. The network layer is Layer 3, which is responsible for logical addressing (e.g., IP addresses) and routing packets across multiple networks.
D. The presentation layer is Layer 6, which translates, encrypts, and compresses data to a format acceptable for the application layer.
1. International Organization for Standardization. (1994). ISO/IEC 7498-1:1994 Information technology – Open Systems Interconnection – Basic Reference Model: The Basic Model. Section 7.4, "Transport layer," pp. 29-35. This standard formally defines the seven layers of the OSI model, explicitly identifying Layer 4 as the Transport layer.
2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Chapter 1, Section 1.5.1, "The OSI model," provides a clear description of the seven layers, stating, "The transport layer (layer 4) transports application-layer messages between application endpoints."
3. Balakrishnan, H., & Terman, C. (2012). 6.02 Introduction to EECS II: Digital Communication Systems, Fall 2012. Massachusetts Institute of Technology: MIT OpenCourseWare. Lecture 1 Notes, "Introduction and The Five-Layer Internet Model," p. 4. The lecture notes map the Internet model to the OSI model, identifying the Transport Layer as Layer 4.
Link encryption operates on a hop-by-hop basis, securing data only over a specific physical link, such as between a workstation and a switch or between two routers. At each intermediate node (e.g., a router), the entire packet is decrypted so the device can read the header and routing information. The packet is then re-encrypted before being sent to the next hop. Because the data is in plaintext inside each intermediate device, it is not continuously encrypted from the original source to the final destination. Statement C describes end-to-end encryption, which is a different security mechanism.
A. This is a true statement. Link encryption is specifically applied to encrypt all data traffic traversing a single, specific communication link or path segment.
B. This is a true statement. The primary purpose of link encryption is to protect the confidentiality of data as it crosses a vulnerable link, directly countering eavesdropping.
D. This is a true statement. A defining feature of link encryption is that it encrypts the entire packet or frame, including headers, trailers, and routing information, not just the user payload.
1. Stallings, W. (2020). Cryptography and Network Security: Principles and Practice (8th ed.). Pearson. In Section 19.1, "Link Versus End-to-End Encryption," the text states, "With link encryption, each vulnerable communications link is equipped on both ends with an encryption device... the message is decrypted and then encrypted again. Thus, the entire message is in the clear in each switch." This directly confirms that information is not encrypted for its entire journey (refuting C) and that all data, including headers, is encrypted on the link (supporting D).
2. Whitman, M. E., & Mattord, H. J. (2021). Principles of Information Security (7th ed.). Cengage Learning. In Chapter 8, "Cryptology," the distinction is made: "In link encryption, the data are encrypted right before they are placed on the physical communications link... In end-to-end encryption, the encryption is usually initiated at the OSI model's application layer." This highlights that link encryption is tied to the link, not the entire journey.
3. Kurose, J. F., & Ross, K. W. (2021). Computer Networking: A Top-Down Approach (8th ed.). Pearson. Chapter 8, "Security in Computer Networks," discusses security at different layers. Link-layer security protocols like WPA3 are described as securing the link between a client and a wireless access point, but not beyond. This illustrates the hop-by-hop nature of link encryption, contrasting with end-to-end protocols like TLS.
The International Data Encryption Algorithm (IDEA) is a symmetric-key block cipher designed by Xuejia Lai and James L. Massey. It operates on 64-bit blocks of data. The algorithm's design specifies a fixed key size of 128 bits. This 128-bit key is used to generate a series of 52 16-bit subkeys that are applied during the eight rounds of encryption. The combination of a 128-bit key and a complex round structure provides its security against common cryptanalytic attacks like differential and linear cryptanalysis.
A. 64 bits: This is the block size of the IDEA cipher, which is the amount of data encrypted at one time, not the length of the secret key.
C. 160 bits: This size is not associated with IDEA. It is commonly the output size for hash functions like SHA-1 or a key size for certain elliptic curve algorithms.
D. 192 bits: This is one of the valid key sizes for the Advanced Encryption Standard (AES), specifically AES-192, but it is not used by IDEA.
1. Lai, X., & Massey, J. L. (1991). A Proposal for a New Block Encryption Standard. In D. W. Davies (Ed.), Advances in Cryptology – EUROCRYPT '91 (Lecture Notes in Computer Science, Vol. 547, pp. 389-404). Springer-Verlag. Section 2, "Description of the Proposed Cipher," states: "The proposed cipher is an iterated cipher with a block size of 64 bits and a key of 128 bits." (p. 390). DOI: https://doi.org/10.1007/3-540-46416-636
2. Menezes, A. J., van Oorschot, P. C., & Vanstone, S. A. (1996). Handbook of Applied Cryptography. CRC Press. Chapter 7, "Block Ciphers," Section 7.6, "IDEA," states: "IDEA is a 64-bit iterated block cipher with a 128-bit key." (p. 266).
3. Daemen, J., & Rijmen, V. (2002). The Design of Rijndael: AES - The Advanced Encryption Standard. Springer. Chapter 2, "Block Cipher Principles," Section 2.4.3, "IDEA," notes: "IDEA has a block length of 64 bits and a key length of 128 bits." (p. 22).
The One-time pad (OTP) is a stream cipher defined by its unique properties. It requires a pre-shared secret key that is truly random and at least as long as the plaintext message. The encryption process involves a modular addition of the plaintext with the key to produce the ciphertext. For binary data, this operation is equivalent to a bitwise XOR. When implemented correctly, meaning the key is random, used only once, and kept secret, the OTP provides perfect secrecy and is theoretically unbreakable. The question's description precisely matches the operational mechanism of the one-time pad.
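A minimal Python sketch of this mechanism, assuming secrets.token_bytes as the randomness source and an illustrative message, is:

import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Perfect secrecy requires a truly random key exactly as long as the message.
    assert len(key) == len(plaintext), "OTP key must match the message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"ATTACK AT DAWN"                # illustrative plaintext
key = secrets.token_bytes(len(message))    # random, used once, kept secret
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)   # XOR with the same key reverses it
assert recovered == message

Because XOR is its own inverse, the same function decrypts; the scheme stays unbreakable only if the key is never reused for a second message.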
A. Running key cipher: This cipher uses a long, non-random key, such as text from a book, making it susceptible to cryptanalysis, unlike a true one-time pad.
C. Steganography: This is the practice of concealing the existence of a message within another medium, not a method of encrypting its content with a key.
D. Cipher block chaining: This is a mode of operation for block ciphers, not a cipher itself. It uses a fixed-length key, which is not equal to the message length.
1. Katz, J., & Lindell, Y. (2021). Introduction to Modern Cryptography (3rd ed.). CRC Press. Chapter 2, Section 2.1, "The One-Time Pad," states: "Let the plaintext be a bit string of length ℓ. The key is a uniformly chosen bit string of the same length ℓ... Encryption is done by XORing the plaintext and the key." (p. 26).
2. Stallings, W. (2017). Cryptography and Network Security: Principles and Practice (7th ed.). Pearson. Chapter 2, Section 2.3, "One-Time Pad," describes the cipher: "The one-time pad... uses a random key that is as long as the message... The encryption operation is the exclusive-OR (XOR)." (p. 41).
3. University of California, Berkeley. (2020). CS 161: Computer Security, Lecture 6, "Stream Ciphers & One-Time Pad." The lecture notes specify the requirements for a one-time pad: "Key is a truly random sequence of bits of the same length as the message... C = P ⊕ K." (Slide 18).
The Internet Key Exchange (IKE) protocol is a core component of the IPsec protocol suite. Its primary role is to establish a Security Association (SA) between two communicating endpoints. This process involves two critical functions: first, it performs mutual authentication to verify the identity of the peers (e.g., using pre-shared keys or digital certificates). Second, it negotiates the cryptographic keys and algorithms that will be used by the other IPsec protocols (AH and ESP) to secure the actual data traffic. IKE itself does not encrypt or sign the data packets; it sets up the secure channel for those operations to occur.
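As a rough illustration of the key-agreement step that IKE automates, the Python sketch below performs a toy Diffie-Hellman exchange. The prime and generator are deliberately tiny and purely illustrative; real IKE uses standardized groups (RFC 3526 and related documents) and wraps the exchange in authenticated negotiation messages.

import secrets

p = 4294967291   # small prime for illustration only; far too small for real use
g = 5
a = secrets.randbelow(p - 2) + 1   # initiator's private value
b = secrets.randbelow(p - 2) + 1   # responder's private value
A = pow(g, a, p)                   # public values carried in the IKE exchange
B = pow(g, b, p)
shared_initiator = pow(B, a, p)    # both peers derive the same secret,
shared_responder = pow(A, b, p)    # which seeds the keys handed to ESP and AH
assert shared_initiator == shared_responder

The resulting shared secret feeds the Security Association's keying material; protecting the actual traffic with those keys is then the job of ESP or AH, not IKE.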
B. data encryption: This is incorrect. Data encryption (confidentiality) is the function of the Encapsulating Security Payload (ESP) protocol, which uses the keys established by IKE.
C. data signature: This is incorrect. Data integrity and origin authentication are provided by the Authentication Header (AH) and ESP protocols, not directly by IKE.
D. enforcing quality of service: This is incorrect. Quality of Service (QoS) is a network traffic management mechanism and is not a function of the IKE protocol.
1. Kent, S., & Seo, K. (2005). Security Architecture for the Internet Protocol. RFC 4301. Section 4.5, "Key Management," states: "The mandatory-to-implement key management protocol for IPsec is the Internet Key Exchange (IKEv2)... IKE can be used to establish SAs for AH and/or ESP. It includes provisions for peer authentication, negotiation of security services, and key generation/refreshing." (p. 19).
2. National Institute of Standards and Technology (NIST). (2020). Special Publication 800-77 Rev. 1: Guide to IPsec VPNs. Section 2.3, "Internet Key Exchange," states: "IKE is the component of IPsec that provides for the authentication of the IPsec peers, negotiation of IKE and IPsec security services, and generation of the keys used by the IPsec security services." (p. 8). DOI: https://doi.org/10.6028/NIST.SP.800-77r1
3. Kaufman, C., Hoffman, P., Eronen, P., & Nir, Y. (2014). Internet Key Exchange Protocol Version 2 (IKEv2). RFC 7296. The abstract states: "IKE is a component of IPsec used for performing mutual authentication and establishing and maintaining Security Associations (SAs)." (p. 1).
4. Massachusetts Institute of Technology (MIT) OpenCourseWare. (2014). 6.857 Computer and Network Security, Lecture 15: Network Security II. The lecture notes describe the IPsec architecture, stating that IKE is used to "Establish a shared key via Diffie-Hellman" and "Authenticate each other," which are the core components of key exchange and peer authentication. (Slide 18).
The X.400 standard, developed by the ITU-T, is a suite of recommendations that defines the architecture and protocols for Message Handling Systems (MHS). It provides a framework for a global electronic messaging service, often considered a predecessor to modern internet email. Its primary concern is the store-and-forward handling of electronic messages between users, specifying components like User Agents (UA) and Message Transfer Agents (MTA) and the protocols for their interaction. The standard ensures interoperability between different messaging systems that conform to its specifications.
B. X.500: This is a series of standards for directory services, defining how to build and access a global, distributed directory, not for handling the messages themselves.
C. X.509: This standard defines the format for public key certificates used in a Public Key Infrastructure (PKI), which is for identity verification, not message transport.
D. X.800: This standard defines the security architecture for Open Systems Interconnection (OSI), providing a general framework for security services, not a specific message handling protocol.
1. International Telecommunication Union (ITU). (1999). Recommendation ITU-T X.400 | ISO/IEC 10021-1: Information technology – Message handling systems (MHS): System and service overview. Section 1, "Scope," states: "This Recommendation | International Standard is an overview of the MHS model, the services provided by MHS, and the protocols used in MHS."
2. International Telecommunication Union (ITU). (2019). Recommendation ITU-T X.500 | ISO/IEC 9594-1: Information technology – Open Systems Interconnection – The Directory: Overview of concepts, models and services. Section 1, "Scope," defines the standard as providing an overview of "The Directory," a service for looking up information about objects.
3. International Telecommunication Union (ITU). (2019). Recommendation ITU-T X.509 | ISO/IEC 9594-8: Information technology – Open Systems Interconnection – The Directory: Public-key and attribute certificate frameworks. Section 1, "Scope," specifies that the standard defines frameworks for public-key certificates and attribute certificates.
4. International Telecommunication Union (ITU). (1991). Recommendation ITU-T X.800: Security architecture for Open Systems Interconnection for CCITT applications. Section 1, "Scope," states: "This Recommendation defines the general security-related architectural elements which can be applied appropriately in the circumstances for which security protection is required."
5. Radicati, S. (1992). X.400 and SMTP: Battle of the E-mail Standards. Van Nostrand Reinhold. Chapter 2, "The X.400 Message Handling System," pp. 11-13, describes the fundamental purpose and architecture of X.400 as a comprehensive system for electronic message exchange. (This is a peer-reviewed academic/technical publication.)
Transport Layer Security (TLS) is architecturally composed of two primary layers. The lower layer is the TLS Record Protocol, which is responsible for securing application data using the parameters established during the handshake. The upper layer consists of three sub-protocols that manage the connection: the TLS Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. The TLS Handshake Protocol is the most significant of these, as it is used by the client and server to authenticate each other, negotiate a cipher suite, and establish the shared secret keys that will be used for the session.
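The layering can be observed with Python's standard ssl module: wrapping a TCP socket drives the Handshake Protocol, after which application data is carried by the Record Protocol. The host name and HTTP request below are only examples.

import socket
import ssl

context = ssl.create_default_context()   # certificate validation enabled

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        # wrap_socket() drives the TLS Handshake Protocol: peer authentication,
        # cipher-suite negotiation, and key establishment.
        print("negotiated version:", tls_sock.version())
        print("cipher suite:", tls_sock.cipher())
        # Application data below is fragmented and protected by the Record Protocol.
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))

The values returned by version() and cipher() are exactly the session parameters that the Handshake Protocol negotiated for the Record Protocol to use.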
A. The Internet Protocol (IP) operates at the Network Layer (Layer 3) of the OSI model, whereas TLS operates above the Transport Layer (Layer 4).
B. This is a generic and inaccurate term. The TLS Record Protocol is the specific component that handles the fragmentation, compression, and protection of application data.
C. A "Link Protocol" refers to the Data Link Layer (Layer 2). TLS does not operate at this layer; it is a higher-level protocol.
1. Dierks, T., & Rescorla, E. (2008). The Transport Layer Security (TLS) Protocol Version 1.2. IETF, RFC 5246. Section 6, "TLS Protocols," states: "At the core of the TLS protocol is the TLS Record Protocol... The TLS Handshake Protocol is layered on top of the TLS Record Protocol."
2. Rescorla, E. (2018). The Transport Layer Security (TLS) Protocol Version 1.3. IETF, RFC 8446. Section 2, "Protocol Overview," describes the layered architecture: "At the lowest level, the Record Protocol... At the highest level, the following four protocols are defined: the Handshake Protocol, the Alert Protocol, the Change Cipher Spec Protocol, and the Application Data Protocol."
3. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Section 8.3, "Securing TCP Connections: TLS," describes the two main sub-protocols: the TLS Record Protocol and the Handshake Protocol.
Blowfish was designed in 1993 by Bruce Schneier as a fast, free, and public-domain alternative to existing proprietary encryption algorithms. A primary design goal was for it to be unpatented, license-free, and available for all uses without restriction. In contrast, RC2 and RC4 were developed by Ron Rivest for RSA Security and were initially maintained as proprietary trade secrets. Skipjack was developed by the U.S. National Security Agency (NSA) as a classified algorithm for use in the Clipper chip, making it a government-proprietary system until its declassification in 1998. Therefore, Blowfish is the only algorithm listed that was not designed to be proprietary.
A. RC2: Was designed by RSA Security as a proprietary trade secret, intended as a drop-in replacement for DES.
B. RC4: Was also designed for RSA Security as a proprietary trade secret until its source code was anonymously leaked in 1994.
D. Skipjack: Was a classified, government-proprietary algorithm developed by the U.S. NSA for its controversial Clipper chip initiative.
1. Schneier, B. (1994). Description of a New Variable-Length Key, 64-Bit Block Cipher (Blowfish). In R. Anderson (Ed.), Fast Software Encryption, FSE 1993, Lecture Notes in Computer Science, Vol. 809. Springer, Berlin, Heidelberg. The introduction (p. 191) states: "Blowfish is unpatented and license-free, and is available free for all uses." DOI: https://doi.org/10.1007/3-540-58108-124
2. National Institute of Standards and Technology (NIST). (1998, May 29). SKIPJACK and KEA Algorithm Specifications, Version 2.0. Section 1, "Introduction" (p. 1), states: "The SKIPJACK algorithm was developed by the U.S. Government... The algorithm is classified..." This document marks its declassification for public evaluation.
3. Rivest, R. (1998). RFC 2268: A Description of the RC2(r) Encryption Algorithm. Internet Engineering Task Force (IETF). Section 1, "Introduction," notes that RC2 is a proprietary algorithm of RSA Data Security, Inc.
4. Kaufman, C., Perlman, R., & Speciner, M. (2002). Network Security: Private Communication in a Public World (2nd ed.). Prentice Hall. Chapter 14, "Algorithms," discusses the history of RC4 as a trade secret of RSA Security until it was leaked, and describes Skipjack's origin with the NSA and the Clipper chip (Section 14.3, "Stream Ciphers," for RC4; Section 14.2, "Block Ciphers," for Skipjack).
The Network layer (Layer 3) of the OSI/ISO model is responsible for providing the functional and procedural means of transferring variable-length data sequences from a source host on one network to a destination host on a different network. Its primary function is path determination, or routing. It uses logical addressing (e.g., IP addresses) to identify hosts and employs routing protocols to calculate the best path for packets to traverse the internetwork. This layer encapsulates segments from the Transport layer into packets and passes them down to the Data Link layer.
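A small Python sketch using the standard ipaddress module illustrates the logical addressing and longest-prefix-match routing decision described above; the prefixes come from the RFC 5737 documentation ranges and the routing table entries are hypothetical.

import ipaddress

local_net = ipaddress.ip_network("192.0.2.0/24")   # directly attached network
routing_table = {
    ipaddress.ip_network("198.51.100.0/24"): "next hop 192.0.2.1",
    ipaddress.ip_network("0.0.0.0/0"): "default gateway 192.0.2.254",
}
destination = ipaddress.ip_address("198.51.100.7")

if destination in local_net:
    print("deliver directly on the local network")
else:
    # Longest-prefix match: the essence of Layer 3 path determination.
    best = max((net for net in routing_table if destination in net),
               key=lambda net: net.prefixlen)
    print("forward via", routing_table[best])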
A. Session layer: Manages, establishes, and terminates connections (sessions) between applications. It does not handle data routing.
B. Physical layer: Responsible for the transmission and reception of unstructured raw bit streams over a physical medium; it has no concept of routing.
D. Transport layer: Provides reliable end-to-end data transfer and error correction but relies on the Network layer to route the data segments.
1. Tanenbaum, A. S., & Wetherall, D. J. (2011). Computer Networks (5th ed.). Pearson Education. Chapter 5, "The Network Layer," Section 5.1, states: "The network layer is concerned with getting packets from the source all the way to the destination... a key design issue is determining how packets are routed from source to destination." (p. 379).
2. ISO/IEC 7498-1:1994, Information technology – Open Systems Interconnection – Basic Reference Model: The Basic Model. Section 7.5.1, "Purpose of the Network Layer," states that this layer provides the means to transfer data between end systems, and Section 7.5.4.2, "Routing," explicitly lists routing as a function of the Network Layer.
3. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Chapter 4, "The Network Layer: Data Plane," introduces the layer's two key functions, forwarding and routing; routing is described as determining the route or path taken by packets as they flow from a sender to a receiver (p. 306).
4. Dordal, P. L. (2019). An Introduction to Computer Networks (2.0 ed.). Loyola University Chicago, Department of Computer Science. Chapter 1, "An Overview of Networks," Section 1.6, "The Network Layer," states: "The network layer is responsible for routing packets from a source to a destination." (p. 23).
Fiber optic cables transmit data as pulses of light through thin strands of glass or plastic. Since the signal is light (photons) rather than an electrical current (electrons), it is not affected by external electromagnetic fields. This inherent immunity to electromagnetic interference (EMI) and radio frequency interference (RFI) prevents signal degradation from nearby power lines, motors, or other sources of electrical noise. This property, combined with extremely low signal attenuation, allows fiber optic cables to be used over distances of many kilometers, far exceeding the length limitations of copper-based cabling.
B. Coaxial cable: While it has a metallic shield to protect against EMI, it is not completely immune because it still transmits electrical signals through a copper conductor.
C. Twisted Pair cable: This cable type uses the twisting of copper wire pairs to cancel out EMI, but it remains susceptible to interference, limiting its effective length (typically to 100 meters).
D. Axial cable: This is not a standard term for a type of network cabling and serves as a distractor.
1. The Fiber Optic Association (FOA). (n.d.). The FOA Guide to Fiber Optics & Premises Cabling, "Fiber Vs. Copper." The guide states: "Fiber is also immune to EMI (electromagnetic interference). The fiber itself is made of glass, which is an insulator, so no electric current can flow along it... This is a big advantage in many industrial and urban environments." It also notes: "Because of lower attenuation and no interference, fiber can be run much longer distances than copper." Retrieved from the FOA Online Reference Guide.
2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Chapter 2, Section 2.2.2, "Physical Media," describes guided media; the discussion of fiber optics highlights its immunity to electromagnetic interference and its low signal attenuation, which allow high-speed data transmission over distances ranging from 500 meters to hundreds of kilometers.
3. Gallager, R. G., & Bertsekas, D. (2012). 6.02 Introduction to EECS II: Digital Communication Systems, Course Notes. Massachusetts Institute of Technology: MIT OpenCourseWare. Chapter 7, "Channels," discusses different transmission media. Section 7.1.3, "Optical Fiber," notes that "Optical fiber is immune to electromagnetic interference and has very low attenuation," contrasting it with copper media such as twisted pair and coaxial cable, which are described as being susceptible to such interference.
Layer 2 Forwarding (L2F) is a tunneling protocol developed by Cisco that is now considered obsolete. It was a precursor to the Layer 2 Tunneling Protocol (L2TP), which was created by the Internet Engineering Task Force (IETF) to merge the best features of L2F and Microsoft's Point-to-Point Tunneling Protocol (PPTP). Because L2TP became the industry standard, L2F is no longer developed or used in modern network environments, making it the least likely protocol on this list to be used for creating a VPN today.
A. L2TP is still used, typically in conjunction with IPSec for encryption, and is supported by many modern operating systems and network devices.
B. PPTP, while heavily deprecated due to significant security vulnerabilities, was widely adopted and may still be found in legacy systems or specific use cases.
C. IPSec is a secure, robust, and widely implemented protocol suite that forms the basis for most modern, standards-based VPNs.
1. Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn, G., & Palter, B. (1999). RFC 2661: Layer Two Tunneling Protocol "L2TP". IETF. Section 1.1, "Introduction and Protocol Overview," states: "L2TP represents a synthesis of the best features of two earlier tunneling protocols: Cisco's Layer 2 Forwarding (L2F) and Microsoft's Point-to-Point Tunneling Protocol (PPTP)." This document establishes L2TP as the standardized successor to L2F. Available at: https://doi.org/10.17487/RFC2661
2. Kaufman, C., Perlman, R., & Speciner, M. (2002). Network Security: Private Communication in a Public World (2nd ed.). Prentice Hall. Chapter 18, "Tunneling (VPNs)," discusses the history of VPN protocols, noting that L2F and PPTP were competing proprietary protocols that were ultimately superseded by the IETF standard L2TP.
3. Goralski, W. (2017). The Illustrated Network: How TCP/IP Works in a Modern Network (2nd ed.). Morgan Kaufmann. Chapter 15, "Virtual Private Networks (VPNs)," describes L2F as a "Cisco-proprietary" protocol that was "combined with PPTP to form L2TP," highlighting its replacement and subsequent obsolescence.
The Open Systems Interconnection (OSI) model divides network functions into seven layers. The Data Link Layer (Layer 2) is responsible for node-to-node data transfer and is subdivided into two sublayers: the Logical Link Control (LLC) sublayer and the Media Access Control (MAC) sublayer. The MAC sublayer interfaces directly with the Physical Layer (Layer 1) and is responsible for controlling how devices in a network gain access to the medium and for the physical addressing (MAC addresses) of network interface cards.
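As a small illustration of the physical addressing handled at this sublayer, Python's standard uuid.getnode() exposes the 48-bit MAC address of one local interface; the formatting below is just an example, and getnode() may return a random value if no hardware address can be read.

import uuid

mac_int = uuid.getnode()   # 48-bit hardware address of one local interface
mac_str = ":".join(f"{(mac_int >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))
print("Layer 2 (MAC sublayer) physical address:", mac_str)
# Contrast with Layer 3, where logical IP addresses are used for routing.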
A. The Transport layer (Layer 4) provides end-to-end communication services for applications and is not involved with physical addressing or media access.
B. The Network layer (Layer 3) is responsible for logical addressing (e.g., IP addresses) and routing data packets between different networks.
D. The Physical layer (Layer 1) defines the physical and electrical specifications for transmitting raw bits over a medium but does not manage access or addressing.
1. IEEE Std 802-2014, IEEE Standard for Local and Metropolitan Area Networks: Overview and Architecture. Section 5.2.2, "Data Link Layer," states: "The Data Link Layer is divided into two sublayers: Medium Access Control (MAC) and Logical Link Control (LLC)." Figure 2, "Relationship of IEEE 802 standards to the OSI reference model," visually confirms this structure.
2. Kurose, J. F., & Ross, K. W. (2021). Computer Networking: A Top-Down Approach (8th ed.). Pearson. Chapter 6, "The Link Layer: Links, Access Networks, and LANs," Section 6.1, "Introduction to the Link Layer," describes the Data Link Layer's services, including media access control.
3. Stallings, W. (2014). Data and Computer Communications (10th ed.). Pearson. Chapter 15, "Local Area Network Overview," Section 15.2, "LAN Protocol Architecture," details the IEEE 802 reference model, explicitly placing the MAC sublayer within the Data Link Layer, just above the Physical Layer.
Real-time replication continuously copies data from a primary system to a secondary, geographically separate system as transactions occur. This strategy directly addresses the requirements for "continuous data protection" and "minimal downtime." By maintaining an up-to-date copy of the data, the organization can achieve a near-zero Recovery Point Objective (RPO), meaning very little to no data is lost. In the event of a failure, a rapid failover to the replicated site allows for a very low Recovery Time Objective (RTO), ensuring services are restored quickly. This makes it the superior choice for critical systems demanding high availability and data resilience.
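The sketch below, written in Python with entirely hypothetical class and site names, captures the essence of synchronous real-time replication: a write is acknowledged only after both the primary and the secondary copy hold it, which is what yields the near-zero RPO described above.

class Store:
    """Stand-in for a storage system at one site (hypothetical)."""
    def __init__(self, name: str):
        self.name = name
        self.data = {}
    def write(self, key: str, value: str) -> None:
        self.data[key] = value

primary = Store("primary-site")
replica = Store("dr-site")

def replicated_write(key: str, value: str) -> None:
    # Synchronous replication: durable at both sites before the client is
    # acknowledged, so no acknowledged transaction can be lost.
    primary.write(key, value)
    replica.write(key, value)
    print(f"ack: '{key}' committed at {primary.name} and {replica.name}")

replicated_write("order-1001", "paid")
# Failover: the replica already holds every acknowledged write (RPO ~ 0),
# so promoting it restores service quickly (low RTO).
assert replica.data == primary.data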
A. Full Backup: This is a point-in-time copy. It does not provide continuous protection, and any data created after the backup is lost upon failure.
C. Cold Storage: This is an archival solution for infrequently accessed data. It is characterized by low cost but very high retrieval times, making it unsuitable for quick recovery.
D. Incremental Backup: This captures changes since the last backup, but it is still a point-in-time method, not continuous. Restoration can be complex and slower than failover to a replicated site.
1. National Institute of Standards and Technology (NIST). (2010). Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 3.4.2, "Alternate Site," describes a hot site, the fastest recovery option: "A hot site is a fully operational facility... This type of site would have systems... with real-time mirroring/replication of the production site data." This directly links real-time replication to the goal of minimal downtime.
2. AWS Well-Architected Framework. (2023). Reliability Pillar Whitepaper, "Disaster recovery (DR) strategies" (p. 23), compares different DR approaches. It contrasts Backup and Restore with strategies such as Pilot Light, Warm Standby, and Multi-site active/active, all of which rely on data replication to achieve lower RPO and RTO, and explicitly states that replication is used to "reduce recovery time."
3. Coulouris, G., Dollimore, J., & Kindberg, T. (2012). Distributed Systems: Concepts and Design (5th ed.). Pearson Education. Chapter 18, "Replication," Section 18.1, "Introduction," explains that replication is a key technique for enhancing performance and increasing availability and fault tolerance: "If data are replicated at two or more servers, then clients can access data from any of the servers... If one server fails, clients can still access the data at the other servers." This academic text establishes replication as the fundamental strategy for high availability.