In a neural network, the activation value of a neuron is determined by a combination of inputs from
the previous layer, the weights of the connections, and the bias at the neuron level. Here’s a detailed
breakdown:
Inputs for Activation Value:
Activation Values of Neurons in the Previous Layer: These are the outputs from neurons in the
preceding layer that serve as inputs to the current neuron.
Weights Assigned to the Connections: Each connection between neurons has an associated weight,
which determines the strength and direction of the input signal.
Individual Bias at the Neuron Level: Each neuron has a bias value that adjusts the input sum, allowing
the activation function to be shifted.
Calculation:
The activation value is computed by summing the weighted inputs from the previous layer and
adding the bias.
Formula: z = ∑(wᵢ · aᵢ) + b, where wᵢ are the weights, aᵢ are the activation values from the previous layer, and b is the bias.
The activation function (e.g., sigmoid, ReLU) is then applied to this sum z to obtain the final activation value, as the sketch below illustrates.
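To make the calculation concrete, here is a minimal Python sketch of a single neuron, assuming a sigmoid activation function; the function name and all numeric values are illustrative, not taken from the syllabus.

```python
import math

# A minimal sketch of computing one neuron's activation value.
# The input activations, weights, and bias below are illustrative values.

def neuron_activation(prev_activations, weights, bias):
    # z = sum(w_i * a_i) + b: weighted sum of the previous layer's
    # activations plus the neuron's individual bias.
    z = sum(w * a for w, a in zip(weights, prev_activations)) + bias
    # Apply an activation function (sigmoid here) to get the final value.
    return 1.0 / (1.0 + math.exp(-z))

a_prev = [0.5, 0.1, 0.9]  # activation values of neurons in the previous layer
w = [0.4, -0.2, 0.7]      # weights assigned to the connections
b = 0.1                   # individual bias at the neuron level

print(neuron_activation(a_prev, w, b))  # sigmoid(0.81 + 0.1) ~= 0.713
```

Note how all three components appear: dropping any one of them (as options B, C, and D below do) would change z and therefore the neuron's output.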
Why Option A is Correct:
Option A correctly identifies all three components involved in computing the activation value: the individual bias at the neuron level, the activation values of the neurons in the previous layer, and the weights assigned to the connections.
Eliminating Other Options:
B. Activation values of neurons in the previous layer, and weights assigned to the connections between the neurons: This option misses the bias, which is crucial.
C. Individual bias at the neuron level, and weights assigned to the connections between the neurons: This option misses the activation values from the previous layer.
D. Individual bias at the neuron level, and activation values of neurons in the previous layer: This option misses the weights, which are essential.
Reference:
ISTQB CT-AI Syllabus, Section 6.1, "Neural Networks", which discusses the components and functioning of neurons in a neural network.
ISTQB CT-AI Syllabus, Section 6.1.1, "Neural Network Activation Functions".