fix(cosim): align golden_reference ADC sign conversion with RTL
The golden reference computed (adc_val - 128) << 9, which subtracts
128 << 9 = 65536, but the Verilog RTL computes
{1'b0,adc,9'b0} - {1'b0,8'hFF,9'b0}/2, which subtracts 0x1FE00/2 =
0xFF00 = 65280. The 65536 - 65280 = 256 difference appears as a
constant 256-LSB DC offset between the golden reference and the RTL
for all 256 ADC codes.
The bit-accurate model in fpga_model.py already uses the correct RTL
formula. This aligns golden_reference.py to match.
Verified: all 256 ADC input values now produce zero offset against
fpga_model.py.
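The arithmetic above can be checked directly. The sketch below (standalone, with hypothetical helper names) evaluates both formulas over all 256 ADC codes and confirms the constant 256-LSB offset:

```python
def golden_old(adc_val):
    # Old golden reference: (adc - 128) << 9 subtracts 128 << 9 = 65536
    return (adc_val - 128) << 9

def rtl_exact(adc_val):
    # RTL: {1'b0,adc,9'b0} - {1'b0,8'hFF,9'b0}/2 = (adc << 9) - 0xFF00
    return (adc_val << 9) - 0xFF00

# Difference between RTL and old golden reference across all ADC codes
offsets = {rtl_exact(a) - golden_old(a) for a in range(256)}
print(offsets)  # {256}
```

The offset is input-independent because both expressions share the (adc << 9) term and differ only in the constant subtracted.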
@@ -292,7 +292,7 @@ def run_ddc(adc_samples):
     # adc_signed_w = {1'b0, adc_data, 9'b0} - {1'b0, 8'hFF, 9'b0}/2
     # Simplified: center around zero, scale to 18-bit
     adc_val = int(adc_samples[n])
-    adc_signed = (adc_val - 128) << 9  # Approximate RTL sign conversion to 18-bit
+    adc_signed = (adc_val << 9) - 0xFF00  # Exact RTL: {1'b0,adc,9'b0} - {1'b0,8'hFF,9'b0}/2
     adc_signed = saturate(adc_signed, 18)

     # NCO lookup (ignoring dithering for golden reference)
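The hunk calls a saturate() helper that is not shown in this diff. A plausible sketch of such a helper (an assumption, not the repository's actual implementation) clamps a signed value to the representable range of an N-bit two's-complement word:

```python
def saturate(value, bits):
    # Hypothetical sketch of the saturate() helper used in the hunk:
    # clamp a signed integer to the N-bit two's-complement range.
    lo = -(1 << (bits - 1))
    hi = (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

# With the exact RTL formula, every ADC code 0..255 maps into
# [-0xFF00, 0xFF00], well inside the 18-bit range, so saturation
# is a no-op for this path:
print(saturate((255 << 9) - 0xFF00, 18))  # 65280
```

Note that with the corrected sign conversion the 18-bit saturation never actually clips, since the extremes are ±0xFF00 = ±65280 versus a signed 18-bit limit of ±131071.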