Budmark is a CPU benchmark based on brute-force searching for optimal error-correcting codes (ECC). The algorithm finds codewords with minimum Hamming distance 5, making it a pure integer/bit-manipulation workload that fits entirely in cache.
All results use unoptimized compilation (-O0) to measure raw CPU performance without compiler tricks.
Modern Hardware Results (2026)
Single-Core Performance
| CPU | Architecture | Clock | iter/sec | vs Xeon PII |
|---|---|---|---|---|
| Intel i5-4590 | x86-64 Haswell | 3.3GHz | 32,631 | 251,000x |
| Pi 5 Cortex-A76 | ARMv8 | 2.4GHz | 18,454 | 142,000x |
| Pi 4 Cortex-A72 | ARMv8 | 1.5GHz | 8,049 | 62,000x |
| Pi Zero ARM1176 | ARMv6 | 1.0GHz | 1,326 | 10,200x |
Multi-Core Performance
| CPU | Cores | Single (iter/sec) | Multi (iter/sec) | Scaling | vs Xeon PII |
|---|---|---|---|---|---|
| Intel i5-4590 | 4 | 32,631 | 121,500 | 3.73x | 935,000x |
| Pi 5 Cortex-A76 | 4 | 18,454 | 73,000 | 3.96x | 562,000x |
| Pi 4 Cortex-A72 | 4 | 8,049 | 30,200 | 3.75x | 232,000x |
| Pi Zero ARM1176 | 1 | 1,326 | — | — | 10,200x |
Historical Results (1998-2005)
From the original Budmark benchmark page.
| CPU | Clock | Run Time (s) | iter/sec | Efficiency | OS |
|---|---|---|---|---|---|
| P4 | 3800MHz | 1.27 | 0.79 | 71.6% | WinXP Cygwin |
| P4 | 3000MHz | 1.57 | 0.64 | 73.7% | Linux 2.6.11 |
| P4 | 2266MHz | 2.29 | 0.44 | 66.7% | FBSD 4.4 |
| P4 | 1800MHz | 4.23 | 0.24 | 45.5% | Win2K Cygwin |
| P3 | 1150MHz | 3.06 | 0.33 | 98.4% | OBSD 3.1 |
| VIA Eden | 1000MHz | 8.07 | 0.12 | 42.9% | Win2K Cygwin |
| P3 | 866MHz | 3.99 | 0.25 | 100.2% | Slackware |
| Celeron | 766MHz | 4.67 | 0.21 | 96.8% | FBSD 4.6.2 |
| AMD-K7 | 550MHz | 7.28 | 0.14 | 86.5% | RedHat |
| Celeron | 533MHz | 6.72 | 0.15 | 96.7% | FBSD 4.6.2 |
| Xeon PII | 450MHz | 7.70 | 0.13 | 100% | FBSD 3.0 |
| AMD-K6 | 450MHz | 8.47 | 0.12 | 90.9% | FBSD 2.2.7 |
| Xeon PII | 400MHz | 8.69 | 0.12 | 99.6% | FBSD |
| AMD-K6 | 350MHz | 10.31 | 0.10 | 95.9% | FBSD |
| Intel PII | 333MHz | 10.75 | 0.09 | 96.8% | FBSD 4.6.2 |
| AMD-K6 | 300MHz | 12.64 | 0.08 | 91.3% | FBSD |
| Cyrix GXm | 233MHz | 32.31 | 0.03 | 46.0% | FBSD |
| Pentium | 166MHz | 38.21 | 0.026 | 54.6% | FBSD 2.1.0 |
| IBM Power2 | 135MHz | 38.61 | 0.026 | 66.4% | AIX XLC -O2 |
| Pentium | 133MHz | 47.90 | 0.021 | 54.4% | FBSD |
| Pentium | 120MHz | 52.92 | 0.019 | 54.5% | FBSD |
| 486DX2 | 66MHz | 115.46 | 0.0087 | 45.4% | FBSD 2.2.7 |
| 486DX | 66MHz | 153.38 | 0.0065 | 34.2% | FBSD 3.1 |
| 486 | 33MHz | 230.42 | 0.0043 | 45.5% | FBSD 3.1 |
| 386DX | 40MHz | 537.78 | 0.0019 | 16.1% | FBSD 3.1 |
| 386 | 40MHz | 784.51 | 0.0013 | 11.0% | FBSD 3.1 |
| 386 | 16MHz | 1997.80 | 0.0005 | 10.8% | FBSD 3.1 |
Efficiency Analysis
Efficiency measures work-per-clock-cycle, normalized to Xeon PII 450MHz = 100%.
| CPU | Clock | Efficiency | Notes |
|---|---|---|---|
| P3 866MHz | 866MHz | 100% | Peak efficiency era |
| Xeon PII | 450MHz | 100% | Baseline |
| P4 3800MHz | 3800MHz | 72% | NetBurst penalty |
| P4 1800MHz | 1800MHz | 46% | Early P4 very inefficient |
| i5-4590 | 3300MHz | ~34,000% | Modern IPC gains |
| Pi 5 A76 | 2400MHz | ~26,000% | ARM efficiency |
Key observation: The Pentium 4 (NetBurst) architecture traded efficiency for clock speed. A P4 at 3.8GHz was only ~6x faster than a Xeon PII at 450MHz, despite having 8.4x the clock speed.
Modern CPUs have recovered efficiency through:
- Wider superscalar pipelines with better branch prediction
- Larger caches (L1/L2/L3)
- Out-of-order execution improvements
- Better memory controllers
Raspberry Pi Comparison
| Model | CPU | Clock | Cores | Price | Multi iter/sec | Value (iter/sec per $) |
|---|---|---|---|---|---|---|
| Pi Zero | ARM1176 | 1.0GHz | 1 | $5 | 1,326 | 265 |
| Pi 4 | Cortex-A72 | 1.5GHz | 4 | $35 | 30,200 | 863 |
| Pi 5 | Cortex-A76 | 2.4GHz | 4 | $60 | 73,000 | 1,217 |
Pi 5 offers the best performance per dollar for compute workloads.
All Pi models show near-perfect multicore scaling (3.75-3.96x on 4 cores).
Test Commands
Single-core test:

```bash
gcc -O0 -o ecc4_original ecc4.c -lm
time ./ecc4_original 100000
```
Hollow Core Fibers: A Leap Forward in Optical Technology
Hollow core fibers (HCFs) represent a groundbreaking evolution in optical fiber technology, offering a suite of advantages that overcome some of the fundamental limitations of traditional solid-core glass fibers. By guiding light through a central channel of air or vacuum instead of a solid glass core, HCFs unlock significant performance benefits that are poised to revolutionize telecommunications, high-power laser delivery, and sensing applications.
The Core Advantage: Light Through Air
The primary and most profound advantage of hollow core fiber is that light travels through its hollow center at approximately 99.7% the speed of light in a vacuum. This is a significant leap compared to conventional solid-core fibers, where light is slowed by about 30-31% as it passes through the glass medium. This fundamental difference gives rise to several key benefits:
Lower Latency
The most celebrated advantage of HCFs is their significantly lower latency. Because light travels faster in the hollow core, data transmitted through these fibers arrives at its destination sooner. This reduction in signal travel time is critical for applications where every microsecond counts, such as:
High-Frequency Trading: Financial institutions can gain a competitive edge with faster trade execution.
Data Center Interconnects: Reducing latency between data centers improves the performance of cloud computing and distributed applications.
5G and Beyond: Future wireless networks will require extremely low latency for applications like autonomous vehicles and real-time remote surgery.
Reduced Nonlinear Effects
When high-intensity light travels through a solid medium like glass, it can induce nonlinear effects that distort the signal and limit the amount of power that can be transmitted. Since the light in an HCF primarily travels through air, the interaction with the glass is minimized. This leads to:
Higher Power Transmission: HCFs can carry significantly more optical power without signal degradation or damage to the fiber. This is a major advantage for industrial applications like laser cutting and welding, as well as in directed energy systems.
Improved Signal Quality: The reduction in nonlinear effects ensures a cleaner signal over longer distances, which is beneficial for high-bandwidth telecommunications.
Potential for Lower Loss
While HCFs traditionally had higher signal loss (attenuation) than their solid-core counterparts, recent advancements have dramatically improved their performance. Researchers have now demonstrated HCFs with attenuation levels comparable to, and in some cases even lower than, the best solid-core fibers. In theory, HCFs have the potential for significantly lower loss because air is inherently more transparent than glass. Lower loss means:
Longer Transmission Distances: Signals can travel further before they need to be amplified, reducing the need for costly repeater stations in long-haul networks.
Broader Transmission Bandwidth
Hollow core fibers can guide a wider range of light wavelengths (colors) with low loss. This opens up the possibility of using more of the optical spectrum for data transmission, a technique known as wavelength-division multiplexing (WDM). A broader usable bandwidth directly translates to a higher data-carrying capacity for a single fiber.
Low Dispersion
Dispersion is the phenomenon where different wavelengths of light travel at slightly different speeds, causing optical pulses to spread out and overlap, which can corrupt the data. HCFs can be engineered to have very low chromatic dispersion, ensuring that the signal remains sharp and distinct over long distances.
Enhanced Environmental Stability
Hollow core fibers are also less sensitive to external factors compared to solid-core fibers. They exhibit:
Reduced Thermal Sensitivity: Temperature fluctuations have less impact on the signal traveling through the air core.
Radiation Resistance: The reduced interaction with the glass makes them more suitable for use in high-radiation environments like space or nuclear facilities.
The Trade-Offs
Despite their numerous advantages, hollow core fibers are still an emerging technology and currently face challenges in terms of higher manufacturing costs and greater complexity in splicing and connecting the fibers compared to the well-established solid-core technology. However, as research and manufacturing processes continue to mature, these barriers are expected to diminish, paving the way for wider adoption of this transformative technology.
There was a package called dummynet that would simulate loss and latency on networks. What is there like this for Linux today?
Dummynet was originally developed for FreeBSD to simulate various network properties like bandwidth limitations, latency, and packet loss. For Linux, there are a few tools that can be used to achieve similar network simulations. The most notable one is tc (Traffic Control), which is part of the iproute2 package.
Here's a brief overview of how you can use tc to simulate network conditions:
Latency: To introduce a delay of 100ms on all outgoing packets on the interface eth0:
```bash
tc qdisc add dev eth0 root netem delay 100ms
```
Packet Loss: To simulate a 10% packet loss on the interface eth0:
```bash
tc qdisc add dev eth0 root netem loss 10%
```
Combine Latency and Packet Loss: To introduce both a delay of 100ms and a 10% packet loss:
```bash
tc qdisc add dev eth0 root netem delay 100ms loss 10%
```
Bandwidth Limitation: To limit bandwidth to, say, 1mbit:
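One common way is the token bucket filter (tbf) qdisc; the burst and latency parameters below are illustrative defaults, not values from the original text:

```bash
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
```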
Removing the rules: To remove the rules and return to normal:
```bash
tc qdisc del dev eth0 root
```
Additionally, there are other tools and utilities, such as:
Comcast: It's a tool that allows you to simulate bad network connections, so you can test your applications under various conditions.
NetEm: not a separate tool but the kernel queuing discipline that tc configures in the examples above; it provides the actual impairments like delay, loss, duplication, and reordering.
Always remember, if you're going to apply any of these changes to a live system, especially one serving real users or critical functions, take care to ensure you understand the implications and have a rollback plan. Also, testing in a controlled environment first is always a good idea.
Testing distributed systems under hard failures like network partitions and instance termination is critical, but it's also important to test them under the less catastrophic conditions they most often experience. Comcast is a tool designed to simulate common network problems like latency, bandwidth restrictions, and dropped/reordered/corrupted packets.