Saturday, January 17, 2026

FROM 2019 => AI tool automatically reveals how to write apps that drain less battery


jesse650

Fri, Jan 16, 9:11 PM

The only thing more consistent than AI is climate change.


>Differential energy profiling: Energy optimization via diffing similar apps.

Budmark CPU Benchmark Results

Original benchmark by John Sokol (1998-2005)

Updated with modern hardware: January 2026

Overview

Budmark is a CPU benchmark based on brute-force searching for optimal error-correcting codes (ECC). The algorithm finds codewords with minimum Hamming distance 5, making it a pure integer/bit-manipulation workload that fits entirely in cache.

All results use unoptimized compilation (-O0) to measure raw CPU performance without compiler tricks.
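
To give a feel for the workload, here is a minimal sketch of this style of search; it is illustrative only, not the actual ecc4.c source, whose word size, parameters, and loop structure may differ. It greedily keeps every 16-bit word that differs from all previously kept words in at least 5 bit positions, so the hot loop is nothing but XORs, popcounts, and branches over a cache-resident array.

#include <stdio.h>

/* number of differing bit positions (GCC/Clang builtin) */
static int hamming(unsigned a, unsigned b) {
    return __builtin_popcount(a ^ b);
}

int main(void) {
    static unsigned codes[4096];
    int n = 0;
    for (unsigned w = 0; w < (1u << 16); w++) {
        int ok = 1;
        for (int i = 0; i < n; i++)
            if (hamming(w, codes[i]) < 5) { ok = 0; break; }
        if (ok && n < 4096)
            codes[n++] = w;   /* keep words far from every kept word */
    }
    printf("found %d 16-bit codewords with pairwise distance >= 5\n", n);
    return 0;
}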


Modern Hardware Results (2026)

Single-Core Performance

CPU             | Architecture   | Clock  | iter/sec | vs Xeon PII
Intel i5-4590   | x86-64 Haswell | 3.3GHz | 32,631   | 251,000x
Pi 5 Cortex-A76 | ARMv8          | 2.4GHz | 18,454   | 142,000x
Pi 4 Cortex-A72 | ARMv8          | 1.5GHz | 8,049    | 62,000x
Pi Zero ARM1176 | ARMv6          | 1.0GHz | 1,326    | 10,200x

Multi-Core Performance

CPU             | Cores | Single | Multi   | Scaling | vs Xeon PII
Intel i5-4590   | 4     | 32,631 | 121,500 | 3.73x   | 935,000x
Pi 5 Cortex-A76 | 4     | 18,454 | 73,000  | 3.96x   | 562,000x
Pi 4 Cortex-A72 | 4     | 8,049  | 30,200  | 3.75x   | 232,000x
Pi Zero ARM1176 | 1     | 1,326  | n/a     | n/a     | 10,200x

Historical Results (1998-2005)

From the original Budmark benchmark page.

CPU        | Clock   | Run Time (s) | iter/sec | Efficiency | OS
P4         | 3800MHz | 1.27         | 0.79     | 71.6%      | WinXP Cygwin
P4         | 3000MHz | 1.57         | 0.64     | 73.7%      | Linux 2.6.11
P4         | 2266MHz | 2.29         | 0.44     | 66.7%      | FBSD 4.4
P4         | 1800MHz | 4.23         | 0.24     | 45.5%      | Win2K Cygwin
P3         | 1150MHz | 3.06         | 0.33     | 98.4%      | OBSD 3.1
VIA Eden   | 1000MHz | 8.07         | 0.12     | 42.9%      | Win2K Cygwin
P3         | 866MHz  | 3.99         | 0.25     | 100.2%     | Slackware
Celeron    | 766MHz  | 4.67         | 0.21     | 96.8%      | FBSD 4.6.2
AMD-K7     | 550MHz  | 7.28         | 0.14     | 86.5%      | RedHat
Celeron    | 533MHz  | 6.72         | 0.15     | 96.7%      | FBSD 4.6.2
Xeon PII   | 450MHz  | 7.70         | 0.13     | 100%       | FBSD 3.0
AMD-K6     | 450MHz  | 8.47         | 0.12     | 90.9%      | FBSD 2.2.7
Xeon PII   | 400MHz  | 8.69         | 0.12     | 99.6%      | FBSD
AMD-K6     | 350MHz  | 10.31        | 0.10     | 95.9%      | FBSD
Intel PII  | 333MHz  | 10.75        | 0.09     | 96.8%      | FBSD 4.6.2
AMD-K6     | 300MHz  | 12.64        | 0.08     | 91.3%      | FBSD
Cyrix GXm  | 233MHz  | 32.31        | 0.03     | 46.0%      | FBSD
Pentium    | 166MHz  | 38.21        | 0.026    | 54.6%      | FBSD 2.1.0
IBM Power2 | 135MHz  | 38.61        | 0.026    | 66.4%      | AIX XLC -O2
Pentium    | 133MHz  | 47.90        | 0.021    | 54.4%      | FBSD
Pentium    | 120MHz  | 52.92        | 0.019    | 54.5%      | FBSD
486DX2     | 66MHz   | 115.46       | 0.0087   | 45.4%      | FBSD 2.2.7
486DX      | 66MHz   | 153.38       | 0.0065   | 34.2%      | FBSD 3.1
486        | 33MHz   | 230.42       | 0.0043   | 45.5%      | FBSD 3.1
386DX      | 40MHz   | 537.78       | 0.0019   | 16.1%      | FBSD 3.1
386        | 40MHz   | 784.51       | 0.0013   | 11.0%      | FBSD 3.1
386        | 16MHz   | 1997.80      | 0.0005   | 10.8%      | FBSD 3.1

Efficiency Analysis

Efficiency measures work-per-clock-cycle, normalized to Xeon PII 450MHz = 100%.
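
Concretely, for a CPU running at C MHz and measuring R iter/sec, efficiency = (R / C) / (0.13 / 450). A quick recomputation of a few rows (mine, not from the original benchmark page) is sketched below; note that for the modern chips the result is really a multiple of the baseline rather than a percentage:

#include <stdio.h>

/* work per MHz, as a multiple of the Xeon PII 450MHz baseline (0.13 iter/sec) */
static double eff(double iter_per_sec, double mhz) {
    return (iter_per_sec / mhz) / (0.13 / 450.0);
}

int main(void) {
    printf("P4 3800MHz: %.0f%%\n", eff(0.79, 3800) * 100);  /* ~72%  */
    printf("P3 866MHz:  %.0f%%\n", eff(0.25, 866) * 100);   /* ~100% */
    printf("i5-4590:    ~%.0fx\n", eff(32631, 3300));       /* ~34,000x per clock */
    return 0;
}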

CPU      | Clock   | Efficiency | Notes
P3       | 866MHz  | 100%       | Peak efficiency era
Xeon PII | 450MHz  | 100%       | Baseline
P4       | 3800MHz | 72%        | NetBurst penalty
P4       | 1800MHz | 46%        | Early P4 very inefficient
i5-4590  | 3300MHz | ~34,000x   | Modern IPC gains
Pi 5 A76 | 2400MHz | ~26,000x   | ARM efficiency

Key observation: The Pentium 4 (NetBurst) architecture traded efficiency for clock speed. A P4 at 3.8GHz was only ~6x faster than a Xeon PII at 450MHz, despite having 8.4x the clock speed.

Modern CPUs have recovered efficiency through:

  • Wider superscalar pipelines with much better branch prediction
  • Larger caches (L1/L2/L3)
  • Out-of-order execution improvements
  • Better memory controllers

Raspberry Pi Comparison

Model   | CPU        | Clock  | Cores | Price | Multi iter/sec | Value (iter/sec per $)
Pi Zero | ARM1176    | 1.0GHz | 1     | $5    | 1,326          | 265
Pi 4    | Cortex-A72 | 1.5GHz | 4     | $35   | 30,200         | 863
Pi 5    | Cortex-A76 | 2.4GHz | 4     | $60   | 73,000         | 1,217

The Pi 5 offers the best performance per dollar for compute workloads.

All Pi models show near-perfect multicore scaling (3.75-3.96x on 4 cores).
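
Both claims are quick to check against the tables above; a small recomputation (again mine, not from the original benchmark page):

#include <stdio.h>

int main(void) {
    /* Value column: multi-core iter/sec divided by price in dollars */
    printf("Pi 5 value:   %.0f iter/sec per $\n", 73000.0 / 60);  /* ~1,217 */
    printf("Pi 4 value:   %.0f iter/sec per $\n", 30200.0 / 35);  /* ~863   */
    /* Scaling column: multi-core rate divided by single-core rate */
    printf("Pi 5 scaling: %.2fx on 4 cores\n", 73000.0 / 18454);  /* ~3.96x */
    printf("Pi 4 scaling: %.2fx on 4 cores\n", 30200.0 / 8049);   /* ~3.75x */
    return 0;
}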


Test Commands

Single-core test

gcc -O0 -o ecc4_original ecc4.c -lm
time ./ecc4_original 100000

Multi-core test (4 cores)

# four independent copies run in parallel; 'wait' returns when all finish
time (./ecc4_original 100000 & ./ecc4_original 100000 & ./ecc4_original 100000 & ./ecc4_original 100000 & wait)

Equivalent workload to 1998 Xeon (7.7s)

# On i5-4590: ~245,000 iterations
time ./ecc4_original 245000

Summary

Era  | Best CPU         | iter/sec | Improvement
1988 | 386 16MHz        | 0.0005   | (baseline)
1998 | Xeon PII 450MHz  | 0.13     | 260x
2005 | P4 3800MHz       | 0.79     | 1,580x
2014 | i5-4590 (single) | 32,631   | 65M x
2023 | Pi 5 (multi)     | 73,000   | 146M x
2014 | i5-4590 (multi)  | 121,500  | 243M x

A $60 Raspberry Pi 5 is 562,000x faster than a 1998 enterprise Xeon server.


Benchmark and original data: John Sokol, 1998-2026
https://www.dnull.com/cpubenchmark/budmark3.html

Thursday, July 10, 2025

Hollow Core Fibers: A Leap Forward in Optical Technology

 


Hollow core fibers (HCFs) represent a major evolution in optical fiber technology, offering a suite of advantages that overcome some of the fundamental limitations of traditional solid-core glass fibers. By guiding light through a central channel of air or vacuum instead of a solid glass core, HCFs unlock significant performance benefits for telecommunications, high-power laser delivery, and sensing applications.

The Core Advantage: Light Through Air

The primary and most profound advantage of hollow core fiber is that light travels through its hollow center at approximately 99.7% the speed of light in a vacuum. This is a significant leap compared to conventional solid-core fibers, where light is slowed by about 30-31% as it passes through the glass medium. This fundamental difference gives rise to several key benefits:

Lower Latency

The most celebrated advantage of HCFs is their significantly lower latency. Because light travels faster in the hollow core, data transmitted through these fibers arrives at its destination sooner (a rough per-kilometer calculation follows this list). This reduction in signal travel time is critical for applications where every microsecond counts, such as:

  • High-Frequency Trading: Financial institutions can gain a competitive edge with faster trade execution.

  • Data Center Interconnects: Reducing latency between data centers improves the performance of cloud computing and distributed applications.

  • 5G and Beyond: Future wireless networks will require extremely low latency for applications like autonomous vehicles and real-time remote surgery.
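
To put rough numbers on this, here is a back-of-envelope sketch assuming a refractive index of about 1.45 in solid silica fiber (light ~31% slower, per above) and 99.7% of vacuum light speed in the hollow core; exact indices vary by fiber design:

#include <stdio.h>

int main(void) {
    const double c_km_s = 299792.458;              /* speed of light, km/s */
    double solid  = 1.45 / c_km_s * 1e6;           /* ~4.84 us/km in silica */
    double hollow = (1.0 / 0.997) / c_km_s * 1e6;  /* ~3.35 us/km in air core */
    printf("solid core:  %.2f us/km\n", solid);
    printf("hollow core: %.2f us/km\n", hollow);
    printf("saved over 100 km: ~%.0f us\n", (solid - hollow) * 100);
    return 0;
}

That works out to roughly 1.5 microseconds saved per kilometer of route, which compounds quickly over metro and long-haul distances.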

Reduced Nonlinear Effects

When high-intensity light travels through a solid medium like glass, it can induce nonlinear effects that distort the signal and limit the amount of power that can be transmitted. Since the light in an HCF primarily travels through air, the interaction with the glass is minimized. This leads to:

  • Higher Power Transmission: HCFs can carry significantly more optical power without signal degradation or damage to the fiber. This is a major advantage for industrial applications like laser cutting and welding, as well as in directed energy systems.

  • Improved Signal Quality: The reduction in nonlinear effects ensures a cleaner signal over longer distances, which is beneficial for high-bandwidth telecommunications.

Potential for Lower Loss

While traditionally, HCFs had higher signal loss (attenuation) than their solid-core counterparts, recent advancements have dramatically improved their performance. Researchers have now demonstrated HCFs with attenuation levels comparable to, and in some cases even lower than, the best solid-core fibers. In theory, HCFs have the potential for significantly lower loss because air is inherently more transparent than glass. Lower loss means:

  • Longer Transmission Distances: Signals can travel further before they need to be amplified, reducing the need for costly repeater stations in long-haul networks.

Broader Transmission Bandwidth

Hollow core fibers can guide a wider range of light wavelengths (colors) with low loss. That opens up more of the optical spectrum for wavelength-division multiplexing (WDM), in which many wavelengths each carry their own data channel. A broader usable bandwidth directly translates to a higher data-carrying capacity for a single fiber.

Low Dispersion

Dispersion is the phenomenon where different wavelengths of light travel at slightly different speeds, causing optical pulses to spread out and overlap, which can corrupt the data. HCFs can be engineered to have very low chromatic dispersion, ensuring that the signal remains sharp and distinct over long distances.

Enhanced Environmental Stability

Hollow core fibers are also less sensitive to external factors compared to solid-core fibers. They exhibit:

  • Reduced Thermal Sensitivity: Temperature fluctuations have less impact on the signal traveling through the air core.

  • Radiation Resistance: The reduced interaction with the glass makes them more suitable for use in high-radiation environments like space or nuclear facilities.

The Trade-Offs

Despite their numerous advantages, hollow core fibers are still an emerging technology and currently face challenges in terms of higher manufacturing costs and greater complexity in splicing and connecting the fibers compared to the well-established solid-core technology. However, as research and manufacturing processes continue to mature, these barriers are expected to diminish, paving the way for wider adoption of this transformative technology.

Monday, August 28, 2023

Simulate loss and latency on networks

There was a package called Dummynet that would simulate loss and latency on networks. What is there like this for Linux today?

Dummynet was originally developed for FreeBSD to simulate various network properties like bandwidth limitations, latency, and packet loss. For Linux, there are a few tools that can be used to achieve similar network simulations. The most notable one is tc (Traffic Control), which is part of the iproute2 package.

Here's a brief overview of how you can use tc to simulate network conditions:

  1. Latency: To introduce a delay of 100ms on all outgoing packets on the interface eth0:

     tc qdisc add dev eth0 root netem delay 100ms

  2. Packet Loss: To simulate a 10% packet loss on the interface eth0:

     tc qdisc add dev eth0 root netem loss 10%

  3. Combine Latency and Packet Loss: To introduce both a delay of 100ms and a 10% packet loss:

     tc qdisc add dev eth0 root netem delay 100ms loss 10%

  4. Bandwidth Limitation: To limit bandwidth to, say, 1mbit:

     tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

  5. Removing the rules: To remove the rules and return to normal:

     tc qdisc del dev eth0 root

Additionally, there are other tools and utilities, such as:

  • Comcast: It's a tool that allows you to simulate bad network connections, so you can test your applications under various conditions.

  • NetEm: Not a separate utility but the kernel queueing discipline that tc configures in the examples above; it provides the delay, loss, duplication, and reordering impairments.

Always remember, if you're going to apply any of these changes to a live system, especially one serving real users or critical functions, take care to ensure you understand the implications and have a rollback plan. Also, testing in a controlled environment first is always a good idea.




https://wiki.linuxfoundation.org/networking/iproute2

https://github.com/shemminger/iproute2




Comcast

Testing distributed systems under hard failures like network partitions and instance termination is critical, but it's also important to test them under less catastrophic conditions, because those are what they most often experience. Comcast is a tool designed to simulate common network problems like latency, bandwidth restrictions, and dropped/reordered/corrupted packets.