Over the past several weeks, you’ve probably heard the term “binned” when referring to the chips inside the iPhone 17e and MacBook Neo. But what does it mean? In simple terms, “binning” is the process of taking one whole group of something and separating it out by characteristics to be sold or used differently.
Its origins trace back to agriculture, where a single crop yield would be separated into bins. The best pieces, ideal for individual sale, would go in a bin destined for the market. Pieces that were less visually appealing would go in a bin to be sold in bulk at a discount for processed food products. The food that was worst in quality and appearance would go in yet another bin to be sold as animal feed or fertilizer.
Today, “binning” is used in nearly every mining, harvesting, or manufacturing industry, from gemstones to clothing and, of course, semiconductors. If a RAM chip is tested and fails when run at a clock speed of 3000 MHz, it is binned and sold as a 2800 MHz chip, for example.
Every major chip manufacturer has employed “binning” tactics for years, including Intel, AMD, and Nvidia. But Apple has made the term more popular by using “binned” chips in popular products. Here’s how the process works and how Apple is using binned chips to its advantage.
The binning process explained
Processors, including Apple’s, are typically binned in two ways: by clock speed and by manufacturing defects. Chips are tested at various frequencies and voltages, then sorted into those that pass validation at the desired speeds and those that only operate reliably at lower speeds.
Chip makers can then sell the fastest chips at a premium or, in Apple’s case, put them in higher-end products where top-tier performance is expected. Apple doesn’t disclose the frequencies of most of its chips, and the final speed at which a chip runs depends heavily on the heat dissipation of the device it goes into.
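If it helps to picture the sorting step, here’s a minimal Python sketch of speed binning. The target frequencies and the idea of checking each die against a descending list of speed bins are assumptions made purely for illustration; Apple doesn’t publish its internal binning criteria.

```python
# A simplified, hypothetical sketch of speed binning: each die is assigned to
# the fastest bin whose target frequency it can sustain. Frequencies are
# invented for illustration.

SPEED_BINS_MHZ = [3500, 3200, 3000]  # hypothetical target speeds, fastest first

def speed_bin(max_stable_mhz: int) -> str:
    """Assign a die to the fastest bin whose target speed it can sustain."""
    for target in SPEED_BINS_MHZ:
        if max_stable_mhz >= target:
            return f"passes validation at {target} MHz"
    return "fails validation at every target speed"

for measured in (3600, 3250, 2900):
    print(f"Die stable at {measured} MHz -> {speed_bin(measured)}")
```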
The more obvious method of “binning” is when some parts of a chip are disabled in order to rescue chips that would otherwise have failed in manufacturing.
The iPhone 17e uses a “binned” version of the A19 chip with one fewer GPU core.
Modern processors have tens of billions of transistors, etched onto a silicon wafer by shining high-frequency ultraviolet light through a “mask” of the circuit pattern. This is repeated layer after layer, and the precision required is incredible.
A typical silicon wafer (a big, round, flat crystal about a foot across) will produce around 500 chips like the A18, but a large percentage of them will have a flaw that prevents them from operating correctly. If Apple had to throw every flawed chip in the trash, it might get only 200 usable chips per wafer, or fewer. The percentage of usable chips is the “yield” of a silicon wafer. You pay for chip manufacturing by the wafer, so the higher the yield, the more usable chips you get out of it and the lower the cost per chip.
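To see why yield matters so much to the cost per chip, here’s a back-of-the-envelope sketch in Python. The wafer cost and die counts are invented for illustration (real figures aren’t public); only the arithmetic is the point.

```python
# A minimal sketch of the yield economics described above. The wafer cost,
# die counts, and yield figures are illustrative assumptions, not Apple's
# actual numbers.

WAFER_COST = 17_000      # assumed cost per wafer, in dollars
PERFECT_DIES = 200       # dies with no defects at all
RESCUED_DIES = 200       # flawed dies saved by disabling a faulty core

# Without binning, only flawless dies count toward yield.
cost_without_binning = WAFER_COST / PERFECT_DIES

# With binning, rescued dies are also sold, as lower-spec parts.
cost_with_binning = WAFER_COST / (PERFECT_DIES + RESCUED_DIES)

print(f"Cost per chip without binning: ${cost_without_binning:.2f}")
print(f"Cost per chip with binning:    ${cost_with_binning:.2f}")
```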
Modern chips are designed with many areas that are repeated and functionally identical. If there are six GPU cores, each GPU core is exactly the same. This repetition provides redundancy in the manufacturing process, allowing manufacturers to make defective chips usable in other products.
With the right design, any GPU core with a manufacturing flaw can be “fused off” and ignored when running software. That turns a flawed chip designed with a 6-core GPU into a working chip with a 5-core GPU. The technique can be used anywhere large parts of the chip are repeated: CPU and GPU cores, cache memory, memory interface circuitry, and so on.
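As a rough illustration of that “fuse off” logic, the sketch below sorts hypothetical dies by how many GPU cores survived manufacturing. The bin labels and the one-bad-core cutoff are assumptions for the example, not Apple’s actual rules.

```python
# A toy illustration of defect binning: each die reports how many of its six
# GPU cores came out of manufacturing defect-free and is sorted into a bin.
# The cutoff and labels are invented for illustration.

def bin_die(working_gpu_cores: int, total_gpu_cores: int = 6) -> str:
    """Sort a die into a product bin based on how many GPU cores survived."""
    if working_gpu_cores == total_gpu_cores:
        return "full-spec part (all 6 GPU cores enabled)"
    if working_gpu_cores == total_gpu_cores - 1:
        return "binned part (defective core fused off, sold as a 5-core GPU)"
    return "scrap (too many defects to rescue)"

for cores in (6, 5, 3):
    print(f"{cores} working cores -> {bin_die(cores)}")
```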
What Apple products have binned chips?
Binned chips have been used to power Apple products for about a decade. Back in 2018, the third-generation iPad Pro arrived with a version of the A12 called the A12X. Where the A12 had a 6-core CPU and a 4-core GPU, the A12X featured an 8-core CPU and a 7-core GPU.
As we would soon learn, the A12X chip was actually designed with 8 GPU cores. Yields were bad enough that Apple had to disable one GPU core per chip to get enough usable chips per wafer to bring the costs in line. In early 2020, the fourth-generation iPad Pro featured the A12Z processor. It was the exact same chip as the A12X, but with that eighth GPU core enabled. Manufacturing yields had improved enough to make that possible.
The entry-level MacBook Air has used a “binned” version of the chip with one or two fewer GPU cores.
When the M1 debuted in the MacBook Air, the chip featured 8 GPU cores. But the entry-level model had one GPU core disabled, giving Apple a lot more usable chips per wafer and bringing down the cost of the M1.
Today, Apple sells lots of products with binned chips. The iPhone Air uses the A19 Pro, just as the iPhone 17 Pro does, but one of its 6 GPU cores is disabled. The iPhone 17e uses a binned version of the A19—you get 4 GPU cores in the 17e while the regular iPhone 17 gets 5. The entry-level MacBook Air has an M5 with two GPU cores disabled (8 instead of 10). And the MacBook Neo uses an A18 Pro with one GPU core disabled.
Binning lets Apple improve yields and lower chip costs. It also lets the company produce less expensive products with lower-performance chips without having to design a totally new chip just for them. And as one of the only companies that designs both its own chips and its own hardware, Apple turns this into a huge advantage.
How does binning impact performance?
If you’re using a product with a “binned” version of a chip, are you really missing out on the full experience? As so often is the case with the performance of computing products, the answer is: It depends.
All things being equal, a binned version of a chip suffers a hit to peak performance right in line with the change to the chip. If you go from 5 GPU cores to 4, that’s a 20 percent reduction in GPU cores, and you generally see a 20 percent reduction in peak GPU performance.
The iPhone 17e, for example, delivers GPU results roughly 20 percent lower than the iPhone 17, because it has 20 percent fewer GPU cores. The iPhone Air, with 17 percent fewer GPU cores than the iPhone 17 Pro, delivers graphics benchmark results around 17 percent slower.
But it’s not that simple. Few, if any, applications are limited only by the performance of one component. Binned versions of chips go into different products with different cooling, RAM speeds, maximum clock speeds, and other performance-altering characteristics, so the performance difference is never solely the result of the one change in the “binned” chip.
As a good rule of thumb, the worst performance degradation you’ll experience from a binned chip is equal to the reduction in the disabled part. Going from 10 GPU cores to 8 in the M5 will, at worst, cause a 20 percent reduction in performance, and only in applications that are bound by GPU throughput rather than by CPU performance or RAM speed.
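That rule of thumb is simple enough to check with a few lines of Python. The sketch below assumes peak GPU performance scales linearly with core count, which is exactly the simplification the rule relies on; the core counts match the products mentioned above.

```python
# Back-of-the-envelope worst-case slowdown for a binned chip, assuming peak
# performance scales with the number of enabled cores.

def worst_case_drop(full_cores: int, binned_cores: int) -> float:
    """Fractional peak-performance loss from disabling cores."""
    return (full_cores - binned_cores) / full_cores

examples = {
    "iPhone 17e vs. iPhone 17 (A19 GPU)": (5, 4),
    "iPhone Air vs. iPhone 17 Pro (A19 Pro GPU)": (6, 5),
    "Entry MacBook Air vs. full M5 (GPU)": (10, 8),
}

for name, (full, binned) in examples.items():
    print(f"{name}: up to {worst_case_drop(full, binned):.0%} lower peak GPU performance")
```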
Apple could do more to make it clear that products with the same name may have very different performance characteristics, but chip binning isn’t a sneaky ploy to get you to pay more for less. Salvaging chips with disabled parts to produce lower-performance variants is standard industry practice, and it gives Apple a huge advantage over competitors that don’t control the whole product, from chip design to finished hardware.



