Monday, 28 March 2022

What Is the Printed Circuit Board's Fatal Flaw?

Have We Hit Peak PCB? 

This person thinks it's getting close to that point, and asked what will happen next. If you're the slightest bit techie you'll already have some idea where this is going, so you folks may as well skip the next few paragraphs.

What Are PCBs And Why Are They Inadequate?

Printed Circuit Boards (PCBs) are those generally green or orange things inside devices that are studded all over with electronic components and gizmos, and they are one of the oldest pieces of technology in that picture. 

About a hundred and some years ago it was cool to just mount electronic and electrical parts (usually with some good stout bolts and pieces of varnished wood, because those suckers were generally large and heavy parts) wherever they'd fit, and then string wire between the connection points. (When a computer less powerful than a school calculator took up two rooms, you know those components were huge, and the whole technology for mounting and connecting them in a space-saving way was still a ways off.)

A (sort of) timeline of printed circuit boards


If you've ever been fortunate enough to peer inside a really old 'parlor radio gramophone console' you'd have seen the next step - a sheetmetal chassis carrying strips of insulating phenolic material with metal tags attached to them. The components (by then several orders of magnitude smaller) were soldered between tags, and wiring was then soldered from one tag to another to connect those parts into the needed circuit.

Then came the next thing: being able to affix thin copper sheet to an insulating sheet and etch it into the shape the wiring would have to take, then solder the components to the copper in their respective positions. In effect, all those metal tags and wires were smushed flat onto a piece of insulating material, turning the old console's guts into a smaller and much flatter sheet, with just a few wires going to speakers, lights, and controls.

This was better (for the manufacturer) than the console because it did away with the sheet metal chassis the tag strips were attached to, shrank everything into a much smaller flat package that needed less wood and cabinetry craftsmanship, and - the most important thing of all -

* It was soooo much cheaper to print and populate that board than to assemble a hodgepodge of tagstrips and parts and metal cases and huge old wooden cabinets. By that stage circuit boards could be mass-produced, parts had become smaller and consumed less current, and technology hadn't stood still, so PCBs, small components, and cabinets all kept getting cheaper.

So that's a PCB and how it came about.

Now To That Inadequacy:

Parts became smaller and packed more circuitry inside them. That increased their current consumption again, but allowed the designer to place more functionality closer together. To place parts closer together, the copper traces ("tracks") had to be made thinner, because physically more tracks had to be squeezed into each square centimetre to carry all the signals and power to those smaller parts. Thinner traces can't carry as much current and have signal bandwidth issues in some cases, so PCB designers began to need and use more and more layers.
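To put rough numbers on that current-versus-width squeeze, here's a minimal Python sketch using the commonly quoted IPC-2221 estimate of how much current an outer-layer trace can carry for a given temperature rise. The constants, the 1 oz copper, and the example currents are my own illustrative assumptions, not a substitute for a proper trace-width calculator.

```python
# Rough IPC-2221 style estimate: I = k * dT^0.44 * A^0.725
#   I  = current in amps
#   dT = allowed temperature rise in deg C
#   A  = copper cross-section in square mils
#   k  ~ 0.048 for outer (external) layers
# Solving for A and dividing by copper thickness gives a ballpark trace width.

def min_trace_width_mm(current_a, temp_rise_c=10, copper_oz=1.0, k=0.048):
    """Ballpark minimum outer-layer trace width in mm for a given current."""
    thickness_mils = 1.37 * copper_oz                      # 1 oz/ft^2 copper is about 1.37 mil thick
    area_sq_mils = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    width_mils = area_sq_mils / thickness_mils
    return width_mils * 0.0254                              # 1 mil = 0.0254 mm

for amps in (0.1, 0.5, 1.0, 3.0):
    print(f"{amps:>4} A -> roughly {min_trace_width_mm(amps):.2f} mm wide")
```

Run it and the 3 A case (about what a phone fast-charging circuit handles) wants a trace well over a millimetre wide, while a signal trace on the same board might be a tenth of that - hence the conflict.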

In re: layers. It's become an art form to place tiny little circuits on a PCB so that they occupy as little space as possible (so the PCB will fit inside your fitbit or smart watch or mobile phone, for example), connect parts with best-path trace routing, and still supply the needed current and signal clarity. Some PCBs are now also flexible, which means more design constraints and more demands made of the PCB technology.

Layer Proliferation

Like other hobbyists, I routinely use a software program to create a design for a PCB that I can then either manufacture myself using quite old technology, or send to a PCB manufacturer who will produce a small stack of boards for me for peanuts.

But if I have a difficult design where a chip has fifty inputs, ten outputs, and needs two voltages, then I'm going to run out of board space pretty quickly and will need several traces to cross over one another - which is impossible on a single-layer board. Initially, the bridging was accomplished by ending the track just before it touched the conflicting track(s), resuming it on the other side, and bridging the two points with an insulated wire.

However, every extra bridging wire needed to be cut to size and involved two extra soldered joints, and if you had more than ten of them it started to add up to a fair amount of operator time. So there arose a need to design, in effect, TWO perfectly aligned layouts placed on opposite sides of a single board: where a trace needs to cross another, it runs partly on top and partly underneath, and a single via connects the top and bottom sections at the crossover point.

A 'via' was just a hole drilled when the board was manufactured, copper-plated through during manufacture so that it joined the trace on one side with its continuation on the opposite side, ducking under the crossing point. When that wasn't enough, commercial PCBs routinely went to three - or more, many more - layers inside the board, printed in a specialised process layer by layer, allowing a lot more wiring to pass by. Some computer boards can have more than ten layers sandwiched in them.
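As an aside, whether a set of connections can even in principle be laid out on one layer without crossings is the classic graph planarity question. Here's a toy Python sketch (using the networkx library; the part names and the everything-connects-to-everything netlist are invented purely for illustration) showing a five-part design that no amount of clever routing can fit onto a single layer:

```python
# Treat each two-pin connection as an edge in a graph. If the graph is not
# planar, it cannot be drawn on one layer without traces crossing.
# Real boards also have multi-pin nets, pad positions, and width rules,
# so this is only the in-principle check.
import networkx as nx

parts = ["U1", "U2", "J1", "R1", "C1"]          # hypothetical parts
netlist = nx.Graph()
# Every part shares a signal with every other part - the classic K5 graph.
netlist.add_edges_from((a, b) for i, a in enumerate(parts) for b in parts[i + 1:])

is_planar, _ = nx.check_planarity(netlist)
print("Routable on a single layer without crossings?", is_planar)   # prints False
```

That's exactly the situation where you reach for a bridging wire, a second side, or another buried layer.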

These issues - having to make traces thinner and thinner to pack them in (both on each layer, and because unless the stacked layers of a board are really thin the board becomes a plank), while parts need more and more current, which calls for thicker traces, a direct conflict of requirements - have come together, and the result is that PCBs are rapidly becoming as unworkable as point-to-point wiring once was. THAT'S what that author was referring to, and it's about to become a real issue.

Because Then Came The Limits

Even with all the design improvements to the PCB, limits are being reached. Small, complex parts with a lot more signal leads (a CPU chip in your PC or laptop can have 200 pins or more underneath it) mean it's getting harder to get by even with twenty layers, and some tiny boards (think the battery charging circuit in your mobile phone) need to carry several amps of current on a PCB that's thin and compact enough to fit inside a device that has to fit inside your pocket. . .

Making equipment point-to-point started off being sufficient, but as components got more complex the technique hit a wall: the wiring took too much space and too much time, and as parts got smaller it became difficult to hold them in place against the weight of the wiring. It just got plain unwieldy.

Using a sheetmetal chassis and tagstrips worked for simple circuits up to a certain number of components, and then it too hit the wall. You can only pack so many parts between tagstrips before it becomes impossible to add another part without collisions, short circuits, or ridiculously long lead lengths.

Single-layer PCBs were the undisputed king of the heap for a decade or two before they were replaced by two-layer, and then multilayer - and nowadays also flexible, foldable multilayer - circuits. And now that technology too is hitting the limits of what can be done.

My "What If" Moment:

Around the late 1990s, the industry was just seeing the rise of FPGAs (field-programmable gate arrays): logic chips that weren't set up at the factory to solve one particular problem, but could instead be field-programmed with whatever logical behaviour you needed, which meant you could in effect customise the chip to a task.

More recent versions offer a way to create specialised, less power-hungry configurations that perform certain tasks faster than anything except a custom-manufactured chip. (And this is why FPGAs are so useful: having a custom chip manufactured has generally been a long and expensive process.)
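If you haven't met an FPGA before, here's a toy Python model of its basic building block, the look-up table (LUT), just to show what 'field-programmed to have certain logical parameters' means. This is purely a conceptual sketch of the idea; real FPGAs contain thousands of these plus programmable routing between them, and you configure them with a vendor toolchain rather than Python.

```python
# A 4-input look-up table: 16 stored bits define ANY logic function of 4 inputs.
# "Programming" an FPGA is, at heart, loading tables like this (plus routing).

class LUT4:
    def __init__(self, truth_table):
        assert len(truth_table) == 16                 # one output bit per input combination
        self.table = list(truth_table)

    def __call__(self, a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d    # the inputs pick one stored bit
        return self.table[index]

# The same 'hardware' block programmed first as a 4-input AND, then as a 4-input XOR.
and4 = LUT4([1 if i == 0b1111 else 0 for i in range(16)])
xor4 = LUT4([bin(i).count("1") & 1 for i in range(16)])

print(and4(1, 1, 1, 1), and4(1, 0, 1, 1))   # 1 0
print(xor4(1, 0, 0, 0), xor4(1, 1, 0, 0))   # 1 0
```

Same silicon, two completely different logic functions, and all that changed was the contents of a little table - that's the flexibility the rest of this idea builds on.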

But in the late 1990s, another term was also trending - nanotechnology. Nanotechnology was promising to develop little machines and materials and revolutionise industries and our lives. Sci-fi authors had already foreseen them coming and posited the famous 'grey goo' idea, and of course I was hooked.

Between those things an idea came to me, and while I like to think outside the box, I'm not the only one in the world that does this, and I've generally found that if I have an idea, several other people around the world have already had it or are about to. 

This might well be the next step up from the PCB. Nanobots are becoming ever more possible, and they could in theory perform the internal reconfiguration functions of an FPGA, only to a much greater depth.

I was envisaging a CPU all inside a single epoxy or ceramic block with only a handful of connections needed - power, inputs, outputs, and arteries. 

Whoa. Back up there - 'arteries?'

Yep. Imagine a device very similar to an FPGA but with added flexibility - it can actually remove sections of itself and replace them as required with different sections. The chip can grow itself both in the flat plane and vertically thanks to some light silicone oil inside it, and this 'blood' would carry a stream of nanobots from a materials area to the active area.

The materials area would hold a collection of circuit modules that perform logical functions, nanomaterials for connecting and affixing them physically, and nanobots, plus two isolated areas: one for inbound new materials and one for the outflow of nonfunctional or outdated components, spent nanomaterials, and nonfunctional nanobots.

In effect, the chip would be upgradeable just by injecting new modules, bots, and clean silicone oil, and allowing all the wastes to drain out. In this way it would become like a living system, taking in 'nutrients', excreting 'wastes', and consuming energy to do so. Unlike any living system though, it would be able to alter itself to suit the 'environment' it found itself in.

Put it in an aircraft and it could add Inertial Measurement Units to become aware of the 3D motion it undergoes, a series of input channels so it could take signals from the aircraft's existing guidance systems, and outputs to operate that aircraft. It could download the Operation Manual for the aircraft and all relevant regulations and geospatial data for airports, and you'd have a plane that could fly itself. Add redundancy by using dual systems (and several more redundant dual systems on hot standby) and theoretically you'd never have an operator-induced aircraft incident ever again.

Put such a system into a spacecraft with sufficient "sustenance" suspended in silicone oil and a supply of energy from solar panels and you could send this spacecraft to the next solar system to explore or become our ambassador. Put it into your home automation system and it will specialise itself for your house and your habits and your needs.

Exactly how it was going to do that was just a kind of 'black box' in my theory at the time, because while AI (artificial intelligence) was a sci-fi staple, the smartest thing around was a really stupid chat-bot whose name I can't even remember. Back then, this step of micromanaging the internals of the chip was the big stumbling block.

Possible? Maybe Not Back Then...

Only . . . we now do have tiny prototype nanomachines that can be 'programmed' to do a particular task, we do have nanomaterials that make nanowires and building blocks possible, and we can build chips in sections or in one assembly. Meanwhile AI has taken some huge leaps forward: it can now diagnose some conditions as well as or better than a human, identify faces better than a human can, and control sensitive and finicky industrial processes far more consistently than a human.

An AI can be set to run through millions of combinations of molecules looking for potential drug candidates for many ailments; AIs are helping develop COVID vaccine variants, detecting COVID and cancers in X-rays, predicting stock market fluctuations, forecasting weather, and . . .

You get the idea - for a narrow, well-defined task, AI performs extremely well these days. We could let an AI evolve this chip idea in a simulation, evolve the guidance AI that this chip would need, the processes for making the building blocks, and every other facet of producing such a chip.

The easy part would be getting an AI to design a 'living chip' such as I've envisaged, and designing an AI to inhabit that chip. The hard part, to me, would be answering the question "Should we?"

Hi.
My name's Ted and I write a lot of these kinds of articles, plus articles about home-based recycling, sustainable energy and waste reduction, recipes, social and political commentary, and the occasional imaginative fiction story.

If you enjoyed this article please send a link to your friends. Get acquainted with my other blogs and websites at Ted's News Stand, and perhaps give your friends the link to that too. To support me and my ability to keep these sites online and posting regularly, you can donate to me directly or send me the price of a coffee - or use the sponsored links when I include them in an article, because those earn me a small commission and don't cost you a cent extra.
