Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories, John Bardeen, William Shockley, and Walter Brattain, built a compact electronic device from thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, a dramatic departure from the bulky, fragile vacuum tubes that had powered electronics until then.

The inventors were not pursuing a specific product. They were asking basic questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Through months of trial and refinement, they combined theoretical insight from quantum mechanics with hands-on experimentation in solid-state physics, work that many might have dismissed as too fundamental, too academic, or unprofitable. Their efforts culminated in a moment now recognized as the dawn of the information age.

Transistors rarely get the credit they deserve, but they are the foundation of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial intelligence platform we use today. With their ability to modulate (and switch off) electrical current at astonishing speeds, transistors make modern and future computers and electronics possible.

This breakthrough did not come from a business plan or a product division. It emerged from open-ended, curiosity-driven research and development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaboration across disciplines, and a deep conviction that understanding nature, even without a guaranteed payoff, was worth the effort.

After the first successful demonstration at the end of 1947, the invention of the transistor was kept confidential while Bell Labs filed patent applications and continued development. It was announced publicly at a press conference in New York City on June 30, 1948. The scientific explanation followed in a pioneering paper published in the journal Physical Review.

How do transistors work? In essence, they are made from semiconductors, materials such as germanium and later silicon, that can either conduct or resist electricity depending on subtle manipulation of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) allows or blocks the electrical current flowing through another part (the channel). It is this simple control mechanism, replicated billions of times, that lets your phone run apps, your laptop render images, and your search engine return results in milliseconds.

Although early devices used germanium, researchers soon found that silicon, which is more thermally stable, more moisture resistant, and far more abundant, was better suited to industrial production. By the late 1950s the transition to silicon was under way, enabling the development of integrated circuits and, eventually, the microprocessors that power today's digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measuring just nanometers across, roughly the size of a virus. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals that underlie computation, data storage, audio and visual processing, and artificial intelligence.
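To make that control mechanism concrete, here is a minimal sketch in Python that treats a transistor as an idealized voltage-controlled switch. The threshold value and the logic-gate example are illustrative assumptions for this toy model, not a description of the Bell Labs device or of real device physics.

```python
# Toy model only: the threshold voltage and the NAND example are assumptions
# chosen for illustration, not parameters of any real transistor.
V_THRESHOLD = 0.7  # assumed gate threshold voltage, in volts

def channel_conducts(gate_voltage: float) -> bool:
    """The gate voltage either opens the channel (current flows) or keeps it closed."""
    return gate_voltage >= V_THRESHOLD

def nand(a: bool, b: bool) -> bool:
    """Logic is built from such switches; NAND alone suffices for any computation."""
    return not (a and b)

# A small gate voltage decides whether current flows through the channel.
for vg in (0.0, 0.5, 1.0):
    state = "ON (conducting)" if channel_conducts(vg) else "OFF (blocking)"
    print(f"gate voltage {vg:.1f} V -> channel {state}")

# Billions of such switches, wired into logic gates, perform computation.
print("NAND truth table:",
      [(a, b, nand(a, b)) for a in (False, True) for b in (False, True)])
```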
Transistors form the basic infrastructure behind almost every digital device used today. The global semiconductor industry is now worth more than half a trillion dollars. Devices that began as experimental prototypes in a physics laboratory now underpin economies, national security, health care, education, and global communication.

But the origin story of the transistor carries a deeper lesson, one we risk forgetting. Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of the transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T's monopoly on the US telephone system, which flowed into industrial research and development.

Inspired by Vannevar Bush's 1945 report "Science: The Endless Frontier," delivered to President Truman, the US government began a long tradition of investing in basic research. These investments have paid steady dividends across many scientific fields, from nuclear energy to lasers to medical technologies to artificial intelligence. Generations of students trained in fundamental research have emerged from university laboratories with the knowledge and skills to push existing technologies beyond their known limits.

And yet funding for basic science, and for educating those who can pursue it, is under increasing pressure. The new White House's proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (although Congress may depart from these recommendations). The National Institutes of Health has already canceled or paused more than $1.9 billion in grants, while NSF STEM training programs have suffered more than $700 million in terminations. These losses have forced some universities to freeze student admissions, cancel internships, and scale back summer research opportunities, making it harder for young people to pursue scientific and technical careers.

At a time dominated by short-term metrics and quick returns, it can be difficult to justify research whose applications may not materialize for decades. Yet these are exactly the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term "artificial intelligence." In the late 1950s he led one of the first AI groups and developed Lisp, a programming language still used in scientific computing and AI applications. At the time, practical AI seemed far off. But this early foundational work laid the groundwork for today's AI-driven world.

After the initial enthusiasm of the 1950s through the 1970s, interest in neural networks, today a leading AI architecture inspired by the human brain, waned during the so-called "AI winter" of the late 1990s and early 2000s. Limited data, inadequate computing power, and theoretical gaps made progress difficult. Nevertheless, researchers such as Geoffrey Hinton and John Hopfield pushed on. Hopfield, now a 2024 Nobel laureate in physics, presented his groundbreaking neural network model in 1982 in a paper published in the Proceedings of the National Academy of Sciences.
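The flavor of that model can be conveyed in a brief, hedged sketch, a toy illustration rather than the analysis of the 1982 paper: binary units store patterns through a Hebbian rule and recall them by repeated updates until the network settles. The pattern, network size, and update schedule below are arbitrary choices made for demonstration.

```python
# Toy Hopfield-style associative memory: a simplified illustration, not the
# full model or analysis of Hopfield's 1982 paper.
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian storage: sum of outer products of the patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W: np.ndarray, state: np.ndarray, sweeps: int = 5) -> np.ndarray:
    """Asynchronous updates: each unit aligns with the field from the others."""
    s = state.copy()
    rng = np.random.default_rng(0)  # fixed seed so the toy run is reproducible
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one 8-unit pattern of +/-1 values, corrupt two entries, then recall.
stored = np.array([[1, -1, 1, -1, 1, 1, -1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[[0, 3]] *= -1          # flip two units to simulate a degraded memory
recovered = recall(W, noisy)
print("recovered:", recovered)
print("matches stored pattern:", bool(np.array_equal(recovered, stored[0])))
```

Flipping a couple of entries and letting the dynamics run recovers the stored pattern, the associative-memory behavior that Hopfield related to the physics of disordered magnets.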
Hopfield's work revealed deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues, including Hinton, who was awarded the Nobel Prize in the same year, this basic research set the stage for the explosion of deep-learning technologies we see today.

One reason neural networks are now flourishing is the graphics processing unit, or GPU, originally developed for gaming but now essential for the matrix operations at the heart of AI. These chips themselves rest on decades of fundamental research in materials science and solid-state physics: high-k dielectric materials, strained silicon alloys, and other advances that make it possible to produce ever more efficient transistors. We are now entering another frontier, exploring memristors, phase-change and 2D materials, and spintronic devices.

If you are reading this on a phone or laptop, you are holding the result of a bet someone once placed on curiosity. The same curiosity is alive in university and research laboratories today, in the often unglamorous, sometimes obscure work that quietly lays the foundations of the revolutions that will reshape some of the most important aspects of our lives 50 years from now.

At the leading physics journal where I serve as an editor, my colleagues and I see the painstaking work and dedication behind every paper we handle. Our modern economy, with giants such as Nvidia, Microsoft, Apple, Amazon, and Alphabet, would be inconceivable without the humble transistor and the relentless curiosity and passion for knowledge of the scientists who made it possible.

The next transistor may not look like a switch at all. It could emerge from new kinds of materials (quantum, hybrid organic-inorganic, or hierarchical ones) or from tools we have not yet imagined. But it will need the same ingredients: solid foundational knowledge, resources, the freedom to pursue open, curiosity-driven questions, collaboration, and above all, financial support from someone who believes the risk is worth taking.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review's Innovators Under 35 and a former honoree (2008).