I. Biology and Medicine
1 DNA – the self-copying and coding molecule
On April 25, 1953, the world-renowned science magazine “Nature” published the very first paper on one of the most important discoveries of the 20th century. At the Cavendish Laboratory of Cambridge University in England, the American biologist James D. Watson and his British colleague Francis H.C. Crick had worked out the chemical structure of the DNA molecule – a double-stranded helix. They had correctly interpreted an X-ray diffraction diagram they had obtained from Rosalind Franklin, then working with Maurice Wilkins at King’s College in London.
DNA is the abbreviation for deoxyribonucleic acid. The backbone of its helices consists of deoxyribose molecules (a sugar with five carbon atoms and one OH-group less than the sugar ribose) alternating with phosphate groups, thus forming long, helical chains. The two DNA-helices are antiparallel to each other and are linked by pairs of stereochemically complementary nucleobases attached to the deoxyribose groups, thus forming the equivalent of steps in a spiral staircase. Four different nucleobases are involved: adenine (A), thymine (T), guanine (G) and cytosine (C). As their chemical structures are complementary, A always pairs with T, while G pairs with C. The linking of the nucleobases is mostly based on relatively weak hydrogen bonds.
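Because A always pairs with T and G with C, either strand fully determines the other. A minimal sketch in Python (the function name is ours, chosen for illustration):

```python
# Watson-Crick pairing rules: A pairs with T, G pairs with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand: str) -> str:
    """Return the complementary DNA strand.

    The two strands are antiparallel, so the complement
    is read in the opposite direction (hence the reversal).
    """
    return "".join(PAIR[base] for base in reversed(strand))

print(complementary_strand("ATGC"))  # -> GCAT
```

Applying the function twice returns the original strand, which is exactly the property that makes self-replication possible.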
The DNA structure answered some very elementary questions about the chemistry of life. On the one hand, DNA is a self-replicating molecule: opening the hydrogen bonds between the basepairs (the equivalent of opening a zipper) produces two single-stranded DNA molecules. In the presence of deoxyribose, phosphate, the four nucleobases and specialized enzymes, the complementary strand of each single strand is reconstituted very rapidly. The original DNA molecule is thus capable of self-replication.
But this is only half of the story: DNA can do much more. This molecule with a diameter of 2 nanometers, which makes a full 360-degree turn every 3.4 nanometers, is literally the key to life. The sequence of the four nucleobases is a code: three nucleobases (a so-called triplet) indirectly define one of the 20-odd amino acids that are needed for the synthesis of protein molecules. The correspondence between triplets and amino acids is given by the so-called genetic code.
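With roughly ten basepairs per 3.4-nanometer turn, i.e. about 0.34 nm per basepair, the physical length of any DNA molecule can be estimated. A back-of-the-envelope sketch in Python (the function name is ours):

```python
NM_PER_TURN = 3.4        # helix rise per full 360-degree turn, in nanometers
BASEPAIRS_PER_TURN = 10  # roughly ten basepairs fit into one turn

def dna_length_m(basepairs: int) -> float:
    """Approximate stretched-out length of a DNA molecule in meters."""
    rise_per_bp_nm = NM_PER_TURN / BASEPAIRS_PER_TURN  # about 0.34 nm
    return basepairs * rise_per_bp_nm * 1e-9           # nanometers -> meters

# A chromosome of 100 million basepairs is a few centimeters long:
print(dna_length_m(100_000_000))  # about 0.034 meters, i.e. 3.4 cm
```

This is why the centimeters-long DNA threads mentioned below must be tightly packed to fit into a cell nucleus a few micrometers across.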
The genetic code is multiply redundant, as there are 64 triplets for the 20-odd amino acids needed for the synthesis of proteins. The properties of a protein are completely defined by the sequence of the amino acids it consists of. The folding into the three-dimensional conformation that gives a protein the formidable catalytic power of an enzyme is automatic (and still imperfectly understood). Enzymes are “nanomachines” that accelerate even extremely complex biochemical reactions. Thus, the proteins coded on the DNA control the synthesis of all chemical structures in the living world, from bacteria to man.
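The triplet lookup can be sketched in a few lines of Python. The small codon table below is a genuine excerpt of the genetic code (the full table has 64 entries); the four GG* triplets illustrate its redundancy:

```python
# Excerpt of the genetic code (the complete table has 64 codons).
CODON_TABLE = {
    "ATG": "Met",  # methionine, also the usual start signal
    "TGG": "Trp",  # tryptophan, the only amino acid with a single codon
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # redundancy
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",  # stop signals
}

def translate(dna: str) -> list[str]:
    """Read a coding DNA sequence triplet by triplet until a stop codon."""
    amino_acids = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE[dna[i:i + 3]]
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

print(translate("ATGGGATGGTAA"))  # -> ['Met', 'Gly', 'Trp']
```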
Evolution is based on the fact that despite highly refined correction mechanisms, which are also based on specialized enzymes, errors are made in the pairing of nucleobases. In addition, the DNA in each cell of our body is damaged anywhere between 10,000 and a million times a day by radiation and mutagenic agents. Most of those mutations are repaired; if not, a slightly different protein will result, which may be anything from irrelevant to deadly. Occasionally, such an “erroneous” protein has a slight advantage when it is confronted with the environment. This is where it is subjected to selection, as Darwin suggested (who, of course, had no idea of the genetic code): if it does a superior job, the mutant eventually becomes the dominant variant.
In the cells of so-called eukaryotes (to which animals and plants belong), the cell nucleus hosts most of the DNA, where it forms chromosomes that consist of linear arrangements of DNA segments called genes: they are the carriers of hereditary information. The individual genes are separated by sometimes very long, non-coding segments. A thread of DNA may have a length of several centimeters; it is wrapped around spools of histone proteins, forming so-called nucleosomes.
As a first step of the DNA-controlled protein synthesis, the double helix must be split open, which allows the transcription of the DNA-nucleobase sequence into that of the single-stranded, spiral-shaped ribonucleic acid RNA. In the RNA, deoxyribose is replaced by ribose, thymine by uracil. The RNA molecules act as messengers that diffuse from the cell nucleus into the surrounding cytoplasm where protein synthesis takes place in specialized organelles called ribosomes. The information storage capacity of DNA is dizzying: one gram of it stores the same amount of information as a trillion compact discs.
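The transcription step itself – copying the nucleobase sequence with thymine replaced by uracil – can be sketched in one line of Python (a deliberate simplification that ignores the template strand and the enzymes involved):

```python
def transcribe(coding_strand: str) -> str:
    """Sketch of transcription: the messenger RNA carries the same
    sequence as the coding DNA strand, with thymine (T) replaced
    by uracil (U)."""
    return coding_strand.replace("T", "U")

print(transcribe("ATGGCT"))  # -> AUGGCU
```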
It is worth mentioning that Watson and Crick, who elucidated the DNA structure, were among the first persons who actually made money thanks to DNA. Indeed, they shared the Nobel Prize for medicine, and its prize money, with Maurice Wilkins in 1962. Rosalind Franklin, who definitely would have deserved a share of the prize, had died of cancer four years earlier.
2 Genetically modified organisms
All the proteins needed for assembling, developing and maintaining an organism are specified according to the genetic code on stretches of DNA called genes (see the preceding chapter). As this became fully clear in the 1950s, molecular biologists immediately started to speculate on what would happen if the genes were modified so they would code for somewhat different proteins. Furthermore, it was apparent from the very beginning that it should be possible to splice foreign DNA into the genes of a host organism, thus conferring on it new and/or improved properties.
In fact, nature invented this stratagem long ago, giving it the possibility of enriching a given genome with ready-made coders for new and potentially useful proteins almost instantly. This is much faster and more efficient than relying on randomly occurring mutations caused by pairing and transcription errors that can only modify a protein by a single amino acid at a time. Such mutations were used for decades in plant breeding; they were induced either by chemicals or by radiation. Improved varieties were indeed found, but it took tens of thousands of trials each time.
Inserting known genetic material into the genome of another organism opens new horizons: referred to as lateral gene transfer, it yields so-called recombinant DNA. Bacteria do it quite readily among themselves, even across species borders: this is one of the reasons for the rapid spread of genes conferring antibiotic resistance. It is probable that lateral gene transfer significantly influenced the early stages of evolution; it still plays an important role among single-celled organisms.
Several mechanisms are available for the process of gene transfer; for example, bacterial DNA may be moved (or transduced) by a virus or exchanged by close cell-to-cell contact of two bacteria. It is almost certain that bacteria are capable of transferring DNA to fungi, nematodes and arthropods such as beetles. On the other hand, fungi are known to have transferred genes to aphids (the so-called plant lice). Horizontal gene transfer most probably also affects higher plants and animals; traces of this process were found in many instances by gene analysis. Of course this happened in a totally random and unpredictable way, but we see only those instances that brought an improvement.
Man can further improve this process and genetically modify an organism purposely and directly, with greater efficiency and speed than nature could. This is what is called genetic engineering, the main purpose of which is the directed transfer of one or several genes. The DNA segment in question is introduced into the target cell nucleus by bombarding it with tiny tungsten particles coated with the DNA to be transferred, or directly with a micro-syringe. But natural vectors, e.g. viruses, work best.
A particularly useful application of genetic engineering is the insertion of the human insulin gene into bacteria, to make them produce insulin for the treatment of diabetes. Patients thus no longer need to rely on animal insulin, which has disadvantages as it does not fully match the human variety. Other applications are the synthesis of human growth hormone, blood clotting factors, vaccines, diagnostics and enzymes. The best known applications of genetic engineering are centered on herbicide-resistant food crops such as soybeans, corn, sorghum, canola, alfalfa and cotton. Insect-resistant, genetically engineered crops generate the insecticidal protein of Bacillus thuringiensis by themselves, thus making the formerly widespread spraying with the bacterium unnecessary.
In the early 1970s, scientists involved in the development of genetic engineering started to get nervous. Could organisms carrying recombinant DNA and/or the products of their new genes be dangerous for humans or the environment? The probability was small, as pathogens and toxins do not appear instantly, but have to slowly adapt to new hosts. Yet, in order to be on the safe side, the participants of the Asilomar conference of 1975 decided to impose a voluntary moratorium on potentially dangerous recombinant DNA research.
Later, the National Institutes of Health (NIH) issued official guidelines for recombinant DNA work. Forty years of practical experience later, recombinant DNA and the proteins it codes for are considered safe. Yet, concerns remain about recombinant DNA in the environment and particularly in food, mostly in Europe. This is where environmental groups managed to alarm the population to the point that foods from genetically modified crop plants are unsellable and thus not available commercially, even though they have been declared safe.
Americans are more pragmatic: the Food and Drug Administration decided that genetically modified foods are not more hazardous than natural foods and do not need to be specifically labeled. For several decades, Americans by the hundreds of millions have been eating genetically modified soybeans, corn etc. without the slightest ill effect. Europeans on the other hand just panic at the mere mention of such foodstuffs. This may be called placebo or rather nocebo power.
3 Gene sequencing
It has become common knowledge that DNA is the information storage medium for the amino acid sequence of proteins. What is actually stored on the DNA sections called genes is a sequence of the four nucleobases adenine (A), thymine (T), guanine (G) and cytosine (C), each threesome of which (a so-called triplet or codon) is coding for a given amino acid during the synthesis of proteins in the cellular organelles called ribosomes. As a protein is a linear chain of amino acids, it is unambiguously defined by the sequence of nucleobase triplets that code for it, most amino acids being coded by several different triplets. Specific codons indicate where the translation into protein should begin and where it should stop.
The nucleobase sequence within a DNA molecule is thus of fundamental importance for molecular biology: it is one of the basic features of life. However, the terminology may be a bit confusing. As was mentioned above, the backbone of the DNA double helix consists of alternating phosphate and deoxyribose groups (it is ribose in the single-stranded RNA). The four different nucleobases are attached to the sugar moiety in the helix; the nucleobase-sugar complex is called a nucleoside. On the other hand, the building-block of DNA and RNA consists of a nucleobase, the five-carbon sugar deoxyribose or ribose respectively, and a phosphate group; this building-block is called a nucleotide. As the phosphate-sugar backbone to which the nucleobases are attached is a feature of any DNA or RNA molecule, the relevant sequence is that of the information-carrying nucleobase triplets.
For this reason it does not really matter whether one speaks of a nucleobase, nucleoside or nucleotide sequence. The important point is that this sequence is a kind of fingerprint, unambiguously defining any organism, from bacterium to man. Yet, no two nucleotide sequences within any species (evidently including Homo sapiens sapiens, with the possible exception of identical twins) are really identical. The difference is smallest when two individuals are closely related; this way, identifying the father in a paternity suit or a criminal who has left sequenceable traces has become quite easy. The precondition was the development of a relatively simple and not too costly way of determining the nucleotide sequence in a given segment of RNA or DNA.
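How different two aligned sequences are can be quantified by simply counting mismatched positions. The following Python sketch (a plain Hamming distance, with names of our choosing) illustrates the principle behind such comparisons:

```python
def differences(seq_a: str, seq_b: str) -> int:
    """Count positions at which two aligned sequences of equal
    length differ (a simple Hamming distance)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Closely related individuals differ in fewer positions:
print(differences("ATGCGT", "ATGAGT"))  # -> 1
```

Real forensic and paternity comparisons work on carefully chosen marker regions and must first align the sequences, but the underlying idea is this mismatch count.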
This turned out to be much easier said than done. The first success came in 1972, when Belgian scientists led by Walter Fiers in Ghent published the first complete gene sequence, from the RNA-based genome of bacteriophage MS2; the phage’s complete genome followed in 1976. One year later, a team of British molecular biologists headed by Frederick Sanger in Cambridge succeeded in sequencing the DNA-based genome of another bacteriophage called ΦX174. This virus infects certain bacteria and uses their cellular “machinery” to generate many copies of itself. The DNA of ΦX174 consists of 5375 nucleotides; the chemical formula for the complete genome is C52605 H6174 O32642 N19569 P5375.
The early sequencing methods were extremely laborious, slow and expensive; they involved two-dimensional chromatography or marking with radioactive carbon (C-14). The standardized method that is presently used worldwide is based on splitting open the DNA by heating it in order to obtain single-stranded samples. If only trace amounts of DNA are available, it is necessary to make many copies by inserting the sample into a bacterium, multiplying the latter and then extracting the DNA to be sequenced from the bacterium’s genome. The alternative is the so-called polymerase chain reaction or PCR, which uses an enzyme called polymerase. This method also relies on generating two single-stranded DNA halves by heating: after cooling, those halves serve as templates to generate two full copies that are again split… etc. In this manner, hundreds of billions of copies of just one DNA molecule can be obtained.
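The power of this doubling is easy to quantify: n ideal cycles turn one molecule into 2^n copies. A short Python sketch (idealized, ignoring real-world yields; the function names are ours):

```python
import math

def pcr_copies(cycles: int, start_copies: int = 1) -> int:
    """Ideal PCR: each heating/cooling cycle doubles the number
    of DNA copies (real yields are somewhat lower)."""
    return start_copies * 2 ** cycles

def cycles_needed(target_copies: int) -> int:
    """Smallest number of ideal cycles to reach a target copy count."""
    return math.ceil(math.log2(target_copies))

print(pcr_copies(30))         # -> 1073741824, i.e. about a billion
print(cycles_needed(10**11))  # -> 37 cycles exceed a hundred billion
```

Thirty-odd cycles, each taking only minutes, thus suffice to reach the "hundreds of billions of copies" mentioned above.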
The next step in the sequencing process is to cut the DNA into manageable pieces with specialized enzymes. Those fragments may consist of just a few nucleotides or up to several tens of them. Four fluorescent dyes are then added: they specifically attach to whichever of the four nucleobases A, T, G or C happens to be located at the end of the chain; each of those dyes yields a different color in ultraviolet light. Subjecting this mixture to gel electrophoresis separates the components according to molecular weight and electrical charge; it yields a gel plate with colored fluorescent bands that can be read off automatically. This is done with DNA fragments of different lengths that necessarily overlap at the ends. The computer can then reconstruct the basepair sequence of an entire chromosome with hundreds of millions of basepairs.
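The final reassembly step exploits those overlapping fragment ends. The following much-simplified Python sketch merges just two overlapping reads; real assembly software does this for millions of fragments at once, with error tolerance (function name and overlap threshold are our own choices):

```python
def merge_overlapping(left: str, right: str, min_overlap: int = 3) -> str:
    """Join two reads whose ends overlap, trying the longest
    possible suffix/prefix overlap first."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:k]):
            return left + right[k:]
    raise ValueError("no sufficient overlap found")

# Two fragments sharing the overlap 'GGT':
print(merge_overlapping("ATCGGT", "GGTACC"))  # -> ATCGGTACC
```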
Automatic sequencing machines appeared in 1986 and were rapidly improved; this triggered the development of a new branch of computer-science: bio-informatics. The monumental Human Genome Project started in 1990 with the aim of identifying and mapping all the genes in the human genome (3.3 billion basepairs in all); it was finished in 2003, two years ahead of time. It cost 3 billion dollars; today, anybody with a spare 1000 dollars can have his genome sequenced.
4 Ultrasonic imaging and ultrasound therapy
“Sound” means mechanical vibrations in solids, liquids or gases (including air of course) in the audible frequency range, i.e. about 16 Hertz to 16,000 Hertz (or cycles per second, Hz). Inaudible frequencies above 16,000 Hertz (or 20,000 Hertz, depending on the context) are defined as ultrasound. Very high frequencies above 1 GHz (one billion Hertz) all the way to 10 THz (10 Terahertz) are called hypersound; below 16 Hz one speaks of infrasound.
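These frequency bands can be summarized in a few lines of Python (using the anthropocentric limits given above; the function name is ours):

```python
def classify_frequency(hz: float) -> str:
    """Classify a mechanical vibration by frequency band."""
    if hz < 16:
        return "infrasound"
    if hz <= 16_000:          # up to about 16 kHz: audible to humans
        return "audible sound"
    if hz <= 1e9:             # up to 1 GHz: ultrasound
        return "ultrasound"
    if hz <= 10e12:           # 1 GHz to 10 THz: hypersound
        return "hypersound"
    return "beyond hypersound"

print(classify_frequency(440))   # concert pitch A -> audible sound
print(classify_frequency(5e6))   # a 5 MHz probe   -> ultrasound
```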
Many animals such as dogs, cats, rats, dolphins and bats would not agree with our anthropocentric definition of sound and ultrasound, as they easily hear much higher frequencies than we do. Bats navigate and hunt insects in total darkness without ever colliding with anything. They are endowed with a kind of ultrasonic radar with an upper frequency limit around 100,000 Hertz.
Ultrasonic waves are strongly attenuated in air, but propagate easily in liquids. Here, the intensity sets limits: at high intensities, liquids are literally ripped apart; bubbles form and collapse, a process called cavitation. In this process, very high pressures and temperatures appear, and sound and light may be emitted.
The development of ultrasound technology was greatly accelerated by World War II, as ships and submarines had to detect the enemy on the high seas and communicate with each other. At that time, the generation and reception of ultrasonic waves depended on magnetostrictive metal alloys. Piezoelectric materials such as barium titanate or lead zirconate turned out to be much superior. They transform electronic signals into mechanical vibrations of the same frequency (and vice-versa) with a minimum of losses.
Ultrasound has found a great many applications in technology. With ultrasound one may weld, drill, polish, lap, clean, atomize, sterilize and detect inclusions, cracks and other defects in any material. Ultrasonic devices are also used for the continuous monitoring of liquid levels and liquid flow.
Ultrasound has become indispensable in medicine. Everybody has experienced the ultrasonic device dental hygienists use for removing the tartar on our teeth. Such instruments are not particularly agreeable for the patient, but they work much faster and more efficiently than manual scrapers. In the case of bone fractures and dislocations, ultrasound therapy blunts the pain and relaxes muscles.
The extremely widespread cataract operation involves breaking down the clouded natural lens with ultrasound and sucking the fragments off before an artificial lens is implanted. Focused ultrasound is used for fragmenting kidney, gallbladder and urinary bladder stones so they can be evacuated the natural way.
The imaging procedures of sonography are quite spectacular. They are based on ultrasound in the frequency range of 2 to 20 Megahertz. Thanks to sonography, the physician can non-invasively visualize any part of the body under the skin. His tool is a handheld probe with a piezoelectric transducer that directs ultrasound pulses into the examined area. The body tissues absorb, reflect, diffract and/or refract ultrasonic waves in different ways. The echo signal is received by the probe in the short time between the pulses emitted by the transducer: this procedure was developed in the 1930s for radar.
The computer analyzes and processes the intensity and the delay times of the echo signal in real time. The screen then normally shows a two-dimensional cross-section of the examined tissues such as bone, muscle, tendons, blood vessels, brain, thyroid gland, heart, lung, stomach, liver, spleen, gall bladder, pancreas, kidneys, lymph nodes, intestines, urinary bladder, ovaries, uterus, prostate etc. Specialized techniques based on the Doppler effect are available for examining the direction, intensity and rate of blood flow in any organ.
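The conversion from echo delay to depth is simple: sound travels at about 1540 m/s in soft tissue, and the pulse covers the distance twice. A sketch in Python (the function name is ours):

```python
SPEED_IN_TISSUE = 1540.0  # m/s, average speed of sound in soft tissue

def echo_depth_cm(delay_s: float) -> float:
    """Depth of a reflecting structure from the round-trip echo delay.

    The pulse travels to the reflector and back, hence the factor 2.
    """
    return SPEED_IN_TISSUE * delay_s / 2 * 100  # meters -> centimeters

# An echo arriving 65 microseconds after the pulse:
print(echo_depth_cm(65e-6))  # about 5 cm deep
```

This also explains why the probe can only emit the next pulse after the deepest echoes of the previous one have returned.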
Three-dimensional imaging is also possible, but it takes more computer time. If required, the computer will also generate color as a function of the echo intensity. An ultrasound screening is quite inexpensive and very often is all that is needed for a correct diagnosis. If tumorous growth is suspected, the physician will resort to much more expensive imaging techniques providing higher resolution, such as computer tomography (CT) and magnetic resonance imaging (MRI).
Sonography has become routine for checking pregnancies and following the development of the fetus, without any exposure to ionizing radiation. After the 26th week, the sex of the child may be determined visually by focusing on its genital region. General-purpose medical sonography equipment suffices in most cases. However, specialty transducers may be placed in the rectum, the vagina or the esophagus. Very small transducers can even be placed in blood vessels with a catheter.
5 Computer tomography for bloodless cuts
In 1917, the Austrian mathematician Johann Radon (1887-1956), a professor at the Vienna Technical University, published an article that he called an “intellectual plaything”. Half a century later, the so-called Radon Transformation would become the basis of computer tomography (or CT) and related procedures. Indeed, Radon had shown that the intensity of various rays having travelled through a given specimen can be used to calculate the density distribution in that specimen. However, the necessary calculations are so complicated and lengthy that practical applications were nearly impossible before the age of computers.
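Radon's idea can be illustrated with a toy example in Python: each projection value is the sum of the densities along one ray. Real CT records such projections at many angles and inverts them; this sketch only shows two perpendicular projections of a small density grid:

```python
# Toy "specimen": a density grid (0 = empty, higher = denser).
specimen = [
    [0, 0, 0, 0],
    [0, 2, 5, 0],
    [0, 0, 3, 0],
    [0, 0, 0, 0],
]

# A ray attenuated along a straight line "sees" the sum of the
# densities it crosses. Horizontal and vertical projections:
proj_horizontal = [sum(row) for row in specimen]
proj_vertical = [sum(col) for col in zip(*specimen)]

print(proj_horizontal)  # -> [0, 7, 3, 0]
print(proj_vertical)    # -> [0, 2, 8, 0]
```

Recovering the grid from projections at many angles is exactly the inversion problem Radon solved mathematically in 1917.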
In a classical medical X-ray, contrast is due to the varying absorption of different tissues within the irradiated area. It is a projection comparable to a shadow, in which information from the third dimension is lost. It was not until the early 1970s that CT made it possible to image any part of the body as a succession of thin slices that by themselves are most informative. If needed, the computer can combine them to form a fully three-dimensional depiction that includes any kind of soft tissue.
Radon had developed the necessary mathematical tools, but CT and related technologies such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) only became medical routine thanks to the spectacular development of computer technology. The South African-American physicist Allan M. Cormack must be cited as an early pioneer, while the very first CT machine was designed and built by an electrical engineer by the name of Godfrey N. Hounsfield, working for Electric and Musical Industries Ltd. (EMI) in Great Britain.
This was a rather surprising diversification, as EMI had become famous and prosperous for its recordings of the Beatles. However, the company indulged in the luxury of a well-endowed research laboratory covering many fields of physics in Hayes (Middlesex, GB). The first CT image was recorded in 1971 and clearly showed a cyst in the brain of the patient. Eight years later, Cormack and Hounsfield were awarded the Medicine Nobel Prize for the development of imaging techniques visualizing soft tissues with only minor X-ray exposure.
The CT scanner opened new horizons in medicine, as it made diagnostics so much easier. At last the physician was able to visualize any part of the body in a totally non-invasive, literally bloodless way in three dimensions and at high resolution. Very soon, every respectable clinic or hospital had to have its own CT scanner. This created strong incentives for the development of ever more sophisticated and less expensive CT machines. The most important makers of medical instruments bought licenses for the EMI technology and perfected it in spectacular ways.
In the early CT scanners, the X-ray tube was mechanically linked to the detector; images were obtained by rotating this assembly around the axis of the patient’s body and moving it along that axis. In the next generation of CT machines, the X-ray tube emitted a fan of radiation that covered the patient’s whole cross-section, so the tube only had to be rotated. A large number of detector cells recorded the intensity values; the computer processed them into the desired image.
Finally, mechanical rotation was given up completely: the patient is now slid into a ring of tungsten (the usual target or anticathode of X-ray tubes) enclosed in an evacuated torus, together with a ring of detectors. The electron beam emitted by the cathode can be directed electromagnetically to any position on the tungsten ring, where it generates X-rays. In order to shorten the imaging time, the toroidal X-ray tube is equipped with several electron guns. Machines of this kind are very expensive, but with them, CT scans can be recorded within milliseconds, which is quite important in cardiology for investigating the beating heart.
Development of imaging techniques did not stop with X-ray tomography. Soon it was elegantly complemented with nuclear magnetic resonance (NMR) tomography (see chapter 6), usually called magnetic resonance imaging or MRI. This technology, too, could not have been implemented without the Radon Transformation. In MRI, no ionizing radiation is involved, but the patient has to endure strong magnetic fields and rather horrible noises. As MRI technology is sensitive to hydrogen atoms, organs and soft tissue (which consist of hydrogen-rich compounds and water) are imaged with high resolution.
The interpretation of Positron Emission Tomography (PET) signals also relies on the Radon Transformation. In this procedure, the imaging radiation is generated in the patient’s body by administering a radioactive isotope that emits positrons, i.e. positively charged electrons – for example fluorine-18, with a half-life of about 110 minutes. Those positrons are immediately annihilated by the ubiquitous electrons, which results in the emission of two gamma photons of 511 keV each in opposite directions. The structures calculated from the detection of those photon pairs are particularly well suited for imaging metabolic processes. PET is thus used in tumor diagnosis, neurology and cardiology.
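The short half-life of fluorine-18 dictates the logistics of PET: the tracer must be produced close to the clinic and used quickly. The decay law behind this can be sketched in Python (the function name is ours; the half-life of about 110 minutes is the standard value for fluorine-18):

```python
def remaining_fraction(elapsed_min: float, half_life_min: float = 110.0) -> float:
    """Fraction of a radioactive tracer still undecayed after a
    given time; fluorine-18 has a half-life of about 110 minutes."""
    return 0.5 ** (elapsed_min / half_life_min)

# After two half-lives, only a quarter of the fluorine-18 remains:
print(remaining_fraction(220))  # -> 0.25
```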
6 MRI – “seeing” with magnets
The abbreviation MRI (Magnetic Resonance Imaging) has gained general acceptance for nuclear magnetic resonance (NMR) tomography. This very important medical technology yields images of two-dimensional slices of any part of the body. In this instance, no X-rays or any other ionizing radiation are needed; the procedure is based on magnetic fields and VHF-radio waves.
Interestingly enough, nuclear magnetic resonance first established itself as an indispensable tool for physical and chemical research; it was introduced into medical practice considerably later. Quite generally, NMR is based on the fact that the nuclei of certain isotopes of many elements in the Periodic Table are characterized by a so-called spin, which results in a magnetic moment. This condition is fulfilled – among many others – by the hydrogen nucleus and by the nuclei of biochemically central elements such as carbon, nitrogen and phosphorus.
The common denominator of those nuclei is a magnetic dipole that makes them behave like tiny compass needles. If a magnetic field is applied to a sample of material containing isotopes with a magnetic dipole, the nuclear magnets try to orient themselves in the direction of the field. Only those nuclei that are already more or less correctly oriented succeed. All others are only slightly deflected and start to rotate around the direction of the field. This is comparable to a toy top that was given a push on the side, which makes it tumble around its axis of rotation.
This phenomenon is called precession; in the case of atomic nuclei, the precession frequency depends on the strength of the applied magnetic field and the magnetic properties of the dipole nuclei. It usually corresponds to ultra-short radio waves (VHF). In a magnetic field of 1 Tesla, protons precess with a frequency of 42.58 MHz, which corresponds to a wavelength of about 7 meters.
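The quoted numbers are easy to verify: the proton's precession frequency grows linearly with the field strength, at 42.58 MHz per Tesla. A quick check in Python:

```python
GYROMAGNETIC_PROTON = 42.58e6  # Hz per Tesla, for hydrogen nuclei
SPEED_OF_LIGHT = 2.998e8       # m/s

def precession_hz(field_tesla: float) -> float:
    """Proton precession (Larmor) frequency in a static magnetic field."""
    return GYROMAGNETIC_PROTON * field_tesla

f = precession_hz(1.0)
print(f / 1e6)                 # 42.58 MHz in a 1 Tesla field
print(SPEED_OF_LIGHT / f)      # about 7 meters of radio wavelength
```

In a 3 Tesla clinical magnet, the same formula gives roughly 128 MHz, still squarely in the VHF radio band.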
If VHF radiation is applied perpendicularly to the static magnetic field, the nuclear spin is reversed, which absorbs energy. The maximum absorption occurs at the precession frequency: this is a typical resonance effect. For this reason, the process is called nuclear resonance, which is not quite correct; nuclear spin resonance is the better name.
The above-mentioned resonance frequency is strongly dependent on the interactions of the tumbling nuclei with neighboring atoms. This effect is the basis of nuclear resonance spectroscopy: as the VHF frequency range is scanned, a series of maxima and minima is recorded; their positions and amplitudes yield important information on the atoms and molecules in liquids and solids. This procedure is extremely important for biochemistry, as it allows the determination of the three-dimensional structure of proteins, even in solution. The structure determination of proteins usually relies on X-ray diffraction diagrams of a crystallized sample, in which the folding of the amino acid chain may not exactly coincide with the folding in solution, where proteins actually function.
When the high-frequency field is switched off, the precessing nuclei radiate the energy that was absorbed and try to orient themselves in the direction of the static magnetic field. This requires a material-specific relaxation time; it too depends on the chemical bonds of the precessing nuclei’s atoms with their surroundings.
During the 1970s, imaging nuclear resonance, or MRI, became very important in medicine. It is based on the nuclei of hydrogen atoms that are ubiquitous in all biochemical compounds and body tissues. MRI requires a very strong magnetic field, generated by superconducting electromagnets, on which controlled position-dependent gradient fields are superimposed. The resonance frequency then strongly depends on the position of the hydrogen atoms.
The distribution of the resonant nuclei can be obtained by applying the gradient fields from several directions. In this manner, two-dimensional sections or even three-dimensional images can be obtained: they provide the physician with very important information on the investigated body tissues or organs. The image contrast strongly depends on the concentration of hydrogen atoms, which varies from tissue to tissue (e.g. muscle and bone). In this context, the above-mentioned relaxation time of the excited hydrogen nuclei also influences the MRI contrast.
In order to record NMR-tomograms, the patient must be slid into a narrow tunnel around which the superconducting magnets are arranged; they are cooled with liquid helium. Understandably, the tunnel is not very popular, particularly with people suffering from claustrophobia. Tunnels with a bigger diameter bring a measure of relief; some of them are open on the side. Another problem is the noise to which the patient is exposed: it is due to the switching of the magnetic field; earphones playing music are useless in this case, as the machine is much louder.
7 The heart pacemaker – new life for very sick patients
In the course of the 19th century, it finally became clear that the heart is not the seat of the human soul, but a system of muscles and valves that pumps the blood around the body. At that time it was also realized that the heart’s contraction is triggered by electric pulses, the frequency of which is constantly adjusted in function of the body’s oxygen needs.
The sinoatrial node (also called sinus node) is the impulse-generating tissue; it is located in the right atrium of the heart and is about the size of a pea. On average, it emits electrical pulses about 70 times a minute and thus controls the heart’s contraction. It is fully autonomous and is not connected to the brain. Many diseases are due to the full or partial failure of the sinoatrial node or to faulty propagation of the nerve impulses originating there.
The early symptom of this defect is cardiac insufficiency: arms and legs are not perfused with enough blood. This gravely compromises the quality of life, as even minor efforts lead to dizziness and shortness of breath. Later on, severe diseases may appear, such as kidney failure, edemas and hypertrophy of the heart.
In such cases it was obvious that the deficient natural electrical stimulation pulses should be replaced by externally generated ones. An insulated wire with a platinum-iridium tip can easily be introduced into the heart through a vein and is then anchored in the right-hand side of the heart. Occasionally, electrodes must be implanted into both chambers of the heart in order to assure their synchronization.
Yet, this relatively simple procedure was fraught with problems. It is impossible to install an electrical socket on the surface of the body: any opening of the skin acts as a door for dangerous germs; the skin around it becomes infected and eventually necrotizes, with very disagreeable consequences. The early external pulse generators, tested clinically in 1932, weighed several kilograms, needed a connection to the grid and had to be ferried around by the patient on a wheeled cart. This was not conducive to a good quality of life.
After the invention of the transistor, heart pacemakers were miniaturized in the late 1950s to the point that they could be attached to the chest of the patient with adhesive tape. Evidently this was only a temporary fix: the only permanent solution was to implant the pacemaker in the patient’s body. After extensive animal tests, a usable device became available in Sweden in 1958; its diameter was 55 mm, its height 16 mm.
The heart surgeon Senning and the engineer-physician Elmqvist successfully tested this pacemaker in humans. At first, only patients who needed temporary heart stimulation after operations were given pacemakers. But it soon became clear that a permanent pacemaker enormously improved the quality of life of patients suffering from heart block, cardiac arrhythmia or cardiac insufficiency.
The early heart pacemakers were powered by a nickel-cadmium battery: as its capacity was rather limited, it had to be recharged on a weekly basis, which was done inductively through the skin. This worked fine as long as the patient remembered the recharging ritual: if he forgot it, he was in mortal danger. The next generation of pacemakers was equipped with non-rechargeable mercury oxide-zinc batteries that generated current pulses for two to three years.
After that relatively short time, another operation was necessary in order to replace the pacemaker; the electrode usually stayed in place and could be reused. Then came the plutonium-powered thermal batteries; for all practical purposes they lasted indefinitely, but had to be removed after the patient’s death. Lithium-iodine batteries, introduced in 1972, were a real breakthrough: they last for ten to fourteen years. If the patient is still alive after that time, the pacemaker can be replaced by a simple operation; the electrode is left in place. Pacemakers are usually implanted in the chest under the pectoralis major muscle.
Modern medicine is unthinkable without the cardiac pacemaker: in very many cases it prevents disability and keeps patients out of the nursing home. Thanks to the microprocessor, electrical pulses are generated on demand, the frequency being adjusted to the current level of physical activity. Important parameters such as amplitude, pulse energy and stimulus threshold are set individually for every patient and may be adjusted from the outside with magnets.
Year after year, about a quarter million heart pacemakers are implanted worldwide, the vast majority of the recipients being elderly. Some 200,000 special models with the defibrillator function must be added to the total. The heart is constantly monitored: should uncontrolled fibrillation occur – a very dangerous heart-rhythm disturbance – the defibrillator re-establishes the normal heart activity with a strong current pulse.
8 Antibiotics – life-threatening life savers
Everybody understands that human life is much more precious to us than the life of microorganisms, even though their biochemistry functions in nearly the same way as ours. This strongly suggests that life appeared only once on the planet and evolved by mutation and selection, as Darwin and Wallace proposed more than 150 years ago. We particularly dislike pathogenic bacteria: they have to be fought, or at least held in check, by exploiting the subtle biochemical differences between bacteria and humans.
Actually, pathogenic and thus unwanted bacteria can only be fought by using the few available points of attack. The most important ones are the structure and permeability of cell membranes, the copying process of the genome (i.e. DNA or RNA), the synthesis of important proteins and the bacterial metabolism. Since the middle of the 20th century, powerful substances have been available for those purposes: they are called antibiotics. The Scottish bacteriologist Alexander Fleming rates as the “father” of this new class of drugs. Experimental batches of penicillin, which Fleming had accidentally discovered in 1928, saved the lives of many wounded British soldiers during World War II.
Yet folk medicine, based on purely empirical observation, had made good use of antibiotics many centuries before Fleming. Extracts of molds had been routinely used for fighting infections: it was generally known that fresh soil rich in molds can help the healing of infected wounds. And in 1893 the Italian physician Bartolomeo Gosio discovered that a purified extract of molds of the Penicillium species strongly inhibited the growth of the anthrax bacillus. Unfortunately, Gosio published his findings in an obscure Italian medical journal that nobody read outside of Italy.
Just a few years after Gosio’s discovery, a 23-year-old French army doctor by the name of Ernest Duchesne investigated a fact that was well known among horsemen: moldy saddles considerably accelerate the healing of wounds on the backs of the animals. Duchesne prepared an extract of the saddle molds: it rapidly healed guinea pigs that he had infected with Bacillus coli and salmonellae. But those findings were not in tune with the preconceived ideas of the medical establishment and were simply ignored. In 1949 he was finally honored by the French Medical Academy for his pioneering work in the field of antibiotics – alas, posthumously, as Duchesne had died of tuberculosis in 1912.
The priority for the discovery of antibiotics thus certainly cannot be attributed to the Nobel Prize winning Fleming. Yet, in the 1940s, he opened the floodgates to antibiotics research. Soil samples were now collected everywhere in the world, from the tropics all the way to the poles. Extracts of the molds and microorganisms isolated from those samples were carefully checked for antibiotic effects. Many things were found: more than 8000 substances with antibiotic properties are presently known. However, only about one percent of them are suitable for medical applications. Aside from penicillin, the best-known antibiotics are cephalosporin, erythromycin, rifampicin, streptomycin, tetracycline, trimethoprim and vancomycin. The quinolones are fully synthetic antibiotics; the names of the widely used fluoroquinolones typically end in -oxacin.
Antibiotics absolutely revolutionized the therapy of infectious diseases, many of which had until then all too often resulted in the death of the patient. Tuberculosis, long rated as endemic, is a classic example. Now it could be cured within a few weeks or months with a combination of antibiotics such as streptomycin and rifampicin. Within a few years, tuberculosis seemed to be on the verge of disappearance.
This did not happen: very soon, physicians were confronted with the phenomenon of resistance. The probability is not negligible that within a given population of bacteria a subpopulation exists that is resistant to antibiotics due to genetic mutations. If the population that responds to antibiotics is killed off, only the resistant subpopulation is left. It will very soon fill the empty niche, as bacteria are capable of doubling their mass within 20 to 30 minutes. Thus, a new population establishes itself that is fully resistant. This is a classic example of Darwin’s principle of mutation and selection by the environment.
Spontaneous mutations, which occur at a rate of about 1 in 10 million, may also result in resistance. Furthermore, the complete set of resistance genes may be transferred horizontally from harmless bacteria to disease-provoking species. For this reason, antibiotics that inhibit just one enzyme soon lose their activity.
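The figures quoted above – a doubling time of 20 to 30 minutes and a spontaneous mutation rate of about 1 in 10 million – make it easy to see why resistance emerges so quickly. The following is a minimal back-of-the-envelope sketch; the target population of a billion cells is an illustrative assumption, not a figure from the text:

```python
import math

# Round figures quoted in the text
doubling_time_min = 30      # bacteria can double their mass in 20-30 minutes
mutation_rate = 1e-7        # spontaneous resistance: about 1 in 10 million cells

# Assumed, illustrative size of a typical bacterial infection
target_population = 1e9

# 1) How fast does a single resistant survivor refill the empty niche?
doublings = math.log2(target_population)            # roughly 30 doublings
regrowth_hours = doublings * doubling_time_min / 60  # about 15 hours

# 2) How many resistant mutants does such a population already contain?
expected_mutants = target_population * mutation_rate  # about 100 cells

print(f"doublings needed: {doublings:.1f}")
print(f"regrowth time: {regrowth_hours:.1f} hours")
print(f"expected resistant mutants in 1e9 cells: {expected_mutants:.0f}")
```

Under these rough assumptions, a single surviving resistant cell can repopulate an infection-sized colony in well under a day, and a colony of that size is expected to harbor resistant mutants before any antibiotic is even administered.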
On the other hand, preparations that make good use of several points of attack may have a very long life. Penicillin is a good example: it has been known for 85 years and still is one of the most prescribed antibiotics. It has the enviable property of attacking bacteria on six different biochemical pathways. Yet, pharmaceutical research is forced to constantly develop new antibiotics, as once powerful preparations become nearly useless.
9 Virostatics – biochemical inhibitors of viruses
Viruses are such strange biological structures that they cannot really be classified as living beings. By definition, the latter are characterized by a metabolism that supplies them with energy, as well as by the properties of self-regulation and reproduction. Most of those criteria do not apply to viruses: their metabolism and reproduction rely on a host cell. Inside the latter, a virus is just a piece of nucleic acid (RNA or DNA); it uses the enzymatic machinery and energy resources of the host for its own reproduction.
This is quite remarkable: the viral nucleic acid actually stores information that forces the host cell to make many copies of the guest’s RNA or DNA and to let it develop into complete viruses, so-called virions. The latter eventually make their way out of the cell: from now on, their only purpose is to find a new host cell, so the above cycle can start all over again.
The most important feature of the virion is its genome; it may consist of the double helix deoxyribonucleic acid (DNA), or of single-stranded ribonucleic acid (RNA). It is normally enclosed in a protein coat (the so-called capsid) that may assume the shape of a simple polyhedron, e.g. an icosahedron. Occasionally, the protein coat is itself surrounded by a lipid double layer in which membrane proteins are distributed. Certain types of viruses, e.g. the 0.3 micrometer long, rod-shaped tobacco mosaic virion (a single-stranded RNA virus in which the tube-shaped protein capsid encloses the 6400-base RNA helix), agglomerate to form three-dimensionally ordered systems with the properties of a crystal.
Viruses are transmitted as crystals and virions, but definitely cannot be considered as living beings in those forms. One may argue that they are alive in the host cell that they need for their own reproduction. Under those circumstances, biochemical reactions do occur that are typical for life, more particularly the copying of the genome and the synthesis of the proteins encoded in its nucleotide sequence. Yet, the virus is quite incapable of those feats by itself: it must infect a host cell and use it for its own reproduction. In this perspective, viruses may be rated as parasites.
It has been suggested that viruses were the predecessors of cellular life, but this is highly unlikely, as they are helpless without their specific host cell. It is much more probable that they were the result of the emancipation of RNA or DNA molecules in host cells. Randomly occurring mutations in the virus genome (mostly due to copying errors during replication) can only be corrected in a rudimentary way, if at all. This is why viruses are enormously variable, which allows them to easily adjust to changing environmental conditions and even to trick the immune system.
Viruses are really small: their diameter lies between 0.02 and 0.4 thousandths of a millimeter. The biggest viruses are about the size of the smallest bacteria. They were discovered towards the end of the 19th century, based on the observation that an extract made from diseased plants that had been filtered through microporous ceramic (a so-called Chamberland filter candle) was still highly active and immediately infected healthy plants. Some 3000 species of viruses are known today, and they have their own classification. They affect archaea, bacteria, fungi and all animals, including of course man.
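The sizes just quoted are easier to grasp in nanometers. Here is a tiny unit-conversion sketch; the comparison values for the smallest bacteria and a human hair are rough textbook figures assumed for illustration, not taken from the text:

```python
# Virus diameters quoted in the text: 0.02 to 0.4 thousandths of a millimeter
mm_to_nm = 1_000_000                 # 1 mm = 1,000,000 nm

virus_min_nm = 0.02e-3 * mm_to_nm    # 20 nm
virus_max_nm = 0.4e-3 * mm_to_nm     # 400 nm

# Assumed, rough comparison values (not from the text)
small_bacterium_nm = 400             # smallest bacteria: ~0.4 micrometers
human_hair_nm = 60_000               # a human hair: roughly 60 micrometers

print(f"virus diameter: {virus_min_nm:.0f} to {virus_max_nm:.0f} nm")
print(f"largest virus / smallest bacterium: {virus_max_nm / small_bacterium_nm:.1f}")
print(f"largest viruses per hair width: {human_hair_nm / virus_max_nm:.0f}")
```

The result, 20 to 400 nanometers, is consistent with the text’s statement that the largest viruses are about the size of the smallest bacteria – and it explains why viruses passed through filters that retained bacteria.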
Some viruses are extremely dangerous, as they cause life-threatening or at least very disagreeable diseases such as AIDS, rabies, poliomyelitis, measles, rubella, varicella, herpes, jaundice, encephalitis, influenza, common cold etc. An intact immune system keeps most of them under control and prevents new infections in most cases. Furthermore, the reproduction of viruses can at least be attenuated if not fully blocked by virostatic agents.
Virostatic agents are meant to fight viral diseases. Only about thirty of them are available today, and they may cause serious side effects. This is due to the fact that viruses closely interact with their host cell, as they depend on its cellular chemistry; they therefore offer only very few points of attack. The most important ones are the penetration of viruses into cells and their escape from them once they have made copies of themselves: those mechanisms can be inhibited. Furthermore, the metabolism of the host cell can be manipulated in order to slow down the replication of the viral genome and the expression of viral proteins.
The latter means treading on thin ice: it is an exercise in futility if the viral replication is stopped at the price of killing the host cells. Furthermore, viruses are capable of developing a drug resistance even faster than bacteria. One’s own immune system is still the best antiviral therapy. It can be activated by vaccines and booster-injections so that viral diseases do not break out in the first place or at least take a harmless course. Unfortunately, the HIV (or “AIDS virus”) successfully attacks the immune system itself. It can be controlled for many years with a clever combination of virostatic agents. But as they do not really live, viruses cannot be killed in the proper sense of the word.
10 Prostaglandins – not a total disappointment
Prostaglandin research was started in 1933 by Ulf von Euler as he biochemically examined human seminal fluid. One of the fractions he had obtained stimulated the smooth muscles of the uterus and the heart: von Euler called it prostaglandin. Yet, the isolation and structure determination of prostaglandin had to wait until 1957 and 1958, respectively. Later on, von Euler’s prostaglandin mushroomed into a big family of highly active substances that are generated by the body itself.
It was soon realized that prostaglandins were a new class of tissue hormones; this triggered many research programs. The spectrum of effects of prostaglandins and the related prostacyclins and thromboxanes was so wide, and their potential applications so promising, that three leading prostaglandin researchers were awarded the Nobel Prize in Medicine in 1982. They were the Swedish biochemist Sune K. Bergström, his former student Bengt I. Samuelsson and the British pharmacologist John R. Vane. Those Nobel Prizes came rather late, the major research of the three recipients having been done between the 1950s and the early 1970s.
In contrast to steroid hormones, which are secreted by specific glands, tissue hormones are formed right where they are needed. This class of active substances is biosynthesized in a relatively simple way, the precursor being the unsaturated fatty acid arachidonic acid, which is present in esterified form in the phospholipids of most cell membranes. It is liberated by the enzyme phospholipase. The second enzymatic step is cyclization and oxidation by cyclooxygenase, which converts it to prostaglandin H2, the precursor of all natural prostaglandins and thromboxanes.
The other side of the coin was that the non-enzymatic total synthesis of prostaglandins involves many steps with a rather low yield; furthermore such substances are unstable and are rapidly degraded. Even worse: they act antagonistically depending on the type of tissue. For example they may increase or decrease blood pressure, depending on the place where they are synthesized.
The prostaglandin boom reached its apex in the 1970s, when the number of publications on this subject grew into the thousands. Most of the big pharmaceutical companies spent untold millions to find simpler syntheses and stable derivatives. Unfortunately, the majority of those substances had intolerable side effects such as headache, nausea and disturbances of the heart-rhythm. The principal reason was that the exact same prostaglandin can have totally different effects, depending on the type of tissue. In some cases, this applies even to the same type of tissue in another part of the body.
The secretion of prostaglandins can be triggered by mechanical, thermal, chemical or bacterial stimuli. Prostaglandins attach to specific receptors that are located very close to the place where they are secreted. As opposed to the classical hormones, they are not transported by the blood stream to relatively distant places in the body. A most welcome side effect of prostaglandin research was the biochemical elucidation of pathological processes at the level of the individual cell.
The effects of prostaglandins are by no means limited to the smooth muscles. They may also activate the synthesis of other tissue hormones, thyroid gland hormones and cortical hormones – but always close-by. They also affect the mucous membranes of the stomach and intestines as well as the kidney tubules, the immune system and the nervous system. As opposed to hormones, their purpose is a subtle, strictly local modulation of cell functions, which itself normalizes and stabilizes tissue functions: they never have long-range effects.
Such a modulation presupposes a short lifetime, which indeed is limited to minutes if not just seconds. This chemical instability is due to several double bonds and an acid group; both are easily attacked enzymatically. Of course chemists and pharmacologists developed a great many more stable synthetic variants and functional mimetics, but these had to be much more selective than the short-lived natural products: a really challenging problem for pharmaceutical research that was never properly solved, even by the local application of adequate preparations, which itself is only possible in a very limited number of cases.
At the time Bergström, Samuelsson and Vane received their Nobel Prize, many pharmaceutical firms had already decided to discontinue prostaglandin research. Some fifty years after the first chemical structure determination of a prostaglandin, a small number of preparations on this basis manages to hold a niche market.
An important application is the treatment of stomach and duodenal ulcers and of impaired blood flow. In gynecology, prostaglandins are used to make the uterus contract while the cervical muscles are relaxed, in order to induce labor or abortion. Prostaglandin antagonists were developed in order to extend a pregnancy and thus postpone birth.
11 Tranquilizers – the stress-relief pills
Mankind has been aware for at least 9000 years that certain liquid or solid substances have a profound effect on the nervous system, i.e. the mind. This is the age of the oldest evidence we have of a fermented, and thus alcoholic, drink based on fruit, rice and honey; it was produced in China. We also have a Sumerian cuneiform tablet, roughly 5000 years old, that gives detailed instructions for brewing beer with an alcohol content of 3 to 6 percent.
Alcohol (i.e. ethyl alcohol) was most probably the first psychoactive substance discovered in Eurasia: it has disinhibiting, relaxing and euphorizing effects. In Central America and South America a totally different drug culture developed, as the indigenous peoples found out about the psychoactive effects of cocaine and nicotine, psilocybin and mescaline. The latter two were mostly used for ritual purposes, as they are potent visual hallucinogens; psilocybin is extracted from mushrooms, mescaline from the peyote cactus.
Nature’s tranquilizers can lead to addiction, and most of them have serious side effects; the regular user may become disabled and totally unfit for work. Synthetic tranquilizers, on the other hand, do not have those disadvantages and cause comparatively little harm; they are among the most often prescribed drugs worldwide. Tranquilizers act against anxiety, stress and restlessness. If the dosage is correct, they do not impair mental activity, concentration or work performance. But due to their relaxing and antispasmodic effects, they may cause sleepiness and drowsiness.
For half a century now, benzodiazepines or diazepams have dominated the tranquilizer market. Starting around 1957, they were synthesized and characterized by Leo Henryk Sternbach (1908-2005), a former colleague of the Nobel Prize winner Leopold Ruzicka at the ETH Zurich. In 1941, Sternbach moved to the Hoffmann-La Roche research lab in the USA. The first benzodiazepines, Librium and Valium, hit the market in 1960 and 1963 respectively: they were unbelievably successful – and still are.
From 1969 to 1982, up to a quarter of Roche’s sales were realized with diazepam tranquilizers. A large number of derivatives of the basic molecules were synthesized; more than a dozen of them were approved as drugs, because they accentuated certain parts of the activity spectrum. One may recognize them by their generic names, which usually end in “epam”.
For many highly motivated professionals and managers, stress, anxiety and the accompanying sleep disturbances were already part of everyday life in the 1960s and 1970s. When it became possible to eliminate those disagreeable symptoms by simply popping pills, there was not much hesitation. Diazepam tranquilizers attach themselves to specific receptors in the “sensibility center” of the brain and thus inhibit the propagation of nerve impulses. This results in an attenuation of excitation and the relief of cramps.
Tranquilizers revolutionized psychiatry, as many patients could now be released from clinics and treated in their homes or in a normal medical practice. Tranquilizers are often prescribed to the residents of nursing homes and to patients with psychosomatic disturbances. Frequent travelers soon noticed that the sleep disturbances of jetlag occurring after long flights in an easterly direction often disappear after a single 2 milligram dose of Valium.
Diazepam tranquilizers were always prescription drugs, but doctors had little hesitation in prescribing them. After all, those preparations have near-zero toxicity, and side effects were thought to be practically unknown. Some physicians automatically prescribed a low-dose tranquilizer to all their patients and had it printed on their prescription formulary.
Inevitably, this opened the door to abuse, which developed into a problem in the 1980s. The symptoms of inadequate nutrition, lack of sleep and overwork could all too easily be papered over with tranquilizers, while the roots of the problems were left untouched. The result was that some long-term users became addicted: severe withdrawal syndromes followed discontinuation of the medication. The abuse of benzodiazepines had particularly bad consequences when the tranquilizer was combined with alcohol or other drugs.
By the turn of the century, physicians and their patients had learned to handle tranquilizers carefully. Furthermore, new preparations were introduced; their effects are much more selective than those of the evergreen Valium and its derivatives. They are used for treating very specific diseases and are an indispensable part of modern medicine, particularly in psychiatry and gerontology. Sternbach had the satisfaction of experiencing the rehabilitation of his most important discovery. After his official retirement in 1973, he still came to work (as a volunteer) every day at the Roche research laboratory in Nutley (New Jersey). He died in 2005 at the ripe old age of 97 years.
12 The Pill – a sexual and social revolution
It is hard to imagine that just fifty years ago, sex was considered to be grossly indecent if not downright disgusting; furthermore, any kind of family planning was illegitimate if not illegal. The latex condom officially was meant as a protection from venereal diseases. Of course it was generally used as a contraceptive, which was just tolerated.
Yet, lovers had to live in constant fear of an unwanted pregnancy that all too often ended with the suicide of the afflicted woman. Abortions were not allowed, an illegal one could lead to the hospital or even the graveyard. An illegitimate child was a horrible blemish; such babies were often made available for adoption. The father usually got away with it, but fatherhood proceedings could mean a lot of trouble, particularly since the tests available at that time were not very reliable.
In 1951, the Austrian-American chemist Carl Djerassi (b. 1923) modified the molecule of the sex hormone progesterone so that it was only slowly metabolized in the liver and thus could be administered orally. But even in his wildest dreams, Djerassi could not possibly have envisioned an application of his norethisterone as a contraceptive. His intention was to develop medication for treating menstruation disorders and infertility.
Things took an unexpected turn due to the research of Gregory Pincus (1905-1967), an American physiologist. In 1936, Pincus was working at Harvard University and succeeded in creating a rabbit from an unfertilized egg cell. The media made a big thing out of this; they were already speaking of “test-tube babies” and “immaculate conception”.
Pincus came under such pressure from his boss and colleagues that he left the university in 1944 and set up his own laboratory in the Boston area; he called it the Worcester Foundation for Experimental Biology. The leading women’s liberation organization gave him the mandate to develop an oral contraceptive. This took some time, but he eventually succeeded; in 1957 Pincus was granted a US patent for the very first “Pill”. Its active ingredient was Djerassi’s norethisterone.
Its mechanism of action was based on the discovery that progesterone, as well as Djerassi’s derivative, prevented ovulation, at least in lab animals such as rats and rabbits: for this reason the females could not get pregnant. Pincus closely cooperated with a professor of gynecology named John Rock (1890-1984), who hoped to help childless women become pregnant. Their common research showed that progesterone quite possibly might also work as a contraceptive in humans.
The development of hormone-type contraceptives then proceeded at breathtaking speed: the very first oral preparation was officially registered in the United States in 1957 as a gynecological drug. But the pressure of American women who needed protection from unwanted pregnancies became so intense that in 1960 no less than three pharmaceutical companies were allowed to sell oral contraceptives: Parke-Davis, Searle and Syntex.
Europe was not far behind and followed in 1961. In Switzerland, the consequence was that within two years the number of abortions dropped by 25 percent. On the other hand, nobody pretends that the sexual revolution that the Pill triggered was a pure blessing. Among other things, venereal diseases literally exploded; furthermore, the Pill indirectly contributed to the propagation of AIDS.
But thanks to the Pill, reproductive medicine finally became respectable. Until then, for politico-religious reasons, human reproduction had rated as a kind of divine mystery; interventions were strictly prohibited. Change was rapid, as animal experiments had prepared the ground: at last it was no longer politically incorrect to state that reproduction and contraception were two sides of the same coin. In 1978, after a lot of tests had been secretly performed, the first test-tube baby – Louise Brown – was born in England.
Nowadays, hundreds of millions of women all over the world consider taking the Pill or using another contraceptive as something absolutely natural: the method can be adapted to every woman’s condition and lifestyle. Important alternatives to the Pill are transdermal contraceptives, implants and intrauterine devices. Emergency contraceptives taken after unprotected intercourse disrupt or delay ovulation or fertilization; they are based on high doses of the same hormones or derivatives that are used in the normal Pills. Mifepristone, the controversial RU-486, induces abortion and is used in the first weeks of pregnancy; it is more effective and much less traumatizing than surgical abortion.
Reproductive medicine helped millions of couples to become parents; in former days, they would have been condemned to childlessness. But the happiest consequence of human reproduction having become an object of scientific research is that – at least in the industrialized countries – practically only wanted children are born. This is the best prerequisite for their development into healthy and happy human beings.
13 Smoking – involuntary suicide
In the first half of the 20th century the incidence of lung cancer grew at an alarming pace. This deadly disease had been extremely rare until the end of the 19th century, but in the following decades it acquired the status of an epidemic and developed into one of the most frequent causes of death – at least for men. During the same period, the incidence of several other malignant tumors, bronchitis and pulmonary emphysema increased dramatically. One generation later the exact same thing happened with women who were now often diagnosed with lung cancer.
It was obvious that the new epidemic was caused by a behavioral change; specialists were quick to point out that the cause was most probably cigarette smoking. In the 19th century, gentlemen occasionally indulged in smoking a cigar or a pipe. Burning this type of tobacco produces an alkaline smoke that is almost impossible to inhale. Then came World War I: under the horrible conditions of trench warfare a new smoking device spread with lightning speed: the cigarette. The tobacco used for its manufacture was fermented to make the smoke acidic. Inhalation did not hurt the lungs and brought an almost instant nicotine “kick”.
When tobacco burns, the alkaloid nicotine evaporates and is taken up by the respiratory tract and the stomach. This highly toxic and addictive drug mostly affects the nervous system and the blood vessels. Furthermore, the combustion gases of cigarette tobacco contain a long list of other toxic substances, such as formaldehyde, benzene, phenols and hydrocyanic acid. The color and opacity of cigarette smoke are due to an organic aerosol that forms a sticky brown condensate, the so-called “tar”. The tar particles easily penetrate the lungs when the smoke is inhaled.
Tobacco tar consists of several thousand organic substances, some 70 of which are carcinogenic. Having become addicted to nicotine, the cigarette smoker is forced to inhale substances that may cause his death. This is a long-term process: once cigarette smoking begins, twenty or more years may pass until a tumor is diagnosed.
Starting with the 1950s, medical journals published a lot of papers clearly showing a statistical correlation between cigarette smoking and the incidence of lung cancer. The print media followed, but hesitantly; after all, tobacco advertising was an important source of revenue. The tobacco industry counter-attacked massively: with its enormous financial power it could “buy” any number of experts who vehemently denied any causal relationship between smoking and cancer.
At the same time, the media were strongly pressured by the tobacco industry. Influential journalists from all over the world were taken on luxury trips all across the United States, presumably in order to quiet them down and create good will. They were shown tobacco plantations, spick-and-span cigarette factories and beautifully equipped research laboratories. It transpired only decades later that the latter had clearly confirmed the findings published in the medical literature.
The American health authorities played a pioneering role in starting anti-smoking campaigns. After all, lung cancer and other tumors caused by cigarette smoke had first grown to epidemic proportions in the United States, where they caused astronomical economic damage. The first measures were a massive increase of cigarette taxes, the obligation to print warnings on cigarette packs and advertising restrictions.
Soon patients suffering from lung cancer and other smoke-related diseases, the relatives of deceased lung cancer patients, patient organizations, private clinics and entire states in the US sued the tobacco industry for billion-dollar damages. The tobacco industry saw the light and braced itself for a slow and costly retreat. Advertising was shifted to the Third World and the product spectrum was massively diversified; enormous investments were made in packaging technology, beverages, food and even real estate. The e-cigarette is a newer invention: it generates only nicotine vapor – no tar. It is feared that this may cause a resurgence of smoking.
In the United States and in Canada, innumerable government authorities, hospitals, airports, plants, schools, restaurants, hotels and entire company headquarters declared themselves non-smoking zones in the 1980s. Europe and the Far East followed a decade later. It is now taken for granted almost everywhere that smoking indoors in public places, and even in parks and on beaches, is against the law. Smokers can now be observed in little clusters on sidewalks, even in bitter cold, lit cigarette in hand and rapidly inhaling.
Many countries force the tobacco companies to print messages such as “Smoking is deadly” on cigarette packs. In Australia, all cigarette brands must use the same packages with disgusting pictures of tobacco-induced tumors and other clinical consequences of smoking. In the old days, the male smoker was considered a sophisticated macho, the smoking woman a modern, fully emancipated being. Nowadays they rate as pitiful, suicidal addicts.
14 Sulfuric acid – unpleasant but indispensable
The Arabic scholar Jabir Ibn Hayyan (Geber to us Westerners), who lived in the 8th century A.D., systematically investigated sulfur and its compounds, including of course sulfuric acid (H2SO4). A century later, Geber’s spiritual heir, the Persian Muhammad Ibn Zakariya (Rhazes), succeeded in preparing concentrated sulfuric acid. Today, sulfuric acid is one of the most important basic chemicals; total worldwide production is around 150 million tons per year.
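Since the production chain S → SO2 → SO3 → H2SO4 consumes one atom of sulfur per molecule of acid, these tonnages can be checked with simple stoichiometry. The following is a rough sketch with rounded molar masses and an assumed 100 % yield:

```python
# Rough stoichiometry for S -> SO2 -> SO3 -> H2SO4:
# one mole of sulfur yields at most one mole of sulfuric acid.
M_S = 32.06      # molar mass of sulfur, g/mol
M_H2SO4 = 98.08  # molar mass of sulfuric acid, g/mol

# Tons of acid obtainable per ton of sulfur (ideal yield)
acid_per_ton_sulfur = M_H2SO4 / M_S
print(f"{acid_per_ton_sulfur:.2f} t H2SO4 per t S")        # about 3.06

# Sulfur equivalent of the ~150 million t/year world production
sulfur_needed_t = 150e6 / acid_per_ton_sulfur
print(f"{sulfur_needed_t / 1e6:.0f} million t of sulfur")  # about 49
```

One ton of sulfur thus yields roughly three tons of acid, so world production corresponds to some 49 million tons of elementary sulfur per year.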
Previously, a country’s sulfuric acid production was a measure of its degree of industrialization. Production always starts with sulfur dioxide, which is widespread in nature. Active volcanoes exhale it in enormous tonnages, but industry needs a more reliable source, such as burning the sulfur extracted from oil and gas, or roasting the sulfidic ores of copper, zinc and lead. In countries with no metal ores or oil, gypsum (i.e. calcium sulfate) is reduced to sulfur dioxide with coal and lime. If silica sand and clay are added, cement is obtained as a welcome byproduct.
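The gypsum route just described can be summarized by a simplified overall equation; the form below is a standard textbook rendering (of what is known as the Müller-Kühne process) and is added here for clarity, not taken from the original text:

```latex
% Reduction of gypsum with coal (simplified overall reaction):
2\,\mathrm{CaSO_4} + \mathrm{C} \longrightarrow
  2\,\mathrm{CaO} + 2\,\mathrm{SO_2} + \mathrm{CO_2}
% The CaO then combines with the added silica sand and clay
% to form cement clinker as the byproduct.
```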
Elementary sulfur is the major raw material of the sulfuric acid industry. Large tonnages are generated by the desulfurization of sour natural gas and high-sulfur oil. The highly toxic hydrogen sulfide (H2S) must be removed from the gas anyway. Some of it is burned to sulfur dioxide, which reacts with two parts of hydrogen sulfide, yielding elementary sulfur. This so-called Claus reaction is very elegant, as water is the only byproduct. Well into the 1990s, the offshore salt domes, mostly in the Gulf of Mexico, were another important source of sulfur. Huge complexes of platforms were built there solely for the extraction of sulfur.
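The two steps described above can be written out as equations (standard textbook forms, added here for clarity):

```latex
% Step 1: partial combustion of hydrogen sulfide to sulfur dioxide
2\,\mathrm{H_2S} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{SO_2} + 2\,\mathrm{H_2O}
% Step 2: the Claus reaction proper -- the SO2 reacts
% with two parts of H2S; water is the only byproduct
2\,\mathrm{H_2S} + \mathrm{SO_2} \longrightarrow 3\,\mathrm{S} + 2\,\mathrm{H_2O}
```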
The caprock of salt domes often consists of elementary sulfur and limestone; both were produced by bacteria that reduce gypsum (i.e. calcium sulfate), yielding sulfur and calcium carbonate; the carbon for the carbonate came from methane. The sulfur was brought to the surface in liquid form by means of the Frasch process: one drilled into the caprock and lined the borehole with a system of three concentric pipes. The outermost pipe carried superheated water into the caprock, where it melted the sulfur (its melting point is 112 °C); the liquid sulfur was lifted to the surface through the middle pipe by pressurized air, which itself was sent down the innermost pipe.
Eventually all Frasch platforms had to be written off; the last one closed in the year 2000. Environmental legislation forced the oil, gas and mining companies to immobilize the sulfur dioxide they generate: it can no longer be released to the atmosphere through 100 m high stacks (which previously was routine) because of the acid rain problem. As the only use for large tonnages of sulfur dioxide is the production of sulfuric acid, many oil companies became big producers of the acid. The result was that supply grew enormously and prices collapsed; in certain places sulfuric acid has a negative price, the buyer actually being paid to take it!
For the production of sulfuric acid, sulfur trioxide and not sulfur dioxide is needed. Fortunately, the trioxide can be obtained simply by air-oxidation of the dioxide, but this is a very slow process. For industrial purposes it must be accelerated, which in the 18th and 19th centuries was done in lead-lined towers with nitrogen oxides as the oxidant (the so-called lead-chamber process). Today the much more efficient contact oxidation process is used, the catalyst being vanadium oxide doped with potassium. The trioxide then reacts with water to form H2SO4.
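The contact process just described amounts to the following two equations (standard forms, added for clarity; in industrial practice the trioxide is absorbed in concentrated acid rather than in pure water, to avoid forming an acid mist):

```latex
% Catalytic oxidation over potassium-doped vanadium oxide
% (an equilibrium reaction, driven by the catalyst):
2\,\mathrm{SO_2} + \mathrm{O_2} \rightleftharpoons 2\,\mathrm{SO_3}
% Hydration of the trioxide to sulfuric acid:
\mathrm{SO_3} + \mathrm{H_2O} \longrightarrow \mathrm{H_2SO_4}
```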