Monday, March 26, 2007

Carbon Dioxide Emissions Solved? – or – Jimmy Carter Saves The World

I would like to apologize for a short diversion from fundamental thermodynamics, but this is a blog, after all. I'll get back to the second law shortly.

There are many opportunities for renewable energy but one stands out because it may inherently sequester carbon by enhancing a natural process that is already a major long term carbon sink.

Ocean Thermal Energy Conversion (OTEC) extracts solar energy through the temperature difference between warm surface water in the tropics and cold deep water: warm water is used to boil a working fluid with a relatively low boiling temperature, and the vapor runs through a turbine to generate power. Below about 1,000 meters, water temperatures are just above freezing everywhere in the ocean, so the vapor can be condensed by cold water brought up from the deeps. There are many different approaches using different working fluids, variations of heat exchanger and turbine technology, and different platform and cold water pipe designs (most OTEC designs are floating platforms, "grazing" in the open ocean). OTEC has been demonstrated as a technically feasible method of generating energy – I worked on the design of "OTEC Early Ocean Test Platform 1" in the late 1970s, which demonstrated various parts of an OTEC system on a platform converted from a small oil tanker, nominally producing one megawatt – but it was not economical at the time.

The unique feature of OTEC is the cold water. The temperature difference between the warm and cold water is small, so OTEC extracts only a small amount of energy from each cubic meter of water and thus uses prodigious amounts of water. In one design, a thousand cubic meters of water per second are required to produce seventy megawatts of net output power.
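The scale of that water use can be sketched from the numbers above; in this back-of-envelope check the seawater properties and the 4 C temperature drop are round-number assumptions of mine, not design figures.

```python
# Back-of-envelope check on OTEC water use. Flow and net power are from the
# text; seawater properties and the 4 C temperature drop are assumptions.
RHO = 1025.0       # seawater density, kg/m^3 (assumed)
CP = 4000.0        # seawater specific heat, J/(kg K) (assumed)

flow = 1000.0      # m^3/s of warm water (from the text)
net_power = 70e6   # W of net output (from the text)

# Net electrical energy extracted per cubic meter of warm water:
energy_per_m3 = net_power / flow        # 70,000 J/m^3
# Heat given up if each cubic meter cools by 4 C on its way through:
thermal_per_m3 = RHO * CP * 4.0         # ~16 MJ/m^3

print(f"{energy_per_m3 / 1e3:.0f} kJ of electricity per cubic meter")
print(f"net conversion ~{100 * energy_per_m3 / thermal_per_m3:.1f} %")
```

A fraction of a percent of even the thermal energy in the water ends up as electricity, which is why the flows are so prodigious.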

The cold water is also laden with nutrients. In the tropics, the warm surface waters are lighter than the cold water and act as a cap to keep the nutrients in the deeps. This is why there is much less life in the tropical ocean than in coastal waters or in the Arctic, Antarctic and North Atlantic and Pacific. The tropical ocean is only fertile where some feature such as an island or a submerged canyon causes an upwelling of cold water. One such place is off the coast of Peru, where the Peru (or Humboldt) Current creates nutrient-laden waters near the Equator. In this area, with lots of solar energy and nutrients, ocean fertility is on the order of 1800 grams of carbon uptake per square meter per year (mostly by microscopic photosynthetic phytoplankton). In the adjacent waters, and most of the rest of the ocean, fertility is typically well below 100 grams per square meter per year. This creates a rich fishery, but most of the carbon eventually sinks to the deeps in the form of waste products and dead microorganisms.

Throughout the entire world, these microorganisms currently sequester about forty billion metric tonnes of carbon per year. They are the major long term sink for carbon dioxide. A certain amount of carbon is converted to calcium carbonate, which eventually becomes limestone or other sedimentary rock and under very special circumstances, some of the organic material even becomes oil. Algae and other microorganisms that fed on them in ancient seas are the source of today’s oil.
We can make various estimates of fertility enhancement and sequestration, but a reasonable guess is that an OTEC plant designed to optimize nutrification might result in as much as 10,000 metric tonnes of carbon dioxide sequestration per year per megawatt. The recent challenge by billionaire Sir Richard Branson is to sequester one billion tonnes of carbon dioxide per year in order to halt global warming, so an aggressive OTEC program, hundreds of several-hundred-megawatt plants, might meet it.
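The arithmetic behind that last sentence is easy to sketch; the sequestration rate is the text's guess, and the 300 MW plant size is my assumption for "several hundred megawatts".

```python
# Scale check on the Branson-challenge arithmetic. The sequestration rate is
# the text's guess; the 300 MW plant size is an assumption for "several
# hundred megawatts".
seq_per_mw = 10_000           # tonnes CO2 per MW per year (text's guess)
target = 1_000_000_000        # tonnes CO2 per year (Branson challenge)

total_mw = target / seq_per_mw
plant_mw = 300                # assumed plant size
plants = total_mw / plant_mw

print(f"{total_mw:,.0f} MW of OTEC, i.e. ~{plants:.0f} plants of {plant_mw} MW")
```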

In economic terms, optimistic guesses at OTEC plant costs are in the range of a million dollars per megawatt. Since a kilowatt-hour of electricity generated by coal produces about a kilogram of carbon dioxide, a carbon tax of one to two cents per kilowatt-hour might cover the capital costs of an OTEC plant in carbon credits alone. The equivalent in gasoline would be ten to twenty cents per gallon. With gasoline above two dollars per gallon and electricity above ten cents per kilowatt-hour, these are not entirely unreasonable charges.
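A rough sketch of that payback arithmetic, assuming a 90% capacity factor since OTEC, unlike most renewables, can run around the clock:

```python
# Sketch of the carbon-credit payback. Capital cost and the credit range are
# the text's figures; the 90% capacity factor is an assumption.
capital_per_mw = 1_000_000    # $ per MW (text's optimistic guess)
capacity_factor = 0.90        # assumed: OTEC can run nearly continuously
kwh_per_mw_year = 1000 * 8760 * capacity_factor

for credit in (0.01, 0.02):   # $/kWh, the text's one-to-two cent range
    revenue = kwh_per_mw_year * credit
    print(f"{credit * 100:.0f} cent/kWh credit: ${revenue:,.0f}/yr per MW, "
          f"simple payback {capital_per_mw / revenue:.0f} years")
```

On these numbers the credit retires the capital cost over something like six to thirteen years, well within a plant's life.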

The actual effectiveness of OTEC in raising ocean fertility and thereby sequestering carbon has to be verified, and there has to be a careful examination of possible harmful environmental impacts – an old saying among engineers is "it seemed like a good idea at the time". An OTEC plant optimized for ocean fertility will also be different from one optimized to generate power, so any OTEC based carbon scheme has to include transfer payments of some sort – it won't come for free. Finally, who owns the ocean thermal resource? Most plants will be in international waters, though these waters tend to be off the coasts of the developing world.
As to this last question, there is an additional benefit: Another saying among engineers is "we aren't trying to solve world hunger". In this case, though, we may have. Increased ocean fertility may enhance fisheries substantially. In addition, a problem of OTEC is that the energy is "stranded" far at sea, possibly on a drifting platform – a thousand-mile long extension cord is not an option. However, many OTEC advocates have suggested making nitrogen compounds for fertilizers at sea – this is an energy intensive process, now mostly using natural gas, so fertilizers are expensive. OTEC fertilizer could be sold to developing countries at a subsidy, where it would greatly enhance farm yields.

It would seem that the Branson Challenge is met, and more, but as much as I would like to claim the Branson prize, I cannot. I worked on OTEC for a contractor under the ERDA, in a program initiated by President Carter, who I am told, was an OTEC enthusiast. I’m sure Jimmy Carter will find a good use for Sir Richard’s check.

Friday, February 9, 2007

The 3.9 Laws Of Thermodynamics

“A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended is its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content concerning which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown.”

(Albert Einstein, 1949)

Classical and statistical thermodynamics were among the great intellectual achievements of physics in the last two centuries, and among the least well known, especially statistical thermodynamics. Einstein’s interest in classical thermodynamics is especially remarkable in that it is an area of physics he is very rarely associated with, especially now that classical thermodynamics is generally regarded as a subject only worthy of juniors in mechanical engineering, not physicists. What is much less well known is that Einstein himself was not only interested in classical thermodynamics, but actually invented and patented an absorption cycle refrigerator. This type of device was once the dominant means of home refrigeration and will probably become an important tool in renewable energy, since it can be used to turn relatively low temperature heat, such as solar energy, into cold. This in turn would greatly reduce the need for fossil fuels to generate electricity on hot days. Hot days, with lots of solar energy, are the very times when air cooling is most needed, so this is potentially a match made in heaven.

However, we get ahead of ourselves, except to remark that absorption cooling is a good example of one key to making best use of solar energy. We have to understand the concept of availability, or quality, of energy, versus the required use of the energy, to optimize our use of it. In the case of cooling, our energy need is to create relatively small temperature changes, at relatively normal temperatures. This is “low quality” energy, as opposed to electricity, for example, because it is at a low temperature difference. We should be able to use other low quality energy, such as the relatively diffuse heat available from simple heat absorbing solar panels, to satisfy this need. This type of efficiency is often referred to as “Second Law” efficiency, which is our cue to start discussing thermodynamics.

There are (sort of) four laws of thermodynamics.

The “Zeroth” law, though obvious, is needed because it is not derivable from the other laws, and because it forms the basis for temperature measurement. It states that when two bodies are each at the same temperature as a third body, they are at the same temperature as each other. I guess its obviousness is why it is called the “Zeroth” law.

The next three are often described as “You can’t win, you can’t break even, and you can’t get out of the game”. The first two are pretty accurate statements of the first and second laws, but I’m not sure about the third.

The First Law of Thermodynamics

The first law is conservation of energy and mass, and says that energy (and mass) can’t be created or destroyed. It is generally stated in mathematical terms that mean the amount of energy and mass stored in a designated volume is the amount originally in the system, plus that going in, minus that going out. (The sign convention is that work by the system on the outside world is positive, and work done by the world on the system is negative.) There are a lot of things that are conserved in physics: mass, energy, momentum, spin, charge, current, voltage drop, (for subatomic particles called quarks) truth, beauty and charm, and, for physics departments in general, grants and other funding. It is often possible to analyze a system by writing a conservation law. (And in the old days of blue book exams, writing the appropriate conservation statement was generally worth at least partial credit.) It is a law of nature that we believe in because we have never seen it violated, but we can’t prove it from other laws.

For example, structural problems involving buckling can be very complicated, but one way of simplifying them is to look at energy conservation. Buckling occurs when the geometry becomes unstable: the initial deflection under load is into a shape that causes the structure to become less stiff, so that it deflects more, becoming even less stiff, and so on until failure. Calculating the initial deflection is fairly simple these days with Finite Element Analysis, but looking at buckling requires repeatedly recalculating the stiffness and deflection as the structure changes, and this is not as easy (this is a “geometrically non-linear analysis”); some FEA packages don’t have this capability (and most that do charge extra for it as an add-on). However, we can look at the first step and calculate the energy absorbed by the structure (the force with which it resists the load times the distance it deflects) and the energy from the load (the load times the distance it deflects). If the structure absorbs more energy than the load yields, the First law tells us that the structure is stable, without having to go into a non-linear analysis. First law approaches are very frequently used to get a "big picture" sort of approximation without getting into the details.

Another example of this is proposals to use algae for sequestering carbon at power plants, generally as biofuels. A First law look at this suggests some problems without going into any details. Algae get the energy to turn carbon dioxide into biomass from solar energy. A coal power plant is maybe 30% efficient, so it burns on the order of half a ton of coal per megawatt-hour, or hundreds of tons per hour for a large plant. Peak insolation is around 400 Btu/ft2/hour, so after a bit of unit conversion and "swags" at efficiency, several square miles of sunlight will be required to power algae sequestration, which is probably not locally feasible at a powerplant. This is not to say that algae fuels will not have an important role in either replacing fossil fuels (after all, algae made them in the first place) or sequestering carbon (again, algae were responsible for changing Earth's primordial atmosphere into the one we enjoy now by sequestering carbon dioxide), but it will take more than bioreactors at a powerplant.
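Here is one version of that swag, assuming a 500 MW plant and a generous 5% photosynthetic efficiency for the algae; only the insolation figure and the 30% plant efficiency are from the text.

```python
# A "swag" at the sunlight area needed for algae sequestration at a coal
# plant. Plant size and photosynthetic efficiency are assumptions; the
# insolation and plant efficiency figures are the text's.
plant_mw = 500                # assumed mid-size coal plant
plant_eff = 0.30              # from the text

# 400 Btu/ft^2/hr converted to W/m^2 (1055 J/Btu, 0.3048 m/ft, 3600 s/hr):
insolation = 400 * 1055 / (0.3048 ** 2 * 3600)   # ~1260 W/m^2 peak
photo_eff = 0.05              # generous algae efficiency (assumed)

# The algae must re-capture roughly the fuel's heat input to re-fix the CO2:
heat_in = plant_mw * 1e6 / plant_eff             # W
area_sq_mi = heat_in / (insolation * photo_eff) / 2.59e6

print(f"~{area_sq_mi:.0f} square miles of algae, even at peak sun")
```

Even at noon, with generous efficiencies, the bioreactor farm is measured in square miles, not acres.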

We have to realize that energy has a lot of different appearances, basically different units, but it’s all energy, and it’s all conserved. The term work is synonymous with energy, though it generally implies a mechanical form of energy. In mechanical terms, energy is force acting through a distance. The archetypical example is a raised weight. A one-pound weight one foot off the ground has one foot pound of energy. If we release it, it will fall, heating the air slightly through friction, possibly making a noise when it hits. Most of the energy will then become mechanical energy as the object it lands on (and the weight itself) deflect under impact. This mechanical energy will then swiftly degrade to heat. Energy can also be stored as chemical bonds in fuels or food (a calorie is a unit of energy, enough heat to raise the temperature of a kilogram of water one degree centigrade, specifically from 14.5 C to 15.5 C), or as charge in a capacitor, or in any number of other forms. The thermal equivalent of mechanical energy is quite small. I was once worried about the disk brakes on my car rapidly becoming hot to the touch and computed that stopping repeatedly from twenty miles per hour should only heat them a few degrees. Sure enough, the parking brake was not releasing fully, and I had to replace the pads shortly thereafter. The chemical equivalent is even smaller; consider how far you have to run to burn off a small amount of food.
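The brake story is easy to reproduce; the car and rotor masses below are my assumptions, but any reasonable values give the same "few degrees".

```python
# Reproducing the disk-brake estimate: the kinetic energy of one stop from
# 20 mph, dumped into the rotors. Car and rotor masses are assumptions.
m_car = 1500.0          # kg, a mid-size car (assumed)
v = 20 * 0.44704        # 20 mph in m/s

ke = 0.5 * m_car * v ** 2        # J; essentially all becomes heat in brakes

m_rotors = 20.0         # kg of steel in four rotors (assumed)
c_steel = 490.0         # J/(kg K), specific heat of steel

dT = ke / (m_rotors * c_steel)   # temperature rise per stop
print(f"~{ke / 1000:.0f} kJ per stop, rotors warm ~{dT:.0f} C")
```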

The English unit equivalent of a calorie is a British thermal unit or Btu, the amount of heat required to raise a pound of water 1 degree Fahrenheit, specifically from 59.5 F to 60.5 F. There are roughly four Btu in a calorie. The metric equivalent of a foot pound is the Joule (in the meter-kilogram-second, MKS, system a Joule is the force of one Newton acting through one meter) or the erg (centimeter-gram-second, CGS, system). You may sometimes see nutrition labels on Canadian food packages marked in kilojoules (kJ) instead of calories, because the Joule is part of the consistent metric MKS system.

(A note here for purists: I hereby acknowledge and affirm that “pound” is not properly a measure of mass, but a measure of force. A slug is the amount of mass that produces 32.2 pounds of force in the Earth’s gravity, which produces an acceleration of 32.2 feet per second squared. Thus the units of mass are actually pounds force times seconds squared per foot. However, “pound mass”, lbm, is frequently used to mean 1/32.2th of a slug, and “poundal” is sometimes used as the unit of force that accelerates one pound mass at one foot per second squared. In metric units, a kilogram is properly a unit of mass, and produces 9.805 newtons of force in the Earth’s gravity, an acceleration of 9.805 meters per second squared. All of this is of some importance because it is frequently possible to figure out what is happening in physics because the units have to match.)

(Another note for purists: There is a proper style for capitalization of units. In general the abbreviations for units deriving from proper names such as newton, joule, or British are capitalized, even when used in the middle of a compound unit, like kW, but not the units themselves. There are some other rules which I forget frequently. I hope, in general, to get them right, though I reserve the right to make mistakes. I will nonetheless welcome comments correcting me.)

Power is often confused with energy, but power is the rate at which energy is transmitted. A horsepower is 550 foot pounds per second (I guess the horse didn’t rate a proper name), a watt is one joule per second, and a horsepower is about 745 watts in the US (there is a slightly different metric horsepower, about 735 watts, but it’s rarely used here).

One important transformation of energy in classical thermodynamics is pressure and volume to and from heat, since generally heating a gas causes it to expand, or increase in pressure, or both. If we hold the pressure constant, the volume increases as we heat a gas. For a finite area, pressure times area is force, and if the volume increases, we have the area moving, which is thus force times a distance, or work. This work is the transformed energy we added as heat. The reverse also works. If we apply enough pressure to a gas to cause it to decrease in volume, it gets hotter, which anyone who has touched a bicycle pump after it has been used to blow up a tire has noted. Because this exchange is so common, it is worth noting the equivalent units for work and heat energy here - one Btu is 778 foot pounds. Finally, note the order of “foot pound”. A pound foot is not the same as a foot pound. By convention, a pound foot is a unit of torque or twisting – a force of one pound applied at the end of a one foot wrench.
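For a concrete instance of heat becoming pressure-volume work, consider heating a pound of air 100 degrees at constant pressure (ideal-gas behavior assumed, using standard handbook constants for air):

```python
# Heating one pound of air 100 F at constant pressure: how much of the heat
# comes back out as expansion work, using the 778 ft-lb/Btu equivalence.
R_AIR = 53.35      # ft-lbf/(lbm R), gas constant for air
CP_AIR = 0.240     # Btu/(lbm R), constant-pressure specific heat of air

dT = 100.0         # temperature rise, R (F-degrees are the same size)

# For an ideal gas at constant pressure, w = p * dv = R * dT per lbm:
work = R_AIR * dT / 778.0        # expansion work, converted to Btu
heat = CP_AIR * dT               # total heat added, Btu

print(f"heat in: {heat:.1f} Btu/lbm, expansion work out: {work:.2f} Btu/lbm")
```

A bit under a third of the heat added goes out as work pushing back the surroundings; the rest stays in the gas as internal energy.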

The issue of work and heat energy brings up another set of terminology. The internal energy of a substance (generally a gas or liquid) is the heat content. The enthalpy is the heat content plus the pressure of the gas times the volume per unit mass, or “specific volume”. Enthalpy is the total thermodynamic energy in a substance (relative to a standard condition) and is the quantity we are generally interested in for heat engines and similar systems. For this reason, enthalpy is generally the characteristic listed in tables of thermodynamic properties, and if you need internal energy, you generally have to compute it yourself from enthalpy, specific volume and pressure. (This is not, of course, the total energy of all types – hydrogen has a certain energy in terms of its temperature, pressure and specific volume but much more chemical energy if it was burned, and very, very much more if it undergoes a thermonuclear reaction.)
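As an example of computing internal energy yourself, using u = h − pv and the 778-foot-pound Btu from earlier; the property values below are illustrative round numbers, not from a real steam table.

```python
# Recovering internal energy from tabulated enthalpy: u = h - p*v, with the
# unit juggling English tables require. Property values are illustrative.
h = 1200.0     # Btu/lbm, tabulated enthalpy (illustrative)
p = 100.0      # psia
v = 4.9        # ft^3/lbm, specific volume (illustrative)

# psi * 144 -> lbf/ft^2; times ft^3/lbm -> ft-lbf/lbm; / 778 -> Btu/lbm:
pv = p * 144 * v / 778.0
u = h - pv

print(f"p*v term: {pv:.1f} Btu/lbm, internal energy: {u:.1f} Btu/lbm")
```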

The Second Law of Thermodynamics

The Second Law is not really a law any more, in the sense that a physical law is something which we believe to be true based on never having seen it violated in nature, but cannot prove from other laws. Statistical thermodynamics can prove it, but classical thermodynamics was not able to describe the Second Law in terms of physical phenomena. There are numerous statements of the second law, the most important being the Clausius statement: “No device that operates in a cycle can produce no effect other than the transfer of heat from a cooler body to a hotter one.” We can reverse the sense of the useful energy flow and also say that this means we can’t have a refrigerator that doesn’t require an input of work. The Kelvin-Planck statement is equivalent, but from the viewpoint of work: No device can operate in a cycle and raise a weight (or do other work) by exchange of heat with only one reservoir. This last machine is a perpetual motion machine of the second kind, and has been occasionally proposed, usually as part of a scam. This machine doesn’t create energy from nowhere (that would be a First Law perpetual motion machine), but rather purports to take energy, usually heat, from some vast reservoir, and turn the heat into work without rejecting any heat to another reservoir at a lower temperature.

We can prove the equivalence of the two statements and the Second law by postulating two systems:

First, consider a system that includes a Clausius (violation) heat pump, a cold reservoir and a conventional heat engine (which we know works), connected to an external hot reservoir. (Note that the cold reservoir is within the system, and the hot one is outside.) The Clausius heat pump would transfer heat to the hot reservoir from the cool one, and the heat engine would derive work from transferring heat to the cold reservoir from the hot reservoir. The cold reservoir would not get any colder, though, because the Clausius device would push the heat the engine rejects back to the external hot reservoir. The system thus comprises a Kelvin-Planck (violation) heat engine, producing work from only a hot reservoir. This is not a violation of the first law, though – the net heat leaving the hot reservoir is equal to the work the system makes.

Second, consider a Kelvin-Planck heat engine and a conventional heat pump, with two external reservoirs, one hot, one cold. The Kelvin-Planck engine is only connected to the hot reservoir and produces just enough work (by taking just enough heat from the hot reservoir) to drive the conventional heat pump; the two together are a Clausius heat pump, moving heat from the cold reservoir to the hot one without any input of work.

Using this logic, the Second law can be expressed as noting that if we put a hot reservoir in contact with a cold one, heat flows from the hot one to the cold one, but not the other way around.

This consideration gives us Carnot efficiency, which is the theoretical best efficiency of a heat engine. A heat engine operating in accordance with the Second law takes heat from a hot reservoir, rejects some of it to a cold reservoir, and what is left is useful work; this accounting is required by the First law, since we can’t make energy vanish any more than we can make it appear. Second law efficiency is basically about making the best use of energy by suiting the sources to the use. Kenneth Deffeyes, in Hubbert's Peak: The Impending World Oil Shortage (Princeton University Press, 2001, ISBN 978-0-11625-9) compares poor Second law efficiency to making every bit of a cow into hamburger, instead of first cutting off the steaks, and he also gives an example in his own home – his furnace captures more than 90% of the heat of the fuel it burns and uses it to warm his house. But the fuel burns at a very high temperature, so this high temperature could have been used in a heat engine for something more valuable, like making electricity, and the heat rejected by the heat engine would still be warm enough to heat his house. As a matter of fact, it is likely he could have heated his house with even less fuel by using a heat engine to run a heat pump, plus using the waste heat. Barry Commoner, in The Poverty of Power (Knopf, 1976, ISBN 0-394-40371-1), bases much of the book on this point.
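The Carnot limit itself is just one minus the ratio of the absolute temperatures, and a quick calculation puts numbers on the furnace point; the flame and room temperatures below are round-number assumptions.

```python
# Carnot efficiency between a gas flame and a living room, to put numbers on
# Deffeyes' point. Flame and room temperatures are round-number assumptions.
def carnot(t_hot, t_cold):
    """Best possible heat-engine efficiency between two absolute temps."""
    return 1.0 - t_cold / t_hot

t_flame = 3000.0 + 460.0   # R, a gas flame (assumed)
t_room = 70.0 + 460.0      # R, the house

eff = carnot(t_flame, t_room)
print(f"Carnot limit, flame to room: {100 * eff:.0f} %")
# A real engine would get only a fraction of this, but its rejected heat
# would still be warm enough to heat the house.
```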

The Third Law is basically that there is an absolute zero of temperature (and that it can never quite be reached).

We call the Second and Third laws of thermodynamics laws of nature because, prior to the advent of statistical mechanics, we had to take them on faith: we have never, ever, seen them violated, just as we have never seen energy or matter mysteriously appear from nowhere, and have never seen two things that are each at the same temperature as a third thing be at different temperatures from each other. Classical thermodynamics, however, didn’t have any way to really prove them from other principles.

By looking at the Second Law in various complicated ways, the concept of entropy, another property of a substance based on its temperature, specific volume and pressure, was developed. The change in entropy is the heat added (reversibly) divided by the absolute temperature. (Absolute temperature is degrees Rankine or Kelvin. Rankine is degrees F plus 460, since absolute zero is -460 F. Kelvin is degrees C plus 273.) This is derived by calculations involving the Second law and Carnot cycle engines to develop a temperature scale, but it never had a very good physical basis under classical thermodynamics. Whereas you can point to the vibration of the molecules in a substance and say that is heat, internal energy, or enthalpy, it wasn’t clear what entropy was. Popular literature often refers to entropy as disorder or something like that, but that still doesn’t give us a real handle on it.
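One small calculation shows why the definition is useful: apply it to plain heat flow from hot to cold, and the Second law drops out as "net entropy always increases" (the quantities below are illustrative).

```python
# Entropy bookkeeping for plain heat flow, hot to cold: the cold reservoir
# gains more entropy than the hot one loses, so the total always increases.
Q = 1000.0        # Btu transferred (illustrative)
T_HOT = 1000.0    # R
T_COLD = 500.0    # R

dS_hot = -Q / T_HOT      # hot reservoir loses entropy
dS_cold = Q / T_COLD     # cold reservoir gains entropy
dS_net = dS_hot + dS_cold

print(f"net entropy change: {dS_net:+.1f} Btu/R")   # positive, per the Second law
```

Run the heat the other way, cold to hot, and the signs flip: the net entropy change would be negative, which is exactly what the Second law forbids.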

However, statistical thermodynamics, especially when combined with a core concept of quantum mechanics, that energy only comes in discrete packages, which though small, cannot be subdivided, gives a wonderful answer and puts us on solid ground understanding entropy, which turns out to be a major key to the universe. (Stephen Hawking is known for having derived a very simple equation that calculated the entropy of a black hole by combining quantum mechanics and general relativity, thus leading him to believe he was near a Grand Universal Theory of Everything, the Holy Grail of theoretical physics. Alas …) Modern thermodynamics is really based on the Second law, which turns out to be not really a law after all.

More in the next episode. “Same Bat-time, same Bat-channel.”

Monday, December 18, 2006

On Dogs and Ferries

I like to take examples from one field and see if they give lessons for other fields, sort of the "case history" approach so popular in MBA programs. One concept I insist upon is optimizing design for environment and task, and there is a bit to this, especially with the increasing capability of Computer Aided Design/Computer Aided Manufacturing (CAD/CAM).

Perhaps the best and certainly the oldest reminder of the advantages of design for environment and mission is the diversity of dog breeds. The Nova Scotia Duck Tolling Retriever, for example, is not only suited for the Canadian Maritime climate, but also is specifically adapted for Nova Scotia hunting practices and waterfowl species: It "dances" along the shoreline to entice (toll) curious ducks into the hunter’s range and is small enough to work from a canoe.

My wife and I chose a "Toller" as a companion animal because they have the genial disposition and the intelligence of Labs and Goldens, and a playful nature due to their "tolling" behavior, but are not so large - they are the smallest of the retrievers. However, we had to accept that the "Nova Scotia" part gives a thick double coat and furred feet, which means lots of grooming. We now have a West Highland White Terrier, a "Westie". The Westie’s white coat was developed for a purely utilitarian purpose – Westies were bred to flush small game, and their color prevented them from being mistaken for game and shot. They have adapted well to their current companion role – they are small, playful, cheerful, and quite biddable, at least by terrier standards. However, again their original design comes into play – "terrier" refers to the earth, and more specifically to aggressively digging to raise small game. Terriers are also very scent oriented, and like most scenting dogs, they roll in interesting scents, presumably to keep them for later reference. Thus, a Westie’s idea of a great end to a day of digging in the mud and rolling in various rotten, smelly things is to leave little footprints on the rug on their way to leaping into your lap. Dogs aren't available other than "off the shelf" or maybe "proven parent" (literally), so when a dog is outside its original intended environment and mission, there are compromises. You can't get a dog custom designed to your exact wishes.

What does this have to do with ferries (and yachts, and military craft, and workboats …)?

One of my great frustrations as a naval architect is that so many boat owners insist on "off the shelf" designs or "proven parent", (or more often "proven parent, but"). In some cases, it is even required by law, and in the coming months we will probably see this kind of thing for new ferries in San Francisco. This is understandable, but wrong. People are used to buying things like cars or computers by going to the store and selecting one "off the shelf". Such products are made in huge numbers so it is feasible to spend lots of money on machinery and tooling to reduce the cost of making them through mass production techniques like automation – the tooling for a new model SUV is typically over a billion dollars, but the factory builds one every two minutes.

Boats, especially commercial boats, are not made in large numbers, though, and there aren't too many "Ferries 'R Us" outlets. The 2004 WorkBoat magazine's annual survey of new construction listed a total of 442 self-propelled commercial and military boats and small ships built by 45 different shipyards, so each yard only averaged ten boats. Even in any one shipyard, this comprises several different designs, so it isn't common to see more than three boats built to any one design. (One of the few large orders that year was for remote controlled target boats - for some reason they are needed in large numbers.)

A well-proven method of estimating the cost reduction in shipbuilding for follow-on vessels suggests that after about a twenty percent reduction from the first to the second (which covers engineering, initial setup, bid preparation and so on), the cost drops by four percent each time production doubles, and this only applies to certain categories of costs (metal, for example, doesn't get any cheaper once you have bought about half a boat's worth). Thus even for thirty identical vessels, the average cost of the first two drops only to about 90% of the first, and the average for thirty is still about 75% of the cost of the first one. Even this doesn't work out to a lot of savings in the overall picture. The first cost of the ship itself is a relatively small part of the lifetime operating expenses (just like a dog, oddly enough), because this cost is spread over the entire life of the vessel. At a recent workshop on ferry economics, it was pointed out that three employees added as much expense as $1.5 million of vessel first cost.
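That learning-curve description is easy to turn into numbers; this sketch of it (my reading of the method, not a published cost model) lands close to the figures above.

```python
import math

# The follow-on cost rule: 20% off the second hull, then 4% off each time
# production doubles. A sketch of the method, not a published cost model.
def hull_cost(n):
    """Relative cost of hull n (first hull = 1.0)."""
    if n == 1:
        return 1.0
    return 0.80 * 0.96 ** math.log2(n / 2)

avg_2 = sum(hull_cost(n) for n in range(1, 3)) / 2
avg_30 = sum(hull_cost(n) for n in range(1, 31)) / 30

print(f"average of first two hulls: {100 * avg_2:.0f} % of the first")
print(f"average of thirty hulls:    {100 * avg_30:.0f} % of the first")
```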

The obvious conclusion from this is that it doesn't take much optimizing to beat any economics of mass production. A relatively small savings in fuel, for example, covers a lot of first cost - saving one gallon per hour (probably less than 2%) is worth nearly $100,000 in first costs. Since the key to optimizing is designing for the specific service and environment, an "off-the-shelf" boat is probably a false economy. For example, a very important aspect of optimization is speed. There are differences between a twenty-knot hull and a twenty-five-knot hull that matter for power and fuel, and a boat designed for twenty-five knots will use more fuel at twenty knots than one designed for twenty knots in the first place.
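The gallon-per-hour claim checks out with round numbers; the operating hours, fuel price and service life below are all my assumptions.

```python
# The gallon-per-hour check: lifetime value of a one gal/hr fuel saving.
# Operating hours, fuel price and service life are all assumptions.
gal_per_hr = 1.0
hours_per_year = 3000       # assumed: a busy commuter ferry
fuel_price = 2.00           # $/gal (assumed, mid-2000s)
life_years = 20             # assumed service life

lifetime_saving = gal_per_hr * hours_per_year * fuel_price * life_years
print(f"lifetime fuel saving: ${lifetime_saving:,.0f}")
```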

This is especially the case for commercial craft that have to meet regulations. Various regulations can make a big difference in operating costs. Each country has some peculiarities that provide hard restrictions on the design. By exceeding a certain parameter a boat may change from one class of regulation to another, resulting in a major change. US rules, for example, have a break at 100 gross tons admeasurement from a "small passenger vessel", essentially a boat, to a large one, essentially a ship. Admeasured tonnage is the internal space of the vessel with certain peculiar exceptions and exemptions (under the unique US system) that can be exploited by a knowledgeable designer. Next time you are on a ferry, look around for a panel held on by strange looking bolts, probably at the rear of the vessel. It may be marked "tonnage door – keep clear". This door exempts the volume it encloses, and as far as the rules are concerned you are sitting in a "temporary covered space", none of which counts for tonnage. If you could get inside the hull, you would find some special very large frames. The space outboard of the inner face of these frames doesn't count either. Even the way the stern of some ferries slants forward is often part of a scheme to reduce admeasured tonnage; the door trick works best if the deck above the door is not the longest. Tonnage dictates the number of crew, their licenses and many other aspects of operation and construction cost, so most ferries are very close to 99.9 gross tons (find the certificate of inspection – it’s posted somewhere in the main passenger area). A design intended for some other country will have been designed to another set of rules and will have to be modified to achieve the savings from these peculiarities (and to eliminate weird things done to meet the regulations it was designed to), and this can have other consequences that move the design off an optimum.

Owners also fear that a new design is somehow unproven and high risk. This is only true if the whole concept is entirely new, but like new dog breeds, most saltations in fast craft design are hybrids of one sort or another, and they can thus be analyzed fairly reliably from data on the pieces that were combined. A friend and I have developed a new design concept for a high-speed hybrid planing hull that uses aft mounted hydrofoils. (Some people may remember a sort of dumpster with wings undergoing sea trials on the southern San Francisco waterfront a few years ago.) This concept is sufficiently new that I would be reluctant to advise an owner to adopt it without significant trials and research, but it was still reasonably predictable from combining hydrofoil data and planing craft data. It is also important to realize that there isn't much new under the sun as regards ship design – after all, we naval architects think of either Noah or God as the first ship designer (too bad the discussions about contract change orders have been lost). In the case of our radical design concept, when we sought a patent, we found that it had been patented over fifty years ago, and hybrid hydrofoils may have seen service in the German Navy in World War II.

The design of just about any monohull or catamaran is pretty routine, especially with current techniques of computer aided engineering. I have to reluctantly admit that naval architects are just another flavor of engineer, not some sort of maritime Harry Potter, and that ship design is just routine engineering. (I always wonder if people who want an off-the-shelf boat also want an off-the-shelf building, freeway or bridge.)

Most designers use an integrated design system that allows exact definition of the hull shape, which is then transferred to structural, hydrostatic and hydrodynamic analysis software to verify safety and performance. Modern structural software can then automatically optimize the hull structure for weight or cost (which in turn depends on the shipyard's practices) as well as ensuring freedom from failure. Finally, the ultimate (and usual) proof is model tests, in which a scale model is run in an instrumented basin. These tests can be done in calm water or in waves, and can model steering, wind and sea conditions. It is expensive – a good test program might cost $50,000 or more – but that is a fraction of the cost of the boat, and less than the cost of that gallon of fuel per hour. It is worth noting that recent tests I ordered for a modification to an existing patrol craft produced enough fuel savings to pay off both the tests and the modification in less than three years.

Actually, a proven design is often higher risk than a new one, because it is usually a "proven design, but". An owner likes a boat already in service with some other owner and wants one just like it, with just a "few minor changes". The changes usually add weight, adversely impacting speed and stability, or have other unforeseen consequences – but the design is "proven", so no one checks it out carefully. One of my favorites was an owner who needed a large hydraulic winch (among other things) added to the aft deck of a workboat. The parent design didn't have enough room for the new power pack in the equipment space aft of the living quarters, so it had to go in the engine room forward of them. The high-pressure hydraulic lines ran through the overhead of the captain's cabin, and of course, they constantly leaked. Weight additions are an especially high risk for higher speed craft, which depend on dynamic lift – fast craft are weight sensitive, and it is easy to add just a little too much weight, so that the overloaded hull loses speed all out of proportion to the added weight.

The computer has also reduced the cost of specialized design and construction. The same model developed for analysis is transferred to a detailed structural and equipment design package, which provides all sorts of automatic tools for the routine development of the structure, piping and other details. The result is a three-dimensional "product model". Before modern drafting was invented, an "Admiralty model" – an actual scale model – was used to develop the wooden ships of that period's "Iron Men". We have come full circle: "Silicon Person" has replaced paper drawings with an electronic 3D Admiralty model. However, the new model "talks" directly to computer controlled equipment that cuts metal (or foam for fiberglass molds) automatically. The model also provides data to material and work control software and handles all the various e-business functions (ship design and construction collaboration over the Internet is now routine). There are lots of other improvements, mostly enabled by computers, that have radically improved productivity as well, and any reader not yet bored can find out more in the Journal of Ship Production, the Transactions of the Society of Naval Architects and Marine Engineers, or Marine Technology. All this means that the cost of a boat to a new design is much lower than it used to be. One shipyard, which was already using computer steel cutting, reduced construction labor costs by 20% on the first 160' offshore supply vessel (OSV) built with a product model and other improvements – without any increase in design costs.

The bottom line is that if an existing vessel or design provides exactly what you want, fine, but otherwise, don't compromise and don't be afraid to get just what you want. It won't be very expensive or risky in the short run and will be well worth it in the long run.

This also has much larger implications for business, especially as regards the impact of CAD/CAM. Thanks to computers and flexible automation, we have the potential for a new age of customization: it is now feasible to manufacture certain types of objects as largely custom designs, or at least highly customized ones. The latter case is worth exploring a bit, again in a marine context. I was chief engineer of Munson Manufacturing, a shipyard building aluminum workboats. Munson's method was to design a custom boat within the constraints of a parameterized hull form that was readily constructable on the yard's flexible tooling system. The hull parts were designed in 3D CAD and computer cut, and all of the systems were standardized at a basic level of components, details and interconnections. (Everything was also parameterized in a computerized bid and ordering system.) This enabled a custom boat that suited the owner's needs very well to be built at a low price by simply assembling standard parts.

I can see this as readily applicable to a wide range of products where a high level of customization improves quality (defined as best meeting the customer's needs) enough to be worth the additional cost – and clever use of CAD/CAM and related technology would also reduce that cost. Two examples that are immediately feasible are clothing and medical devices (though I recently saw a challenge on the Food Network where one competitor used a CNC milling machine to make a gingerbread house).

In the case of clothing, there are any number of sources for reasonably affordable scanning devices that can produce a three dimensional model of an individual. (For measuring propellers, I have used a Faro arm capable of digitizing a person – it cost about $15,000, and its 0.0001" accuracy is probably excessive for measuring people, but even at that price it's probably affordable.) Once a person was digitized, a garment designed on a standard size could be parametrically morphed to fit the individual, then adjusted as required to make it look and feel right – lapel width proportionately adjusted and so forth. It could be tried on virtually, cloth and color selected, and then a computer driven cutter would cut all the parts. It would probably cost more to do the sewing locally, but this might well be covered by reduced shipping and handling costs, and especially by eliminating layers of production, wholesaling, advertising overhead and so on. In fact, since a custom garment would be guaranteed a sale at full price, the cost of all those discounted items in end-of-season sales (and, worse for profits, showing up at TJ Maxx) would be eliminated. We could see a return of the "bespoke" suit, and this would certainly be a boon for women who want clothes that fit well and look the way they want, without worrying about what is on the shelves just because teenage girls think it's cool.

A more important application would be medical devices. Items such as hip and knee replacements could be custom made from a parameterized model (supported in design packages like Autodesk Inventor and SolidWorks) developed by digitizing data from CAT scans and physical measurements, instead of being chosen from a series of premade, sized products. A three dimensional printer (a sort of ink jet device that spits out plastic instead of ink, and does it in three dimensions) would make plastic parts that could be used to make molds, or molds directly. In the case of metal parts, most stainless steel medical devices are investment cast anyway, so the 3D printer would make the patterns. The data for the part would go out over the Internet, and the part would come back a day or so later. (We got our precut metal from a cutting service 200 miles away two days after we uploaded the computer data to their site.)

In the long run, clever design approaches, especially the 3D product model tightly linked to analysis tools, could revolutionize a wide range of products. This would not only make the products better, it could bring back manufacturing jobs. It would also counteract what I see as a depressing tendency towards homogenization in products – call it the Wal-Mart effect. The best products might be more expensive than those mass produced overseas, but their better fit to the specific customer's needs would justify the cost, and ultimately the real prestige item would not be a brand, but a custom, one of a kind.

Readers interested in more information on modern ship production can go to http://www.sname.org or http://www.NSRP.org. A typical suite of ship design software can be seen at http://www.shipconstructor.com. Those interested in Nova Scotia Duck Tolling Retrievers can go to http://www.nsdtr-usa.com or http://toller1.com .

Ocean energy

In Julius Caesar, Brutus tells us:

"There is a tide in the affairs of men,
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and in miseries.
On such a full sea are we now afloat;
And we must take the current when it serves,
Or lose our ventures."

This seems most appropriate here because much of the point of this blog is opportunities, and especially those related to the ocean (including literally tidal power).

As I've noted, I am the co-chair (with Dr. Dan Walker of Oceanic Consulting, http://www.oceaniccorp.com ) of the Society of Naval Architects and Marine Engineers ad hoc panel 17 on ocean alternative energy. Though this panel has just been formed, naval architects have been involved in ocean alternative energy for decades. I gave a presentation at the SNAME annual meeting on October 13; if I can figure out how, I will upload the pdf, and meanwhile I'll also convert it to a post. An earlier version (given at a SNAME session in May) is posted on the Autodesk sustainability website (www.autodesk.com/green) and on Tim Colton's marine website (www.coltoncompany.com, under May news).

This panel was formed to enhance communication among these projects, to share data, and to improve public awareness of the opportunities for oceanic sources of renewable energy. I will try to keep this blog updated with information from the panel.

There were representatives from a number of active ocean energy projects (which is why the panel was constituted), including several types of floating wind turbines, at least one of which (a tension leg platform) is going into service off California this winter. I see offshore floating wind turbines as a very good idea for a number of reasons, mostly having to do with the "not in my backyard (or ocean view)" problem, and also the problem of bird and bat kills (see www.batcon.org for bat issues). Ocean winds also tend to be better distributed, both geographically and in time, than winds on land, and of course there is a great deal more ocean real estate available. However, effective offshore wind turbines will require advanced platforms, mainly using concepts the offshore oil industry developed for marginal deep fields, because the cost of a bottom-founded platform goes up roughly as the cube of water depth, and simple barge mounted turbines will probably neither be cost effective nor have acceptable motions. The three main concepts are spars, semi-submersibles and tension leg platforms, each of which will be discussed in further posts, but meanwhile they can be looked up in Wikipedia.
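The cube-law rule of thumb is easy to illustrate. This sketch normalizes cost to an arbitrary 30-meter reference depth; only the exponent comes from the post, and no absolute costs are implied.

```python
# The post's rule of thumb: a bottom-founded platform's cost grows as the
# cube of water depth. Costs here are purely relative, normalized to an
# arbitrary 30 m reference depth chosen for illustration.
REFERENCE_DEPTH_M = 30.0

def relative_fixed_cost(depth_m: float) -> float:
    """Cost relative to the reference depth, under the cube law."""
    return (depth_m / REFERENCE_DEPTH_M) ** 3

for depth in (30, 60, 120, 240):
    print(f"{depth:4d} m -> {relative_fixed_cost(depth):6.0f}x reference cost")
```

Doubling the depth multiplies the cost by eight, which is why floating concepts borrowed from deepwater oil (spar, semi-submersible, tension leg) take over beyond modest depths.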

There were also a number of wave energy projects, and at the May meeting, an active Ocean Thermal Energy project (www.seasolarpower.com).


Tuesday, December 12, 2006

What this blog is about:

I am mainly interested here in alternative and renewable energy, hence the name. Fans of "Wallace and Gromit" will recognize it as homage to the scene in The Wrong Trousers (from Aardman Animations / Nick Park) where Gromit is reading Electronics For Dogs. This is certainly not to imply that readers of this blog are somehow unintelligent (someone once said "Thou art wise and liberal as a dog", meaning it as a deep compliment), and people familiar with the Wallace and Gromit series will know that Gromit is generally the wiser of the pair, and often the one more involved in the technical details.

My particular interest in W&G comes from my lifelong interest in invention and inventors – as a child my hero was "Gyro Gearloose" of the Carl Barks Disney Comics and Stories series and I always wanted to be an inventor, so I became an engineer. (And, an inventor, I guess. I co-hold US patents 4,974,539 and 5,134,954 so it is official, and I would submit that they have a certain whimsical appeal worthy of W&G or Gyro Gearloose.)

In particular, I am a naval architect – a member of the only engineering discipline without "engineering" in its name, because it pre-dates use of the term "engineer". (Originally engineers built and operated engines of war, and were soldiers, until "civil" engineers came about – engaged in similar tasks but for peaceful pursuits, and presumably more polite, in that they didn't heave rocks into your castle.) Naval architecture, or more often, naval architecture and marine engineering (NAME), is concerned with all engineering involving ships and boats, so it is a wide field, often described as "a mile wide and an inch deep". (I hold licenses in both mechanical engineering and, specifically, naval architecture and marine engineering, so I guess that description is probably accurate.) More recently, NAMEs have also become involved with other objects in the sea, most notably platforms and vessels for oil exploration and development, which range from more-or-less conventional ships that happen to have oil drilling equipment aboard to bottom founded platforms that only float for a few days while being towed into place and installed.

I have had the privilege of working on a variety of marine projects, some of them very unusual, including about five years in the offshore oil industry in the US and the UK. I have also worked on conventional cargo ships, working craft such as research vessels, ferries, tugs and fishing boats, military vessels, marine propulsion systems and most oddly, amphibious vehicles. I am now a senior naval architect supporting various smallish vessels, including military patrol craft, working craft, rescue craft and so on. I am chair of the Society of Naval Architects and Marine Engineers Small Craft Technical and Research Committee ("small craft" to SNAME means up to about 200’ long or so, it seems). I am therefore also occasionally involved in a variety of interesting small ship projects on the side.

However, I am also co-chair of a newly formed ad hoc panel on ocean renewable energy, and that brings us back to the main subject. I was originally involved in renewable energy during the Moral Equivalent Of War (MEOW), as part of a test of Ocean Thermal Energy Conversion, a method of making power from the temperature difference between warm surface water and cold deep water. This is, of course, a solar energy system, using the ocean itself as the collector. The project did demonstrate the technical feasibility of OTEC as a means of energy production, but the price of oil plummeted shortly thereafter, and alternative energy became much less interesting. In addition – and this is critical – during MEOW, interest rates were very high; I recall home mortgages above 13%. All renewable energy amounts to paying up front to get something free later, so interest rates are a key element of economic viability, and those rates doomed most renewable schemes.

Since then, though, energy has become dear again. We may have passed a peak in oil production; most people are concerned about global warming, a consequence of releasing trapped carbon from fossil fuels; and most obviously, energy is expensive. In the US, oil represents about a billion dollars a day sent overseas, money that I can't help but believe would improve our economy if it stayed home. Fortunately, there are a lot of opportunities to do something about it.

In 2004, the US used about 88.5 "quads" (88.5 quadrillion British thermal units – 88.5x10^15 Btu, where a Btu is the amount of energy needed to heat one pound of water one degree Fahrenheit) of fossil fuels, which is a phenomenal amount. On the other hand, this is about the same amount of solar energy that falls on just Fort Dix and the White Sands Proving Ground in New Mexico, and though the solar figure is raw energy before any conversion losses, so is the fossil fuel number. This gives us a lot of opportunities. Improved efficiency provides opportunities as well, and a Btu saved is more than a Btu earned, since you don't incur efficiency losses to convert it (just as after-tax savings are more valuable than pre-tax earnings).
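As a rough sanity check of that comparison, we can compute the land area whose annual sunshine equals 88.5 quads. The insolation figure is my assumption (roughly typical of the American desert Southwest); the unit conversions are standard.

```python
# Back-of-the-envelope check of the "88.5 quads" comparison. The insolation
# figure is an assumption, not a number from the post.
BTU_TO_J = 1055.06
us_fossil_2004_J = 88.5e15 * BTU_TO_J          # 88.5 quads, in joules

insolation_kwh_m2_day = 5.0                    # assumed average insolation
insolation_J_m2_yr = insolation_kwh_m2_day * 3.6e6 * 365

area_m2 = us_fossil_2004_J / insolation_J_m2_yr
print(f"Area receiving 88.5 quads/yr of sunlight: {area_m2 / 1e6:,.0f} km^2")
```

This comes out on the order of 14,000 square kilometers, the right order of magnitude for a pair of large military reservations, so the comparison holds up as an order-of-magnitude claim.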

I am somewhat disappointed that many of the opportunities for energy generation and conservation, both ocean and shore side, are not well known. I attribute much of this to the tendency of engineers to speak only to each other, and that is what this blog is mainly about.

There are at least six completely different viable solar-to-electricity technologies, but the public only hears of photovoltaics, and very occasionally the Stirling cycle – both of which, I would suggest, are the top competitors for least cost effective approach in the short term.

One major electric load is air conditioning in summer, as we saw during the power crises in Texas and California last summer, as well as the Northeast blackout. In the late 1920s, Albert Einstein (with Leo Szilard) patented an absorption cooler – a refrigerator run by heat. Gas-fired absorption machines became a common form of home refrigeration for two decades and are still used in recreational vehicles and other applications where electricity is not available. Solar powered absorption chillers were developed some time ago and are literally off-the-shelf, and they are a good match, since the need for cooling is fairly well correlated with the local availability of solar energy. These systems can also be run by waste heat (such as auto exhaust) or by fuels such as agricultural waste, which might work well in a food processing plant. There are also even simpler and less expensive means of reducing the insolation (solar heat load) on buildings, which reduce the power required for air conditioning.

As an example of a very humble but simple opportunity, placing a waste heat exchanger in the stove or oven hood of a restaurant or bakery, and using it to heat water, could save several kWh per day per installation. A restaurant hood typically costs $10,000, but the heat exchanger alone would probably cost at most $300, so it would be essentially invisible in the cost of a new hood, could be retrofitted to an old hood, and would pay for itself in less than a year. I note that Whole Foods, which has an active commitment to renewable energy, has not fitted these devices to its in-store bakeries, but again, I attribute this to ignorance of the possibilities. Admittedly, this is a small thing, as are many energy conservation measures, but from Zechariah: "Who hath despised the day of small things?"
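Here is a hypothetical payback calculation for that $300 heat exchanger. The post gives only the cost and "several kWh per day", so the daily saving and the electricity price are my assumptions.

```python
# Hypothetical payback for the $300 stove-hood heat exchanger. The cost and
# "several kWh per day" come from the post; the specific savings figure and
# electricity price are assumptions for illustration.
cost = 300.0              # $ for the heat exchanger alone (from the post)
kwh_saved_per_day = 8.0   # assumed interpretation of "several kWh per day"
electricity_price = 0.12  # $/kWh (assumed)

annual_saving = kwh_saved_per_day * electricity_price * 365
payback_years = cost / annual_saving
print(f"Annual saving: ${annual_saving:.0f}, payback: {payback_years:.2f} years")
```

Under these assumptions the payback is a bit under a year, consistent with the claim; even at half the assumed savings it would still be under two years.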

There are also opportunities in the developing world. All of the global warming scenarios I have seen assume the developing world follows a fossil fuel path similar to that of the developed world, but we have to ask why. Fossil fuel is now much more expensive, we have alternatives, and some of those alternatives work better in much of the developing world than in the developed world. Solar energy using thermal processes that don't require sophisticated manufacturing should be all over Sub-Saharan Africa, since much of the cost of making and running these systems is labor, and the other main factor is how much the sun shines. This might not amount to a great deal of environmental impact, but it would reduce the carbon load and alleviate poverty. It might even be possible to farm biofuels in these places, so the world supply of fuels would be diversified and increased without carbon load, and rural farm incomes could rise. Consider that farming the alga Botryococcus braunii in ponds covered with plastic sheets might yield 100,000 liters of fuel per hectare. That is about 250 barrels per acre, around $5,000 per acre – good farming money even in the US, and enough to increase a small African farmer's per capita income by a factor of ten.
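The unit conversions behind those figures can be checked directly. The yield is the post's number; the $20 per barrel value is implied by dividing the post's $5,000 per acre by its 250 barrels per acre.

```python
# Checking the algae-fuel arithmetic above. The 100,000 L/ha yield is the
# post's figure; the $20/bbl price is implied by its $5,000/acre claim.
# Only the unit conversion factors are standard constants.
L_PER_BBL = 158.987       # litres per oil barrel
ACRES_PER_HA = 2.47105    # acres per hectare

yield_L_per_ha = 100_000
yield_bbl_per_acre = yield_L_per_ha / L_PER_BBL / ACRES_PER_HA

price_per_bbl = 20.0      # implied by the post's figures
revenue_per_acre = yield_bbl_per_acre * price_per_bbl
print(f"{yield_bbl_per_acre:.0f} bbl/acre, about ${revenue_per_acre:,.0f}/acre")
```

The conversion gives about 255 barrels per acre and roughly $5,100 per acre, so the post's round numbers check out.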

Hybrid technologies – mixing fossil fuels and alternative energy – may also be possible. Electricity from solar power plants can be used to make hydrogen (and oxygen) from water. (In fact, recent developments have shown direct electrolysis of water to hydrogen and oxygen on an appropriate matrix of iron oxide doped with other metals.) We can use the hydrogen with coal to make liquid hydrocarbon fuels, and burn the remaining coal in the pure oxygen, giving very high temperature gas. This gas is salted with metal vapor, ionized, and run through magnetohydrodynamic generators (which extract power from a conductive fluid passing through a magnetic field). The exhaust from these generators is still hot enough to run conventional steam boilers, with the result that the total power plant has very high thermal efficiency due to the large temperature difference, has lower capital costs than exotic gas turbines, and produces an exhaust of nearly pure carbon dioxide, which can then be sequestered.

Obviously, there are many more such opportunities, most of which I haven’t thought of, but someone has. Many of these are policy changes that modify behavior rather than technology, though technology is sometimes an enabler.

In this blog, I intend to bring out my own crazy ideas and those of others, especially those that combine technologies to achieve a new result. This is especially the case with developments in materials – sometimes new materials enable an old process to be economic or feasible.
I will also try to explain engineering concepts that are important, especially as regards thermodynamics, and more especially "unifying concepts" such as entropy, which ties together temperature, energy and even time – a concept of which Kittel (Thermal Physics, J. Wiley & Sons, 1969) says, "This is a definition whose simplicity leaves us breathless." (You might consider this homage to another of my heroes, Isaac Asimov, the great science explainer.)

I also reserve the right to deviate into other matters. These include the purely nautical, as that is my day job, especially where it impacts energy and policy (urban ferries, for example), cases that are just interesting, such as sailing yachts and super yachts (both of which are under the purview of my SNAME committee), and probably propellers, which I find fascinating (feel free to skip those).

Much of my interest in small craft is in small shipyard producibility and especially the impact and implementation of CAD/CAM to improve producibility. This seems a pretty narrow subject, but my point here is that because of the unique problems of shipbuilding, shipyards have been forced to be very clever at exploiting CAD/CAM, and this is a sort of "case study" which might be interesting in other areas.

I will probably also comment on other issues of interest, such as the state of engineering, K-12 science and technology education, and other areas that affect engineering. I may also deviate further afield – the SNAME biography I use with my technical papers ends with "His current interests include application of canine evolution and behavior to ship design" – so I will probably work some dog material in, though I do promise, despite my admiration for the late James Quinby, San Francisco admiralty lawyer and poet, not to include any doggerel. (My co-author, Paul Kamen, wrote the response in verse to one of the commenters on our recent SNAME paper on urban ferries – I don't have his talent for either verse or humor, except for bad puns, I'm afraid.)

I hope that this blog is entertaining and enlightening, and better yet, I hope that someone may find a lead to a worthwhile idea in it. I will do my best to be fair and accurate, and to give appropriate references and sources for further information. I will also try to keep posts short, and much less formal (and less rigorous) than technical papers.

In a sense, this is what I would like to see as "op-ed" pieces, if only I were a decent writer.

Finally, the opinions expressed herein are those of the author only, and do not represent official policy of my employer, or the Society of Naval Architects and Marine Engineers, or the host of this blog. I am solely responsible for its contents.