October 29, 2002
Where Is Moore's Law for Software?, or, The Mythical Man-Month Strikes Back

The Commerce Department's series on "investment in information technology equipment and software" has three components--computers and peripherals, software, and "other infotech." If you look at the relationship between the nominal and the real components of each series, you see a very interesting pattern. The figure below graphs "value"--how much in the way of a 1996-dollar unit of computers, of software, or of "other infotech" you get by spending a dollar in each year since 1987. (Since the current national accounts are constructed using 1996 prices as a reference basis, by definition the "value" indices for that year are equal to one.)

In computers, you get 3.6 times as much bang for the buck today as in 1996--and 12 times as much as in 1987. (Never you mind about Bill Nordhaus's estimates that show much faster productivity growth in "computation"--an 8000-fold multiplication rather than a twelve-fold one.) But for the other two components, there is no motion in the value indices. According to the Bureau of Economic Analysis, you get only a slightly larger real value of software from a dollar today than from a dollar in 1987. And the same goes for "other infotech" spending.
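A quick back-of-the-envelope check on what those multiples imply as compound annual rates (illustrative arithmetic only, using the figures quoted above):

```python
# Compound annual improvement implied by a cumulative multiple.
def annual_rate(multiple: float, years: int) -> float:
    return multiple ** (1 / years) - 1

# Computer value per dollar: 3.6x since 1996, 12x since 1987.
print(f"{annual_rate(3.6, 2002 - 1996):.1%}")   # ~23.8% per year
print(f"{annual_rate(12.0, 2002 - 1987):.1%}")  # ~18.0% per year
```

Against rates like those, a software value index that barely moves over fifteen years is striking.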

Can this really be true? Certainly programmers today have much better tools at their disposal than they did in 1987. Why has their measured real productivity not increased?

Clearly this is an issue on which I need to develop an informed view (or borrow somebody else's)...

All these numbers are from the Commerce Department's Bureau of Economic Analysis's National Income and Product Accounts database.

Posted by DeLong at October 29, 2002 04:35 PM | Trackback

Hardware is complex, but can be stamped out in the billions, by automated plants.

Software, despite all the automated aids used in its production nowadays, remains a form of handicraft. On top of that come all the costs of keeping compatibility with previously fielded systems.

A form of Brooks' law is at work for nearly all software, even a lot of open source software, which has been touted as breaking that law.

Open Source software has come to be more reliable than closed source software (e.g. Linux or BSD vs Windows) but is not necessarily any easier or quicker to produce.

Posted by: Michael Comiskey on October 29, 2002 05:20 PM

This is certainly the conventional wisdom among actual programmers.

In Brooks' "Mythical Man Month", he notes a piece of programmer folk wisdom, which is that a programmer has roughly constant productivity measured in lines of code regardless of the language he works in. This is the basic argument for using high-level languages -- they let a programmer do more with each line of code.

In 1977, programmers would typically write in assembly language, writing out each individual machine instruction they wanted the machine to execute. Let's take that as the baseline for expressivity. The C language is perhaps 3 to 4 times as terse as raw assembly language. Java is perhaps 30-50% more terse than C. The highest-level languages in common use -- Lisp, ML, Smalltalk and their ilk -- are maybe 1.5 or 2 times as terse as Java. So someone using best-of-breed language technologies, like Smalltalk or Lisp, will be roughly an order of magnitude more productive than an assembly programmer.
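Multiplying the midpoints of those (admittedly folk) estimates bears out the order-of-magnitude claim; a trivial sketch, with the multipliers taken straight from the paragraph above:

```python
# Folk-wisdom terseness multipliers, midpoints of the ranges quoted above.
asm_to_c = 3.5       # C vs. raw assembly: "3 to 4 times"
c_to_java = 1.4      # Java vs. C: "30-50% more terse"
java_to_lisp = 1.75  # Lisp/ML/Smalltalk vs. Java: "1.5 or 2 times"

cumulative = asm_to_c * c_to_java * java_to_lisp
print(f"high-level vs. assembly: ~{cumulative:.1f}x")  # ~8.6x
```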

But these best-of-breed technologies represent only a very small chunk of the total code written! Even Java, despite its astonishing growth rate, is still a minority language. Most programming is still done in ancient warhorses like C, COBOL, and the like. Even the uptake of tools is slow: it's a sad fact that most software is written without modern test harnesses, debuggers, version control systems, code browsers, bug tracking, or editors. The gap between the best and the average shop is so vast that it's difficult to believe that they are in the same business.

But I don't know if this conventional wisdom is true! There are a lot of ways the conventional wisdom could have gone wrong. I mean, computer scientists don't even know how to design a language that most programmers will reliably want to use. (If you want to hear an earful, find Richard J. Fateman at Berkeley CS and ask him why Lisp never caught on.)

Posted by: Neel Krishnaswami on October 29, 2002 05:25 PM

Brooks himself explains this riddle - it's the whole point of the book. (I think The Mythical Man Month is one of those books no one actually bothers to read.)

The problem is that coding is (relatively) simple, and is usually the shortest task of any software project. The rest of the project (gathering requirements, design, documentation, and verification) takes up the bulk of any software schedule. It's true that we have excellent tools for producing code, but the tools available for the rest of the process are primitive (and, in my personal opinion, of dubious value).

The rest of the process is also less likely to receive the attention it deserves, which is why software projects are notorious for being over budget and past schedule. If I had a nickel for every schedule I saw that had 4 months of coding and 2 weeks of test... (cf. the idiots I work for).


Posted by: Mark Wright on October 29, 2002 06:36 PM

Programming and systems analysis are damn hard. More large software products fail than succeed, despite advances like high-level languages and object-oriented programming. Productivity varies intensely between programmers, with the best outperforming the average by as much as 10:1 -- and producing better code, moreover. Most customers don't know what they want or need, and most systems analysts are not very good at helping them figure it out.

Despite everything, programming and systems analysis are still at the craft stage; they have not been standardized, in spite of many attempts to do so. Thus each new program is, in effect, a new project with a very high chance of failure. The canned tools and techniques of prior projects do not increase the odds of success as much as one would hope, and if applied clumsily and without understanding, they can prove disastrously inappropriate to the task at hand.

Posted by: Ian Welsh on October 29, 2002 06:51 PM

I wouldn't disagree with either of the preceding comments, but I'd be extremely curious as to how they adjust to get a "real" value of software.

I could believe there is no improvement in productivity per line of debugged code, but it is hard to believe there is no improvement in the cost of buying software. I certainly don't buy the argument for packaged software.

In 1987, you could buy the totally useless Windows 1.0 (or possibly useless 2.0 in December). I can't find what it cost, but since it was useless, it doesn't really matter. The first barely usable version of Windows was 3.1, in 1992 and it cost $149.95 list.

Today you can get the vastly more useful (and easier to use!) Linux for free, and if you must have Windows, you can get XP home edition for $249.99. Somewhat crippled as it is, it is still no less than 10 times more functional than 3.1, probably more like 20 times. (XP includes TCP/IP, a browser, file sharing, multitasking, and security, to name a few things that basically didn't exist in Windows in 1987.)

You could look at game software and see the same curve. Development tools. Really pretty much anything standardized.

Custom development is another story, although even there in a lot of areas the tools are so much better that it is hard to believe. Persons of limited experience can build a database reporting application that would have required a serious IT consultancy to do in 1987.

Anyway, I would wonder if their hedonic adjustments are adequate.

Posted by: matthew wilbert on October 29, 2002 07:15 PM

The graph raises several related questions when it comes to software:

- What does the "1996-adjusted value" of today's software mean?

- What does "how much software you get" mean?

- How is the productivity of programmers measured?

Is there anything in the report that casts light on these questions? Without a hint, it is difficult to say much of use...

Posted by: Tom Slee on October 29, 2002 08:47 PM

Barbie says: Programming is hard!

Posted by: Daryl McCullough on October 29, 2002 09:04 PM

Anyone have a link to this "investment in information technology equipment and software" report?

Posted by: on October 29, 2002 09:54 PM

I have no trouble believing this in relation to microcomputer software, at least as regards the Mac. If you compare Word, Excel and Mac OS today to their 1987 versions you'll find incremental improvements. Then compare them to Visicalc, CP/M and the primitive word processors that were around for PCs circa 1980.
For those using DOS-based systems, catch-up continued until the release of Windows 95.
The basic problem is the same as with computation - a text editor might contain a few hundred lines of code but it's a huge improvement over Tippex. Now start with the current version of MS Word and consider upgrading the grammar checker, which is pretty much useless - it would take thousands of lines of code and a huge database to produce something that would still be only marginally more useful.

Posted by: John Quiggin on October 29, 2002 10:10 PM


Almost certainly this valuation judgement is skewed by the prevalence of custom software projects.
Mass-produced software (operating systems, office suites, Photoshop, etc.) is hundreds of times better than its predecessors. How do you value these things? The internet is not possible without the software; networking costs have plummeted with increases in robustness, security, ease of configuration, etc.
The value of Microsoft Windows XP (for most organizations) is greater than that of Linux, which is free, and it is an enormous improvement over Win 3.1 (which may be a hint in figuring out a problem with valuation: Total Cost of Ownership).
On the other hand, custom software is better, but it is largely the product of skilled labor; the tools have only very recently become good, and are only rarely used in most custom environments. I would imagine that projects that do not scale (custom projects) and rely on more and more expensive labor would not see much of a value improvement.
As stated above, Java is a vast improvement in productivity over its predecessors, and Microsoft's .NET is a substantial improvement over Java, mostly in pre-built software components and object-oriented APIs -- which means reusable parts are more accessible to custom applications. When these become more widespread -- and they will, as one of the more recent advancements is giving people better translation of legacy data and systems -- we should see these valuations improve.
Another likely phenomenon is that business intelligence will increasingly be built into software systems -- currently most obvious with something like CRM systems -- and then organizations will settle on more standard (less custom) ways of doing these things, also driving costs down. There is no reason I can think of why this trend will not continue into more specialized fields like restaurant management, and previously unaffordable CRM bits will increasingly be affordable to smaller businesses.

Posted by: theCoach on October 30, 2002 04:16 AM

Most software written is NOT packaged for mass-market sales. It's custom-designed and built for a specific set of situations in a business. For every Microsoft Word in existence, there are twenty or more equally complex custom applications being written (often from scratch rather than as an incremental improvement to an existing product).

Further, the actual writing of code, as someone says above, is the easy part. The hard part, and the most error-prone is deciding what code to write. Packaged software products such as Word have an advantage in this as well. The basic concept behind Word is relatively clear (build me some software that will replace my typewriter, then expand by adding additional functions in the years to come). The basic concepts behind custom software projects are all obscure, and often not fully communicated from those who want to use the software to those who are designing and coding it.

It's not easy, and the productivity tools that are available are for the most part aimed at the easiest part of the job (coding) and not at the most difficult parts (requirements gathering and system design).

Posted by: Chuck Nolan on October 30, 2002 04:50 AM

Part of the problem is that the software world is fractured into a large number of small domains-- all with different productivities, different ROIs. Commercial applications, OS's, utilities and development tools, technical and scientific applications, software maintenance, telecom and web applications, and much more.

Some of these areas didn't exist ten years ago; others have gone through vast changes. It's hard to see how you can average all these things together into a single 'value' number.

Posted by: Matt on October 30, 2002 05:00 AM

I am a computer scientist by trade, but here are my "economic" reasons why productivity *growth* among programmers is quite low:

1) Network effects play a HUGE role. Not just in the language, but also in the tools people use. A much more "productive" language or tool might come along, but it will be ignored due to a lack of third-party support and trained programmers, as well as the need to keep older systems compatible with the new language or tool.

2) The not-invented-here (NIH) syndrome. The market for "libraries" of software building blocks is underdeveloped, meaning wheels are reinvented many, many times every day. There is a reason for this: the risk of a library vendor going under, or being unable to keep up with your changing demands, often makes going it alone worth the cost. In some sense the source code and the programmers' interface for the library (but not necessarily the library itself) are both public goods, and hence a market will undersupply them.

3) Opportunity cost of speed and control. You can easily switch to more productive languages and tools, but you face an opportunity cost in terms of lost execution speed and lost direct control over the hardware. Some call it "machismo," but whatever the reason, programmers have been slow to make this trade.

4) Human resource factors play a significant role in programmer productivity. It is a hard activity, and having the right working environment has a huge effect on productivity, possibly as big or bigger than any technical improvement.

Posted by: Amit Dubey on October 30, 2002 05:44 AM

It's also worth pointing out that, unlike most measures of productivity, the denominator here is dollars rather than hours. For all we know, programmers today could be productive as hell compared to their 1987 counterparts-- but if their pay has gone up accordingly, their "productivity" by this measure is not going to budge.
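A toy calculation (invented numbers, purely illustrative) shows how a dollar-denominated index can sit still even while output per programmer soars:

```python
def real_value_per_dollar(units_of_output: float, wage_bill: float) -> float:
    """Real software 'value' obtained per nominal dollar spent."""
    return units_of_output / wage_bill

# 1987: a programmer produces 100 'units' of software for $50,000.
v_1987 = real_value_per_dollar(100, 50_000)

# 2002: the same programmer produces 300 units -- but now costs $150,000.
v_2002 = real_value_per_dollar(300, 150_000)

print(v_2002 == v_1987)  # True: tripled output, flat value-per-dollar index
```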

Posted by: Paul Zrimsek on October 30, 2002 07:59 AM

Let me be boring for a minute.

Brad DeLong's question seems to have prompted several responses seeking to explain why the measured productivity of programmers has not increased, but that is jumping the gun or counting chickens or crossing bridges -- choose your metaphor.

I can't find a link to the original source, but it seems to me that until we know something about what the graph actually claims to show (that is, until we know what the authors mean by "how much real investment you get for a dollar" as it applies to software), other discussion is moot. Again, I'm not trying to ask "how should we measure programmer productivity?" but the much more mundane question "what did the authors claim in their report?"

Or have I missed something?

Posted by: Tom Slee on October 30, 2002 10:06 AM

For a long time, there have been sensible "hedonic" adjustments to account for the fact that computer hardware gets better every year. I don't know as much about the quality adjustments for software, but in principle adjusting for the quality of software is much, much harder. I am with the group that thinks this says something about measurement rather than (necessarily) something about the world.

Posted by: Steven Berry on October 30, 2002 10:37 AM

As a computer instructor who occasionally interfaces with the Real World (tm), I would hazard a few guesses based on my experience:

1) Roughly 90% of anything anybody does on a computer is typing text (for example, blogs). This doesn't count games. Most word processors haven't advanced all that much, and I might suggest that programs like MS Word are seriously broken and have regressed. Newer e-mail/blog/web/instant-message/etc. programs are needed for new hardware, and sometimes do nifty new things, but for the most part they are adapting a standard human interface.

2) Given half a chance, I'm sure programmers would love to experiment and develop all sorts of fun stuff. But they have to eat, and so they do what they're paid to do, which is usually rather quotidian. New and powerful software is being used for essential but standard tasks, like databases.

3) The computer industry is driven not by the leading edge, but by the number of computers being used. Most people are a bit technophobic, and don't know or care what program is being used as long as their e-mail works and their spreadsheets impress the boss.

4) The speed and power of chips as predicted by Moore's Law has gone into powering larger color monitors, handling larger amounts of data, and more quickly handling huge files (e.g. video). The basic ability to handle digital audio hasn't changed much since before the personal computer made it cheap and easy. I use many programs for this, and the only real differences are in the interface.

5) The personal computer itself is the technological breakthrough, and once the interface was easy enough for your average wage slave and powerful enough to do things other equipment couldn't, the basic productivity levelled off. I notice the graph starts at 1987, a few years after the Mac introduced its GUI and about the time Windows was coming in. This is also the year after the Supercomputing Act of 1986 led to what we now use as the Internet.

Aside: I wonder if that graph factors in vaporware.

Posted by: Dave Romm on October 30, 2002 01:44 PM

The graph is actually how much "value" you get for a constant dollar spent. Basically, you get a lot more in the way of hardware today with not much more in the way of software. My office suite is not substantially different from, and no less expensive than, the one I had in 1996. In fact, I find it very difficult to find much difference between the 1997 Office and today's. Similarly for operating systems. The only difference I do see is that newer programs tend to gobble up more and more memory for substantially smaller increases in user functionality. The problem with these numbers is that at least two of the series should be taken together -- computers and software. Whatever gains are made in value for hardware are held back by the software they run. "Whatever Intel giveth, Microsoft taketh away." The question is: why? The tempting conclusion is that software like operating systems and office suites is monopolistically produced and thus less likely to be innovative.

Posted by: Lawrence on October 30, 2002 01:46 PM

Chuck's comments about the difficulty of user-requirements capture in customized applications hit the core problem with trying to increase software-writing productivity. Think about a coder who knows practically nothing about the user domain and must work off a set of formal specifications produced by some sort of system design process. What are the chances that the formal specifications include all the normally taken-for-granted things known by users but completely foreign to the coder?

Rapid prototyping and early development of testing procedures can help surface some of these unstated assumptions earlier, but the problem is fundamental to all attempts to formally specify reality. We run into the sort of problems that yield debates about AI and the bases of human cognition rather than practical solutions. So it would be surprising if custom software development showed hardware-like progress; that would imply that we had cracked the classical AI problem. I'm not holding my breath.

Posted by: steven postrel on October 30, 2002 04:03 PM

My intuition is that the valuation is skewed. In addition, we are probably facing the same engineering problem as adding a new floor to a building. Is a 100-story building 100 times more valuable than a 1-story building? How much more engineering does it take to build the 100-story building and what does the valuation/floor look like? Again my intuition tells me that this depends on other factors.

Posted by: theCoach on October 30, 2002 08:33 PM


A number of people have been making the point that the measurement might be a bit off. Meanwhile, I've noticed that many with a background in programming have been trying to defend the numbers, assuming they must be right ;)

I was only a wee little 12-year-old in 1987, but I was already a bit of a C hacker, so I'll give my observations between then and now:

First, the tools *have* improved. But many of them are geared to doing things that people never did before.

In 1987, unless you worked on a Mac (or to a lesser extent, an Amiga or an Atari ST), you didn't care about user interfaces. ISVs expected their audiences to be computer literate. Even internal programmers in corporations could expect the users to be highly skilled white-collar workers (i.e. the accounting dept.) whereas now just about everyone has to use computers.

Today, the most widely used tools, RAD tools and GUI builders, are oriented towards making user interfaces. So in some sense, there must have been productivity gains in this task, but maybe the numbers wouldn't reflect this because the task previously wouldn't have been done with as much care and effort. So, obviously, there would be some benefit, but here the numbers wouldn't show it.

There are other useful tools as well: bug-tracking and automatic-testing tools, as well as tools to help with basic software design. I think these have improved dramatically, but for small projects such tools really are overkill. The bigger the software, the more time you'll lose keeping track of bugs, and the more important (financially) designing the software well will be. Now, if the improvements in these tools are correlated with an increase in the proportion of programmers working on "big" projects, you may not see the productivity increase. If this were true, the productivity gains might be "Malthusian." There is another "measurement error" as well: how do you measure the value of bugs? Bug-tracking and automatic-testing software may help programmers make their software more reliable, but again this may not show up in dollars-and-cents figures.

On the other hand, many of these things don't really help with the "guts" of writing code, especially "back-end" things like operating systems and database managers. My first compiler, which I got in 1986, already had the tools which I commonly use today: a "make" program to manage building the software from the source, a version management system, a debugger. All these things still form the basis of programming. There have been improvements, but most of them have been marginal. You could take a Unix C "systems" programmer out of 1986 and plunk him in 2002 without much confusion.

And this gets back to my previous post: for a number of reasons, software professionals are, ironically, *very* conservative about adopting new technologies. It is still hard to find a programming environment that fully supports productivity-enhancing techniques like "literate programming"; programmers routinely choose to work in hard-to-use languages (C and C++) because of backwards compatibility and the perceived benefits of being "close to the hardware"; and standard libraries are created and adopted very slowly, because software shops generally won't use them unless the source code is available, and with a few exceptions (e.g. Microsoft's MFC, C++'s STL, the Free Software Foundation's glibc), people usually don't like releasing the source. But each of these decisions may well be "rational".

There is another problem again: users may choose a new platform because they see a 10x benefit from it. The benefit may not be as big, or may even be negative, for programmers. There was a shift from mainframes -> timesharing -> text-based PCs -> GUI-based PCs. In each case, a lot of software needed to be rewritten, including programmers' tools.

One example: a mainframe programmer in the late '80s noticed how primitive database programming tools were on PCs compared to the state of similar tools on mainframes. He started a company to market a product, PowerBuilder, that vastly improved the productivity of PC database software writers. But if PowerBuilder and competing products like Visual Basic only made PC programmers on par with their mainframe counterparts, was there really an improvement? Or is this another "Malthusian" gain?

Another example: Borland's TurboC was a wonderful environment for writing software. The compiler was fast, so you almost never waited for the computer -- a feat many others still haven't copied. But TurboC was geared to DOS, and when users made the shift to Windows, programmers started switching to Microsoft Visual C/C++, in no small part due to its better support for Windows. In some respects, early versions of Visual C++ weren't as refined as TurboC, but in part to meet the demands of end users, software developers made the change.

So yes, I'd say: there definitely have been improvements in the tools. But many of the improvements were "eaten up" by platform changes and new tasks. Also, I want to reiterate a point Neel made: there is a HUGE difference between the tools (and human-resource practices) used in the best shops vs. those used in the worst shops.

Perhaps, to draw the connection to a point made earlier on this website, the "IT service" industry, which tells you how to re-align your business around computers, is, ironically, grossly underdeveloped when it comes to the most IT-intensive industry of all: programming itself. The building blocks might very well be out there, but to a large extent the INFORMATION about how to use them isn't widely disseminated.

Posted by: Amit Dubey on October 31, 2002 05:36 AM

Brad --

Brooks himself addressed this very problem back in 1986, in his conference paper titled "No Silver Bullet". Recent editions of "The Mythical Man Month" have included it as their last chapter.

In a nutshell, Brooks argues that writing software consists of two parts: 1) Understanding what the problem is and how to tackle it, 2) Encoding the solution. According to Brooks, software tools can increase the efficiency of part 2, but not of part 1. This means that once the tools get really good, programmers will spend most of their time working on part 1, at which point it doesn't really matter anymore how much better they get at implementing.

It's the old "decreasing returns on capital" story, with software tools being the capital.


Posted by: Thomas Blankenhorn on November 1, 2002 12:57 PM

There was an instructive paper by Bob Glass some years ago (in his book on real-time software, if you are looking for it) in which he analyzed software errors Boeing had found after some product had been shipped. Such errors are, of course, extremely expensive to fix, and may be dangerous as well. Many of the errors fell into the category of not completely understanding the problem. Very few of the new programming tools give much help with this problem.

Posted by: Jim Miller on November 1, 2002 03:15 PM

Since I got my first paying job in the software industry in late 1986, this sums up my entire career! Running to stay in place...

John Quiggin above mentions that it takes an enormous amount of work to make incremental improvements in existing software. He's right, and one reason (probably the most important) is that with software, the positives are usually additive, but the negatives are multiplicative.

The benefit of a new feature is the new feature, and it usually doesn't interact greatly with other existing features. The drawbacks (bugs, performance drag, security holes, added complexity of the program) are all things that compound one another. A particular bug, for example, is no big deal -- except that when it occurs in conjunction with another, previously innocuous, bug, the computer crashes. Or security hole A is no big deal by itself, but combined with security hole B from another feature it compromises the entire machine. The extra 200 KB feature X adds isn't such a big deal, except that now the program can't load without paging on a machine with 128 MB of RAM...
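One crude way to formalize that asymmetry (my own sketch, not the commenter's): if each of n features adds roughly constant value, the number of potentially bad pairwise interactions still grows as n choose 2.

```python
def benefit(n: int) -> int:
    # Positives are additive: each feature contributes roughly constant value.
    return n

def interaction_surface(n: int) -> int:
    # Negatives compound: any pair of features can interact badly (n choose 2).
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, benefit(n), interaction_surface(n))
# 10 features -> 45 pairs; 100 -> 4950; 1000 -> 499500:
# value grows linearly while the interaction surface grows quadratically.
```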

Better tools are helping to manage the complexity, but the complexity is still growing exponentially, so the tools really just let us tread water. Otherwise we'd drown.

Posted by: The Other John Hawkins on November 3, 2002 10:02 PM

>> In a nutshell, Brooks argues that writing software consists of two parts: 1) Understanding what the problem is and how to tackle it, 2) Encoding the solution. According to Brooks, software tools can increase the efficiency of part 2, but not of part 1.

And I guess the main point of my posts above is that, quite often, part 2 isn't as efficient as it could be, either. Not everyone uses the best tools available, sometimes frustratingly so (e.g. why do so many people still think garbage collection is always a bad idea? aaargh!!)

Posted by: Amit Dubey on November 4, 2002 04:46 AM

Coming to the party late, but what the hell...

IIRC, Brooks holds that most flaws enter in the design phase of software products, not the coding phase. Software has come a long way since The Mythical Man Month was first published, and in the anniversary edition of the book, Brooks notes many incremental improvements. There's probably a vast and very useful literature on the design process as well, but I doubt that most managers of bespoke business software projects have even the slightest familiarity with it.

Real-life example: until July, I worked for one of the now much-reviled financial services conglomerates. My manager's background was in God-knows-what; he certainly wasn't technical. Par for the course in my career, but most of my other bosses at least made an effort. In a meeting where the team was fretting over deadlines, he proposed that if the project started missing milestones, he'd just grab some time from an Indian outsourcing group our company maintained.

I was surprised. "Doesn't that fly in the face of Brooks' law?" I asked him.

He blinked. "What's that?" he replied, sneering.

Posted by: ScissorsMacGillicutty on November 27, 2002 10:04 AM

Technologists have personalities too.

In the quickening of the material plane, certainly the cpu compression and disk storage speed/shrinkage by volume and cost is explosive, still in Jan 2003. Add to that the FiberOptic/Wireless/Satellite Externalised Systems Explosion.

I say that the human personality is under acceleration to yield to the prime directive of evolution itself.

Confusion, added to beliefs in freedumb sickness, create a barrier to the human genius potential and act as a brake on the evolution of survival aptitude.

Perhaps everyone you ever met is in some form of evolutionary mutiny (deathwish aerobics).

This is a species wide psychical pre-clarity virus inherited from all of the people who came before us and are now laying sacrificially in the collective archeology.

I hope to impress upon you that moore's law is on a collision course with physics, as we now understand physics.... and just as that wall is reached, another door opens.

Can you imagine another Moore's law in a post confusional, post matrix sociology where humans no longer breed to kill by using genetic roulette and we are freed from the viscosity of Feminist dominated monogamy, where the sexual authority is 100% unearned and undeserved.

This is an emotionally suppressive environment, not conducive to team spirit. I do not wish to dwell too long on genius suppressants, but more to alert you to the need for the virtuebios.
It is an evolvingly accurate intention reference system for all human behavior.
An evolving evolution meter if you will.
An Immortalisation Aptitude feedback system that, I believe, will become mandatory and highly desired, as our current modality of breed to kill is, relativistically speaking, a massive para-cultural child molestation or more bluntly, child killing intention habit. This is intrinsically very self limiting.

Technology and 'spirit' are on course to resolve and infuse. I truly believe that my "VirtueBios" is the next phase of Moore's Law.
It is a psychical/social version of the cpu in the sense that humanity can self organize just like at the molecular level.
Our death habit, as a belief system, is draining genius potential from all the "Players".

And it's a curable psychical modality.

Can you relate?

Robert Ray Hedges = Google searchable term
Sedona Az
Fly the Phoenix

Posted by: Robert Hedges on January 11, 2003 12:54 PM