Past and Future of Education

When I was a freshman in high school, I sat chafing at the horribly slow pace of droning lectures on material that I had already read in the assigned textbooks.  I would often find myself nodding off to sleep (teenagers' circadian clocks are time-shifted such that early morning classes are too early) or daydreaming.  I still got near-perfect scores on exams.  I had, after all, read the textbooks cover to cover at the beginning of the term.  In my daydreaming, I often envisioned my ideal university experience.  Mine was always in 'Ox-bridge'-like settings, where tutors sat in cozy wood-paneled rooms with floor-to-ceiling bookshelves, guiding one's reading lists while serving 'tea & biscuits'; lectures were always optional; and neither tutorials nor lectures were part of a set course, but were meant to provide opportunities for learning, to prepare one for comprehensive exams demonstrating competence in the material.  I took the phrase "reading for the exams" literally.  You can imagine my disappointment to learn that university was the same as high school, but with bigger lecture halls.

But I took a different route.  I earned my lower-division general education requirements through part-time attendance at community colleges (occasionally challenging courses by exam), but earned the lion's share of my college credits entirely by comprehensive exams after self-study.  I bought many of my textbooks at the Stanford University bookstore; others I borrowed from my roommate, who had gone to Caltech before Stanford Med School.  I got very decent scores on the exams.  After earning my BS, I was admitted to the Stanford Graduate School as a part-time (Honors Co-op) student, sponsored by my employer, Fairchild Semiconductor, to take or audit classes on semiconductor device theory and fabrication.  (I audited more than I took, as I needed to learn fast for my job more than I needed the credit hours.)  I was looking forward to graduate school at Stanford, having projected onto it my high school vision of what a graduate-level education would be like.  Sadly, graduate school was very like undergraduate, but with smaller lecture halls.

OK, time for a bit of honest self-disclosure.  I was born hard-of-hearing.  I get almost nothing from lectures.  Yes, I wear hearing aids… but they amplify noise just as well as voices.  The background noise of hundreds of other students squirming in their seats partially masks the teacher's voice.  When s/he turns his/her back to write on a board, I can no longer augment my hearing with lip-reading.  Oh, and that sound of nails on a chalkboard that makes one cringe?  That's what ordinary chalk on a chalkboard sounds like when amplified.  What I hear is, "Now, 'ooking a' 'is equat'n, it is obvious 'at mmff..mmm {sskrsksk} and thus we have proo' 'at mmff.n..mmm.  Any questions?"  Although my hearing problem made it harder for me, the lecture method of instruction, given our modern technology and tools, is perhaps the most antiquated and least efficient means of delivering needed information.  Can there be a better way?

But first, let's look back at history to learn why we do this.

In prehistoric times, much of instruction came in two forms: one was oral tradition, the other was direct demonstration.  Studies of hunter-gatherer cultures have documented that a pre-literate society depended upon storytelling to pass down history, culture, and lore.  Such societies also passed on skills by direct demonstration and by correcting the mistakes of younger members of the group.  One could say that the lecture method of instruction is the direct descendant of the campfire story.

But that's not the whole story.  As population density increased, people gathered in ever larger settlements.  With these came the need for a means of communication over both distance and time.  This led to literacy.  But oral culture continued in full flower alongside written accounts.  Further, literacy was far from universal.  It was dependent upon class and, in places, even gender.  Instruction in literacy was private, handed down within families or exclusive institutions.  As cities became civilizations, higher education was often conducted in small group settings, again limited by class and often gender.

During these times, the written corpus was expensive.  The materials upon which texts were written were either time-consuming to manufacture, in limited supply, or both.  Consider the problem for European scholars in the Middle Ages.  Their preferred archival material was parchment, which was sheepskin that had to be laboriously processed.  It was so valuable that texts from the ancient world, priceless to us, were scraped off to provide a surface for religious texts.  These palimpsests occasionally yield historically important texts as modern methods to recover the original writing have become available.  Also consider the means by which one published a new, or republished an old, work.  The handwritten manuscripts had to be hand-copied by calligraphic copyists.  This was manpower-intensive work that could only be done by skilled workers.  It meant that books were horrendously expensive and not easily obtainable.

So, higher education was limited to those who could afford first to be privately tutored to become literate, and then were rich enough to afford either a library of their own, or to travel to that innovative institution of the late Middle Ages, the university.  But even at the university, access to the limited supply of books was effectively metered and controlled.  Their value was so high that they were seldom allowed outside the library walls.  This meant that much of the instruction from the university professors was still given orally, during lectures.  This established a tradition that continues today.

But things did change.  The introduction and rapid evolution of the printing press brought the end of hand-copied manuscripts.  The mass manufacture of paper to replace the far more expensive parchment lowered the cost of the medium.  The combination dramatically increased the availability of books.  This meant that university libraries burgeoned and reading for one's exams became a real activity.  But it did not eliminate the cost and cultural barriers to education.  Enrollment in 'public' schools took money that working-class people simply had limited access to.  Higher education at the universities was out of the question for the vast majority of the working and even the middle classes as they developed, first from expanded mercantile trade and later from the early industrial revolution.

Around three hundred years ago, a new institution began to change all of that.  The grammar or elementary school began to be introduced, increasing literacy rates.  The introduction of the "lending library", wherein one could check out the now lower-cost books, increased access to scholarly (and not-so-scholarly) materials.  Benjamin Franklin said, "The public library is the people's university."  Many people, Franklin included, obtained the equivalent of a university education nearly exclusively by extensive reading.

But somehow, our universities continued their tradition of mass lectures, despite the fact that reading was more effective, both instructionally and in cost.  When our high schools were instituted, they slavishly copied the format.  Tradition dies hard.

Which brings us to today.  A recent innovation in universities is to record lectures and make them available online (MOOCs).  This extends the cost effectiveness of lectures, as more students view them per lecture.  But this is still a lecture, with an even bigger (virtual) lecture hall.

But with technology, the cost of books has come down as well.  Not only is the printed codex, the random-access paper book, so cheap that paperbacks are more often thrown away after reading than stored, we now have the means to provide digital electronic materials that make the nth copy of a book essentially free.  Our textbooks should all be available on our tablets (using a PenTile Matrix high-resolution display, of course!).  Every reference book, every textbook, that one will ever need for school, from elementary through graduate school and beyond, should be instantly available and essentially free to all.  But they're not.

Why aren't they?  Part of the reason is again historical.  While many advanced textbooks are written by university professors, they are published by traditional hardcopy publishers.  Indeed, there is such a cachet attached to hardcopy that people are willing to pay high prices… partly because they have no choice.

How can we fix this?  First, we should recognize that it is a national (if not international) imperative that such materials be freely available, as the new “people’s university”.  Second, we should have our universities work together to create national course curricula and materials for use in digital form.  Digital media, as opposed to the printed word or photograph, can directly meld text, diagrams, photos, video, and even interactive tutorials.  Consider the educational value of simply producing and maintaining these materials by the various university departments involving their students in the endeavor.  These should cover material from early elementary school to graduate school, in nearly all of the basics of a modern education.  This does not mean the end of the specialized text-book.  These will always be with us… and yes their authors should be compensated by the reader.  But the core materials and textbooks should be universally available at no cost to the student.

Third, we need a national (if not international) means of educational measurement and evaluation.  This means a series of comprehensive examinations that are also available free.  They should be proctored at public schools, accessible to all by being local.  It should be possible to earn diplomas, certificates, and degrees entirely by examination.  Admittedly, many professional degrees (e.g., nursing, medicine, etc.) cannot truly be earned entirely by autodidactic means, but a basic undergraduate, and some graduate, liberal arts education in most majors / concentrations should be.

Online discussion fora can take the place of classroom discussion.  Indeed, among many lifelong learners, social media often becomes such (when self-selected to eliminate the anti-social “troll” element).

This does not mean the end of traditional campus life.  Universities will still be needed.  Research graduate degrees, preparing the next generation of scientists and engineers, will still require lab classes and R&D facilities to conduct doctoral work.  Advanced studies in the performing and fine arts will still want and need hands-on instruction.  Many people will still want the on-campus experiences that help turn callow youth into adults.

However, it should be the job of every high school to prepare every student to become an autodidact for life, by using and encouraging these new digital course materials.  Every human being should have access to higher education at any point in their life.


Thinking Big

For the past seventy years, our greatest inventions and innovations have been realized by thinking ever smaller.  No, that's not meant as a criticism, merely a witticism.  Back in the 1950s, Richard Feynman gave a lecture at Caltech entitled "There's Plenty of Room at the Bottom".  In it he extolled the virtues and opportunities of miniaturization, of making things ever smaller, presaging Moore's Law of exponential increase in transistor counts in microelectronic integrated circuits.  He also talked about making miniature machines, challenging his listeners (and readers of his lecture series, of which I have the full set in our library) to build a motor only one sixty-fourth of an inch on a side.  He was far too modest in his challenge.  We long ago achieved motors measured not in fractions of an inch, but in nanometers.

While I don't want to state anything so silly as to prognosticate the end of Moore's Law (too many have done so and been wrong), I do lament that we as a world culture have seemingly abandoned the big.  For decades, our buildings and bridges have merely inched their way to larger sizes, but once upon a time, in the 19th and early 20th centuries, we thought big and built big.  Collectively, we dared to build projects so big that they staggered the imagination.  Sullivan built new skyscrapers.  The Victorians built the Crystal Palace.  Paris built the Eiffel Tower.  All of these stretched and challenged both the imagination and the engineering skills of our best and brightest.

But as I described in my previous essay on the lamentable slowdown of development in the late 20th Century, it seemed to me that we abandoned the 'big' challenges.  Oh, we still build "big".  But they aren't amazingly big and audacious, not like the first Panama Canal was audacious.  With the possible exception of the Chunnel, the tunnel under the English Channel, we just don't seem to think BIG anymore.  Consider: where are the domed cities we dreamed of in the mid-20th Century, the cities floating in the sky, the underwater cities?

So, purely as an exercise in creativity, let's think BIG for a brief moment, throwing out ideas, both good and bad:

Ever watch the awesome power of a thunderstorm?  Ever think about how much energy is being released?  Could we not capture and use some of that energy?  Consider building a chimney tens of thousands of feet high and a thousand feet wide.  When conditions in the atmosphere approach those conducive to generating thunderstorms, when atmospheric instability is high, a contained and sustained mini-thunderstorm will develop inside of it.  The air rising inside the chimney would drive huge wind turbines.  Sized and positioned right, the mini-thunderstorm would be self-sustaining over long periods of time.  There are places in the world where this might be an excellent means of providing renewable energy.  The American South, especially Florida, would seem to fit the bill; imagine dozens of them along the Florida peninsula.
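The numbers here are fun to sketch.  A minimal back-of-envelope estimate of a convective chimney's power, assuming a 10,000-foot chimney, a 1,000-foot diameter, a 5 K temperature excess in the rising column, and a guessed updraft speed and turbine efficiency; every number below is an illustrative assumption of mine, not an engineering figure:

```python
import math

# Back-of-envelope power from a buoyancy-driven chimney.
# All inputs are illustrative assumptions.

g = 9.81            # m/s^2, gravitational acceleration
T_ambient = 300.0   # K, warm surface air (assumption)
rho = 1.15          # kg/m^3, approximate surface air density

H = 10_000 * 0.3048   # chimney height: 10,000 ft in meters (assumption)
D = 1_000 * 0.3048    # chimney diameter: 1,000 ft in meters (assumption)
dT = 5.0              # K, excess temperature of rising air (assumption)
eta = 0.5             # combined turbine/generator efficiency (assumption)
v = 15.0              # m/s, assumed updraft speed at the turbines

A = math.pi * (D / 2) ** 2            # chimney cross-section, m^2

# Buoyancy-driven pressure difference across the column (Boussinesq approximation)
dP = rho * g * H * dT / T_ambient     # Pa

# Power extracted by turbines spanning the cross-section
P = eta * dP * A * v                  # W

print(f"Driving pressure: {dP:.0f} Pa")
print(f"Power: {P / 1e9:.2f} GW")
```

Even with these modest guesses, the result lands in the hundreds of megawatts per chimney, which is why the idea is tempting; the hard part, of course, is building the chimney.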

Ever watch the tides move in and out of a bay?  In some places the amount of water and its speed could easily be harnessed to provide huge amounts of renewable energy.  Already, small-scale tidal power projects have been or soon will be built.  But we should think BIG.  Consider the Sea of Cortez.  There are amazing tides there.  More interestingly, there are several islands midway that nearly block the northern and southern portions from each other.  The water moving here could easily be harnessed.  Imagine huge underwater turbines between these islands, providing power for the growing cities of Mexico.  There are other places in the world where these underwater turbines could be placed.
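For a sense of scale, the standard hydrokinetic formula P = ½ρAv³Cp gives a quick estimate.  The site numbers below (current speed, rotor size, turbine count) are illustrative assumptions of mine, not measured values for the Sea of Cortez:

```python
import math

# Power from a single tidal-current turbine: P = 1/2 * rho * A * v^3 * Cp.
# All site numbers are illustrative assumptions.

rho_sea = 1025.0   # kg/m^3, seawater density
Cp = 0.35          # power coefficient (assumption; the Betz limit is ~0.59)

v = 2.5            # m/s, peak tidal current (assumption)
D = 20.0           # m, rotor diameter (assumption)
A = math.pi * (D / 2) ** 2   # swept area, m^2

P_turbine = 0.5 * rho_sea * A * v ** 3 * Cp   # W per turbine
n_turbines = 100                              # a BIG array (assumption)

print(f"Per turbine: {P_turbine / 1e6:.2f} MW")
print(f"Array:       {n_turbines * P_turbine / 1e6:.0f} MW")
```

Note the v³ term: because power scales with the cube of the current speed, a channel that pinches the flow between islands and doubles the speed yields eight times the power, which is exactly why those mid-gulf islands are interesting.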

For decades, I've daydreamed of how to build floating cities above our already crowded city real estate, places that would probably welcome a bit of shade to reduce the urban heat island effect.  Imagine huge blimp-like bags of hydrogen lifting lace-like lattice structures dotted with lightweight housing.  Anchoring guy wires and cables carrying power, water, and sewage would tie them in place.  Gondolas would provide transportation to and from the floating city.  Imagine living in an apartment thousands of feet above the city, with panoramic views that stretch for miles.  The walls and furniture would be lightweight polymer foam to keep the lifting requirements minimized.  Oh, and before you complain that hydrogen is dangerously flammable, you might want to read up on how HARD the British Royal Flying Corps found it to ignite the hydrogen in German dirigibles during WWI.  And as for the Hindenburg disaster… sabotage is one theory, though the cause remains disputed.  But in the case where no one trusts hydrogen as a lifting gas, we can always punt and use low-pressure heated air.  It would cost more in terms of energy to maintain, but would be completely safe from fire.
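A quick lift budget shows why hydrogen is attractive: at sea level, each cubic meter of hydrogen lifts roughly 1.1 kg.  The apartment mass below is a purely illustrative assumption:

```python
# Lift budget for a hydrogen-supported platform, using sea-level gas
# densities.  The payload mass is an illustrative assumption.

rho_air = 1.225    # kg/m^3, air at sea level
rho_h2 = 0.0899    # kg/m^3, hydrogen at sea level

lift_per_m3 = rho_air - rho_h2   # kg of payload per cubic meter of gas

mass_kg = 50_000   # one foam-built apartment plus its share of lattice (assumption)
volume_needed = mass_kg / lift_per_m3   # m^3 of hydrogen required

print(f"Lift: {lift_per_m3:.2f} kg/m^3")
print(f"Gas volume for {mass_kg / 1000:.0f} tonnes: {volume_needed:,.0f} m^3")
```

That works out to roughly a 44-meter-diameter sphere of gas per 50-tonne unit, and the lift figure shrinks with altitude as the air thins, so the real bags would need to be larger still.  Hot air at, say, 100 °C lifts only about a quarter as much per cubic meter, which is the energy penalty of the "punt" option above.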

So, it's your turn.  Think BIG!!

Stasis Shock

Back in the 1970s, author Alvin Toffler's book Future Shock explained the phenomenon of how people and even whole cultures could undergo 'culture shock' when their society, including its physical culture, changed too quickly.  It was something that folks of my generation discussed as a real issue that might face us in our lives, with projections that change would accelerate.  But in 1980, in a discussion with some college classmates, I came to the opposite conclusion.  We had grown up reading that the world would change at that accelerated rate and had already projected ourselves into that imagined future.  But something was happening that had not been forecast.  The rate of change in the developed nations did NOT accelerate.  In fact, it appeared to have been slowing down.  This slowdown caused great frustration in my generation that the forecast changes, from civil liberties and gay rights to flying cars, were nowhere in sight.

Consider for a moment the changes that had occurred in my grandparents' lifetime.  They were born into a world that saw the first cars on the road, and railroads replaced first by propeller-driven airliners, then jets.  Radio came into their homes, later to become television, first black & white, then color.  They had seen women earn the right to vote.  Their house became air-conditioned, and the windmill that pumped the water from the well was replaced by an electric pump.  They lived through two world wars (or should we now call them simply World War, Phases I & II?), the boom of the '20s, and the bust of the Great Depression.  They saw on their TV rockets take men to the moon.

The generation of my parents saw that Great Depression and that horrible war, which was ended by the beginning of the nuclear age.  They had seen trains replaced by airliners, rockets to the moon, television replace radio, and the telegram replaced by the fax.  They had also seen the fight and eventual victory of civil rights and the second wave of feminism change the social structure of the family and the workplace.  It was as if these two generations had seen too much change… and now, with Reagan taking up residence in the White House, "Morning in America" seemed more like a lazy afternoon naptime to those of my generation.  It seemed as though that great generation was deliberately slowing things down.

We were frustrated and surprised.  I coined a term for it, “Stasis Shock”.

Except for the introduction of the personal computer, which was very much the brainchild of my generation, in fact, of my neighborhood crowd, nothing much changed for decades.  (One of my classmates was a sweet girl named Patti… Patti Jobs, who had a smart older brother named Steve.  Another of my crowd was Don Fernandez.  Don's older brother, Bill, had two friends both named Steve, yes, those two Steves, whom he introduced to each other because they both were into electronics.)

Don't agree?  Consider this: the Wright brothers invented the airplane in 1903.  The first commercial jet was delivered in 1958, only fifty-five years later.  That Boeing 707 isn't much different from the Boeing 747, the mainstay of large airliners.  The 747 came out in 1970.  Consider that for a moment.  The 747 is still in production, not just still flying.  It's worse for small aircraft.  If you go to your small community airport today to take a flight lesson, it's likely you will be flying a small plane designed in the 1950s and very likely built, and still flying, since the early '70s!  Now consider cars.  Except for styling details, I dare say that cars haven't changed much since 1970 either.  We are still using essentially the same transportation system developed nearly 60 years ago, unchanged.

But about ten years ago, I started to sense an acceleration in the pace of change.  More people had cellphones than landlines, and those mobile phones were getting smarter and soon had better, higher-resolution displays (which I had a hand in creating, natch).  We were listening to music and watching video that we had downloaded, rather than on physical media.  Some of the cars on the road were gas/electric hybrids, and some were even pure electrics.  People were using their cellphones not to call for a cab, but to signal for a ride through an app.  Our skies saw increasing numbers of drones, most importantly multirotorcraft.  Many of our crops were genetically modified, though a small handful of neo-Luddites demonized the technology.  On the social front, things were finally changing as well.  Civil rights for LGBT people dramatically improved, with the Supreme Court declaring laws that criminalized being gay unconstitutional in 2003; the first state to allow same-sex marriage did so in 2004.

The pace of change and improvement has accelerated throughout this past decade.  Same-sex marriage is now the law of the land nationwide.  The FAA is developing Next Generation Air Traffic Control based on GPS instead of radar.  Drones are set to change multiple industries.  We will likely see doorstep delivery of packages via rotor drones in the near future.  Larger passenger multirotor drones will likely follow, revolutionizing mid-range personal transportation.  Self-driving cars are already being tested on the road and will soon become the norm.  Those cars will likely be all-electric within a decade or two.  Most people probably won't bother to own a car, but will take advantage of ride-sharing.  Airliners will likely be hybrid turbine/electric.  We will see a resurgence in the manufacture of small planes that will be all-electric and able to fly point to point, parking spot to parking spot, via autopilot tied to a computerized ATC.  Virgin Galactic will take you to the edge of space, while SpaceX will take you to orbital hotels.  Non-electronic articles will be 3D-printed at home just before use.  Medicine will be revolutionized by expert systems tied to wearable health sensors.  Medical scanners and full panel assays will become so inexpensive that you will know your diagnoses before you feel sick.

The Great Stasis has been broken and we will see dramatic and rapid changes in the coming decades.  I’m coming out of Stasis Shock.  How about you?

Is Artificial Intelligence a Threat to Humanity?

Recently, pundits and scientists alike have been wringing their hands about the "threat" posed by Artificial Intelligence, even though we seem to be nowhere near being able to build anything remotely like a super-intelligence.

What I find fascinating is not what is being said by whom, but that those who are talking about this seemingly fail to note that science fiction has been exploring this issue far longer, and more deeply, than the technologists of today.  Somehow, it's as if this topic were totally new, never before discussed.  I find this especially true in the area of ethics.  Seriously, why do we not see any discussion of such concepts as Asimov's Three Laws?

Also, why is this discussion so pessimistic?  Why is it that we don't see the possibilities that AI will be a boon?  Instead, we see articles claiming the machines will "take over", they will make us obsolete, they will self-perpetuate and be selfish, etc.  Why should that be the case?  Have they never read Asimov?  No, they have only seen that twisted Hollywood rip-off version of I, Robot.

Or they react with horror at the thought of super robots and drones on the battlefields of the future… and yet not one of these pundits seems to have ever read Keith Laumer's Bolo series.  Never read Honor of the Regiment?  They haven't read the story of the first fully self-aware BOLO that saved a world by refusing a technically lawful, but unethical, order, knowing that that refusal would trigger a computer worm that would eventually, in an hour or so, "kill" it.

I see a vastly different future in which AI, based on biomimicry of the human (and other species') brain, will be a partner to us, capable of doing things that we can't, like surviving a thousand-year journey to the stars to terraform promising worlds into new homes for humanity.  This is at the heart of my upcoming novel, All the Stars are Suns.

The Naked Brain

In the SciFi novel I'm writing, All the Stars are Suns, set in the future, neuroscience has advanced far enough that we can model and build biomimetic analogs of neural functions, to the point where we can fabricate inorganic artificial brains.  These brains, being biomimetic, would even have human emotions, if we so chose.  Sounds too far-fetched?  Obviously, I don't think so, or I wouldn't be including them in my story.  Although I believe that we are many decades, perhaps even centuries, away from truly fabricating human-like inorganic brains, the state of the art today is perhaps a lot more advanced than you might think.

For example, take a look at this video from NATURE:

I imagine that someday we will be able to convert images like this into wiring diagrams.  No, I’m NOT a transhumanist.  I do NOT believe that this will allow anyone to “upload” their personality and memories to an inorganic copy of themselves.  Instead, I foresee that through imaging many brains we will come to understand the basic functions of the neural nets and model them, even create physical instantiations of them, which will allow us to fabricate sophisticated neural net computers that function much as our brains do.  With experience, they will learn, just as humans learn.  They will be themselves, not carbon copies of us.  And they will have their own quirks, since they will not have had the ongoing learning experiences while their brains rewire themselves as ours do from infancy to adulthood.  They will be “born” already mature, though untutored.  I’m exploring the ramifications of that type of “growing up” in my novel.

(Addendum 10/8/2015:  Here's a paper on the development of a computer simulation of a TINY portion of the neocortex of a young rat's brain.  Note that they needed a supercomputer to run it.  Thank goodness for Moore's Law.  Maybe someday we will be able to run such simulations on computers available on a start-up company's budget.  Someday, neuroengineering will be a 'thing'.)

Ad Astra

I went to SXSW in Austin over the weekend.  I was there mostly because I was invited to speak at a dinner sponsored by Springboard Enterprises, of which I'm an alum, and Avinde, both organizations dedicated to helping women entrepreneurs start and fund their growth businesses.  I had the pleasure of spending the whole of Saturday with the incomparable Amy Millman, the spark plug who keeps Springboard running.  She introduced me to a number of interesting folks, including Mae Jemison, M.D., the visionary behind the 100 Year Starship effort.

For me, this was very interesting and a bit of a flashback to my college years, in that my roommate, Joy Shaffer, M.D., also had an obsession with going to space, and most particularly, to the stars.  Joy later shared with me a masterful analysis she had undertaken to calculate when it would likely be economically and technologically feasible to go to the stars, and another of when it would likely be feasible to terraform Mars.  Both, according to her analysis, would coincidentally become feasible in roughly two hundred years.  That clock started 25 years ago, when Joy did her analysis.

Thus, Dr. Jemison's goal would accelerate Dr. Shaffer's predicted time frame by 75 to 100 years.  Sad to say, I'm not likely to live long enough to see if Dr. Jemison can accomplish that acceleration.  I'm not at all convinced that Dr. Jemison's ideas of how to accomplish this will work.  But I do believe that we will be going to the stars.

I just have very strong opinions on how we will be going there:  Seed Ships.

Science fiction authors love to invent ways around the basic issue that the stars are a long way off.  There are four approaches that typically get talked about: the relativistic rocket, the generation ship, the seed ship, and (magically) faster-than-light travel (FTL).  I live in the real world… and FTL is never going to be feasible.  The concept of sending live humans at very high fractions of C via rocket is also out.  A slow boat, going only a small fraction of C, is very reasonable, but trying to send greenhouses, livestock, and humans on single- or even multigenerational voyages is just asking for the 'demon Murphy' to strike.  A seed ship, where no live beings are sent, only an artificial intelligence and a very sophisticated bio-lab with stored DNA, is another matter.

I imagine such ships being sent out by the hundreds to stars with suitable worlds in the 'Goldilocks' zone, not too far from and not too close to the star.  In my ever-fertile imagination, I see us selecting worlds more like Venus than Mars… ones with "too much" carbon dioxide that can be terraformed with photosynthesis, pulling out the carbon and locking it up in life and fossil life.  I imagine that we would seed those worlds with high-atmospheric floating organisms that would live, grow, reproduce, and die, falling to the surface, releasing oxygen and storing the excess carbon on the ground.  These organisms would be bioengineered specifically for this mission.  In time, hundreds to thousands of years, the atmosphere would become thinner and cooler, allowing life, at some point, to colonize the surface.  At that point, our seed ship would begin populating the surface with new lifeforms, extremophiles at first, and later more conventional single-celled organisms… preparing for the day when complex multicellular ecosystems, including humans, can be established on the surface.

Ad Astra !!

Futuristics: The coming “job riots”

I was blessed by attending Los Altos High School in the early to mid-'70s.  In my Senior year, during the Fall Term, I took a class on Economics.  The class was conducted at what others would likely consider the university level.  In the Spring Term of '75, I was doubly blessed to attend a new and, for a high school, quite unique class: "Futuristics".  The class taught us various methods used to forecast probabilities into the future.  We learned Trend Analysis, the Delphi Method, etc.  We were also exposed to concepts such as "future shock", which is what happens to people who live through times of rapid social, economic, and/or technological change.  We were also taken on field trips to Silicon Valley R&D labs and had guest lectures and demonstrations of new technologies.  Taking these two classes in succession was a great boon to me, readying me for my future career as a Silicon Valley technologist and entrepreneur.

My father having already helped me start up my first "real" tech company was also a big help!

I sincerely believe that such classes should be available to all high school students, but sadly, this is rare.

One of the class assignments in that Futuristics class was to write out possible scenarios using the methods we had been taught.  I combined my recently gained knowledge of economics with the techniques from the Futuristics class to project what was already obviously happening: increases in productivity due to automation.  I drew on what I had learned of robotics and computers from our field trips and from guest lecturers from such notable labs as the Stanford Research Institute (SRI) and Xerox PARC (yes, I too was exposed to the GUI and the "mouse" in the early '70s).  Other economists had long predicted that we would see the work week drop to under 20 hours.  I predicted the opposite.  I wrote that workers who had high-tech skills would be working longer hours while workers without high-tech skills would see progressively worsening employment opportunities.  I foresaw, and still foresee, such severe unemployment that we would see "job riots".  So far, my predictions from 39 years ago are still on track.  We WILL see "job riots".

I’m not the only one who sees this issue coming.  Please take a moment to watch this video, “Humans Need Not Apply”.  It spells out things very well:

After the social upheaval of these "job riots" will come a new social contract in many of the developed countries.  Capitalism will survive just fine, but a new minimum level of acceptable income will become a social right, and new mechanisms will arise for ensuring that those who are capable of working will find employment.  What those mechanisms will be, I can't foresee.  I suspect that different cultures will find their own.  Those of us in cultures that venerate the so-called "protestant work ethic" and "rugged individualism" will likely create quite different social contracts than countries that enshrine more communitarian values.

For an interesting alternative take on the issue, you may wish to read this article:  “How 21st Century Cities Can Avoid 20th Century Detroit’s Fate”

For more on the debate: How will today’s technology change our concept of “work”?