To answer these questions and others that shape the future, let's look at a concrete example of technological advancement and see what it tells us.
Display Panels
In 2007, when I got my first iPhone, I knew I was holding the future in my hands. And when the iPad arrived, it seemed that Apple single-handedly propelled us into the 24th century. But these inventions also depended upon the relentless advancement of technology: capacitive touch panels, software and hardware for multitouch processing, thin display panels, battery technology, architecture for economical power consumption, MEMS, and so many other cool things. We will look at thin display panels for a moment just to get an idea of how technology advances. This will give us a time frame that we can use to understand how fast the future might arrive.
Thinner, brighter display panels that consume less power, clearly necessary for smartphones and tablets, are one invention that took decades to arrive. Let's consider the timeline from conception to real-world commercial availability.
George du Maurier's illustration in Punch, 1879
On December 9, 1878, George du Maurier's sketch of the Telephonoscope appeared in the Punch Almanack for 1879. It showed a window-sized display of video transmitted from a distant source, with people talking to each other across a great distance, like FaceTime. Although it was intended as a spoof of Edison's inventions, it indicates that people were already thinking of this as something they wanted.
Philo Farnsworth and the first television
But I remember our first color TV when I was a kid, and it was quite large, and even had tubes inside. Well, all CRTs have at least one tube, the Cathode Ray Tube it is named for. At some point CRTs were replaced almost entirely by flat panel displays. Did that happen right away?
Not at all. The first flat plasma panel displays were introduced in 1964 at the University of Illinois at Urbana-Champaign. It took 33 more years until the first color plasma panel display was introduced by Fujitsu.
Liquid crystals have been researched since the 1880s, but LCD panels didn't start appearing until 1972, when Westinghouse demonstrated the first active-matrix LCD panel.
Because technology marches on in separate but simultaneous paths, plasma panels were the dominant television flat-panel technology from about 2000 through 2008, when LCD panels finally took more than a 50% share of the flat-panel television market.
Now, companies are producing thin, bright TVs that appear to be bringing us directly into the world of Total Recall, where the walls are just displays. Sharp Electronics is bringing us ultra-thin displays from its factory that builds 10th-generation panels. An ID card has even been shown with an embedded OLED panel presenting a 3D display of the person, activated by RFID. Just like Total Recall.
In fact, the movie is now being remade, in part because its technology is realizable and just doesn't seem so much like the future any more.
Apple iPhone 4
So, to sum it up:
- 161 years ago people first started transmitting images
- 135 years ago people first imagined having a transmitted image display on a wall
- 84 years ago people first demonstrated an all-electronic display
- 67 years ago that television's commercial success began
- 48 years ago people first demonstrated a flat panel display
- 40 years ago companies first started marketing LCD panels
- 29 years ago Seiko introduced the first hand-held TVs
- 20 years ago portable computers first featured flat panel displays
- 16 years ago Fujitsu commercially introduced a 42" plasma display panel
- 9 years ago Kodak and Sanyo introduced the first AMOLED color panel
- 5 years ago Apple introduced the iPhone
My first point is that technological advancement definitely accelerates over time. My second point is that sociological, political, and economic forces can also hold technology back. A third point, not specifically illustrated by the display panel example, is that external requirements can force progress.
Why Technology Accelerates
My theory is that there is a copy effect, a synergy effect, and a forcing effect and together they accelerate technology.
One of the basic principles of technology advancement is that once a technology has been demonstrated, it is only a short time before someone else can duplicate it. This I call the copy effect. Whether it happens because information is stolen, or simply because there are a large number of clever people, is a good question. Either way, people are motivated by the understanding that the advancement is highly desirable.
In 1945, the secrets of the atom bomb were smuggled out of Los Alamos by Klaus Fuchs and Sergeant David Greenglass through Harry Gold, and delivered directly to Julius and Ethel Rosenberg and from them to Anatoly Yakovlev, their Soviet contact. When there is desire, information finds its way out.
Today, information doesn't need to be smuggled. In order to transmit it, all one needs is an internet cafe. There is evidence that information doesn't even need to be encrypted to be disseminated widely. So all it takes is one whistleblower to move technological secrets.
Although it is not about technology per se, the WikiLeaks scandal showed how quickly Bradley Manning and Julian Assange were able to move large amounts of secret information.
Another basic principle of technology advancement, demonstrated admirably by the display panel example, is that technology is created by standing upon the shoulders of those who have come before. I call this the synergy effect, particularly when it is accelerated by the free dissemination of information. In other words, the internet.
Why synergy? With synergy, 2+2=5: the whole is greater than the sum of its parts. When person A discovers something, and person B knows that, it is possible that person B can improve upon it in some way that makes it truly useful.
For instance, the invention of money enabled us to advance beyond a barter system. The invention of electronic exchange of money enabled banks to create commerce on a larger scale. But it wasn't until the invention of point-of-sale systems for transacting commerce, including credit, debit cards, and the systems for reading them, that the promise of electronic commerce became really useful for all people.
A third basic principle guiding progress is that necessity is the mother of invention. Once the telegraph was in common use, the need to convey emotion and intent forced the invention of the telephone. This is the forcing effect.
Many technological inventions have been made in order to gain the upper hand in matters of conflict. The creation of armor emboldened the knights of the Crusades. Attacks by large numbers of people spurred advancements in defense: castles, heavy stone walls, towers, moats, and traps. Advancements in defense in turn forced the creation of new technologies for sieges, such as trebuchets, siege towers, and siege hooks. The American Civil War led to the invention of the Gatling gun and later the machine gun, which was prominently used in World War I. And then came the dawn of the nuclear age, when the atom bomb became the deciding technology that ended World War II.
It continues to this day, with man-in-the-loop systems, precision-guided munitions and bombs, and UAVs.
When you put these three principles together and into the hands of billions of people, it becomes impossible for technology to be held back. And, at some point, the spread of information will reach a maximum limit, where everybody knows everything as soon as it is known. But also notice that some events can simultaneously hold back and push forward technology.
All in all, this is still good news for the future, if we survive it.
Why Technology Gets Held Back
Public sentiment is a very good first reason that technology can get held back. Right now, we seem poised on the brink of new methods of portable energy storage, like fuel cells. But the amount of electricity required to generate enough hydrogen for mass fuel cell adoption is enormous. Where will we get the electricity? One technology that seems almost certain to be able to provide this electricity is nuclear energy.
But such events as Three Mile Island and Chernobyl, and more recently the effect of the March 11, 2011 Tohoku tsunami on the Fukushima nuclear power plant, are turning public sentiment against nuclear power. The dangers associated with the storage of High Level Waste (HLW) such as spent fuel rods are also widely known problems, and their implications for future generations cannot be ignored. This has led to the rejection of the Yucca Mountain facility in Nevada (though it's not over yet), and also to the creation of better HLW storage facilities, such as the Östhammar Forsmark facility in Sweden, which could be completed in 2015.
Political turmoil is a second reason that technology can get held back. As discussed earlier, World War II held back the advancement of television. It also held back jet engines.
Periodically, purges have caused huge destruction of information. The burning of the Library of Alexandria was one example, and it is speculated that plans for mechanical inventions, including perhaps the Antikythera mechanism for predicting astronomical positions, were destroyed accidentally by Julius Caesar's fire in 48 BC. This disrupted scientific progress, since huge stores of knowledge were lost.
When the Qin dynasty ordered the burning of books between 213 and 206 BC, and then ordered more than 460 scholars to be buried alive, they nevertheless decided to keep the works on military technology.
Pressure from economic interests is an excellent third reason that technology can get held back. Existing investments in infrastructure can quickly be obsoleted by disruptive technology. Companies wishing to retain control over a market can buy up invention rights to prevent them from coming to market. Or simply suppress them.
For instance, General Electric engineer Ed Hammer invented the compact fluorescent light (CFL) in 1976, but GE failed to bring this device to market, or to prioritize its research. It is believed that they thought their incandescent light bulb business would be disrupted by such a technology. In reality, they might have owned that market for the many intermediate years before LED light bulbs were introduced. And saved the world plenty of energy in the meanwhile. But they were also selling nuclear reactors, you see.
It isn't a real stretch of the imagination to think that petrochemical energy companies might not want alternative energy sources to come to light. Some of these speculations border on conspiracy theory, but such incidents have certainly happened in the past.
Flying Cars
One of the most common predictions of the future is the flying car. In fact, we have flying machines today, in the form of airplanes. And we have magnetic levitation and induction, used in bullet trains. But to realize the flying car without using the ground effect or a rocket to keep it aloft (both rather a problem for those underneath it) requires something different.
It requires antigravity.
Anti-gravity seems like so much science fiction today, but what would it really entail? We know gravity is one of the four non-contact forces, along with electromagnetism, the strong nuclear force, and the weak nuclear force. In the hypothetical Theory of Everything (ToE), the gravitational force is unified with the other three forces by a single theory that clarifies the origins of all forces.
If force unification can be achieved, then it may be possible to treat gravity like another force. There is indirect experimental evidence that gravity travels in waves, such as the slow orbital decay of binary pulsars, and gravitation is believed to propagate at the speed of light. So, if gravity can be treated like electromagnetism, then perhaps it can be polarized or cancelled.
We always assume that a vacuum is empty, that space is completely devoid of all matter. Gravity waves are interesting because of how they must propagate: through the curvature of space-time itself. This implies that vacuum is not vacuum at all, but is permeated with energy (known as dark energy). In one theory, the Superfluid Vacuum Theory, space is actually made up of a Bose-Einstein Condensate, a dilute gas of weakly-interacting subatomic particles. This theory might be a basis for quantum gravity, which attempts to explain the gravitational force through the quantum interactions between these particles.
The duality of photons, tiny bits of light, as either particles or waves may also testify to the internal workings of space. Since photons can be polarized, it is not a stretch of the imagination to think that gravity can also be polarized, and thus components of gravity that act in a particular direction might be cancelled.
The discovery of dark matter, matter with mass but which doesn't interact with light or any other electromagnetic radiation, shows us that some kinds of matter can exist outside the Standard Model of particle physics, which in turn indicates that we have a lot to learn about physics in general.
Communication Through the Earth
The verification of quantum teleportation shows how communication between two entangled photons can be done. It has been verified through free space over distances of multiple kilometers. However, several problems exist that make the process currently unsuitable for transmitting classical information. First, only a quantum state can be transmitted. Second, because the process also requires a classical side channel, the information is not transported instantly, but is instead limited to the speed of light.
Yet, at the end of the day, a quantum state does get transmitted between the two entangled photons without interacting with the intermediate space. This is clearly evidence for the non-Cartesian connectedness of the fabric of space-time, at least at the quantum level.
While this technology does not accomplish zero-time transmission, it does have the promise of transmitting information from point to point without the possibility of an intermediate interloper. Such a technique is extremely important to secure transmission, and would employ quantum cryptography, in which a shared secret key is derived entirely from the entanglement of quantum states.
Using such a system, you could communicate with a satellite in orbit at arbitrary bandwidths, regardless of whether or not it was on the other side of the planet. And to intercept the information being transmitted, you would have to be at one end or the other. And even then, you couldn't get the information, because it would depend upon highly randomized quantum states, which are kept in sync between the photons at either end.
Perfect for keeping secrets.
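As a rough illustration, here is a toy sketch in Python of that style of entanglement-based key agreement. It is a classical stand-in rather than a real quantum simulation: I assume noiseless, perfectly correlated pairs and no eavesdropper, and a real system would also need basis sifting over an authenticated classical channel plus a test for eavesdropping.

# Toy model only (my construction): each "pair" pre-agrees on one outcome per
# measurement basis, so matching bases always give the same bit, which is all
# we need to illustrate key agreement by correlation.
import random

NUM_PAIRS = 1000
BASES = (0, 1)  # two measurement bases, e.g. rectilinear and diagonal

def make_pair():
    # One pre-assigned outcome per basis; both ends read the same value
    # whenever they happen to measure in the same basis.
    return {basis: random.randint(0, 1) for basis in BASES}

alice_key, bob_key = [], []
for _ in range(NUM_PAIRS):
    pair = make_pair()
    alice_basis = random.choice(BASES)
    bob_basis = random.choice(BASES)
    if alice_basis == bob_basis:
        # Matching bases: the correlated results become shared key bits.
        alice_key.append(pair[alice_basis])
        bob_key.append(pair[bob_basis])
    # Mismatched bases are simply discarded during sifting.

assert alice_key == bob_key
print(f"shared secret bits: {len(alice_key)} out of {NUM_PAIRS} pairs")

Only about half the pairs survive the basis comparison, which is why real systems generate far more photon pairs than the key bits they finally keep.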
To create such an entangled pair of photons, called an Einstein-Podolsky-Rosen (EPR) pair, you would need a source for single photons that operates at room temperature. NASA is sponsoring the creation of such a device.
Hi Mark,
W.r.t. 'antimatter', are you aware of the theory of an entropic relation to gravity?
http://staff.science.uva.nl/~erikv/page20/page20.html
http://en.wikipedia.org/wiki/Erik_Verlinde
My TOE is that the dark matter is disordered (at ordered frequencies) beyond what is measurable due to Planck's constant. Due to the uncertainty principle, we can't measure such directly, only the side-effects. I have a more detailed explanation of my ideas in numerous comments on the following blog page:
http://esr.ibiblio.org/?p=3744#comment-327029
(above and below the linked comment)
I just found your blog today. You may remember I briefly did programming work on Painter X2 and 3.1.
Shelby,
I do remember, though you mostly worked with Tom on the PC angle of the apps. It was work greatly needed and respected.
I will, I'm sure, find your ideas interesting. I am someone who looks at things from a fresh standpoint, since I know a small amount about a lot of different things.
Personally, I don't hold out for the antimatter antigravity theory. It seems to violate certain symmetries and conservation laws. But we can't seem to get enough of it to test that hypothesis.
Annihilation energy could be a really good source of quick, plentiful energy for powering planes and such. Yet also a source of a crazy level of power for weapons. Which is why the military have been studying positron traps and why they have their tabletop terawatt goal.
I'm also not likely to believe that gravity can be polarized, though I do suggest it as a future possibility. And the effect should be measurable.
There are so many ToE theories. For instance, John Gowan likes to call matter an asymmetric, massive, immobile, local, conserved form of light.
It all comes down to what makes up space-time itself. Your theory about dark matter sounds plausible. I'm curious as to what you think dark energy is: the fabric of space-time. If we can tap it, then we might be in for quite a ride.
From your compression of time blog, perhaps the ride is accelerating. Humans don't perceive the exponential function in the early stage.
Fascinating topic. I hope in the future to delve into the math of relativity and try to see if I can develop a more concrete theory. I have rushed some half-baked understanding, but I am not satisfied with that. For now, I have some conceptual, speculative ideas.
I'm consumed with self-funded R&D on a s/w model for relativistic declarative programming with environment state. Every decade or so there is a paradigm shift in programming languages.
On another of your blog pages, I shared a link to a summary of some of my past ponderings, where I speculated (circa 2008 or so) that quantum entanglement may be due to resonance. The 170+ IQ 13 year old Jacob Barnett speculates that space-time may be curved.
1. Reality is different for each observer (perspective). The bug crawling on the ground doesn't know he is on a mountain (the curvature is hidden, i.e. isn't resonant to his Q...see Barnett's point).
2. Signal and noise are filtered relative to resonance. Tesla's insight also factors in, that attenuation in space is scaled by resonance. The wormhole in the space-time manifold may also be resonance.
3. The Shannon-Nyquist sampling theorem tells us that we never know if we have the true signal until we have infinite samples. If we could sample everything, nothing would exist, because the world would be static. It is essential to existence that there be infinite realities. Our perception of time exists, because there is change. If we could measure everything, time and change couldn't exist (from that master observer's perspective). Existence requires a coinductive, irreversible, infinite world. I have also related this more simply to why we can't exist with equal distributions (everything the same color, nothing can be seen), i.e. knowledge then won't exist. Software is the encoding of knowledge, which is why s/w is never static. The master observer of a free market can't exist. These concepts tie into economics, s/w engineering, everything. Very interesting to me.
4. So then order (entropy) is relative to resonance. Entanglement may be the perception of the tip of the iceberg of some order in the dark matter. It is dark to us because our resonance is not tuned to perceive it all. What is dark to some of us might be simultaneously ordered to others of us. So it can be measured by those who are perceiving it, and we measure it by tips of the iceberg (there is the Nyquist aliasing, as quantum particles pop in and out of our resonance band).
5. I speculate that matter could be maximum disorder (trending to maximum). So the dark energy is matter that is ordered from another perspective. The yin-yang that we can't perceive good without the existence of evil.
Perhaps there isn't much substance in my contribution.
As for tapping into the dark matter and polarizing it, I speculate it is an information science problem. Social networks scale very fast. Communication accelerates knowledge exponentially, because the individual brains begin acting like neurons of a larger brain-- like the ant which has a community exo-brain. I tie this back to my work on how to enable independent s/w development to compose. It is all about resonance. The Dunbar number indicates there is a complexity limit to how much each brain can contribute in isolation.
Collaboration is a form of love-- the legacy of Fractal for me. We exist because we are not all the same.
Our progress is accelerating because of the synergistic properties of the factors that make it accelerate. It's quite a circular issue, and now is also a great time to be around.
We don't always work best in isolation.
Consider the shiny stones in the stream. They got there one way or another, we know. Was it the slow smooth erosion of the water passing over and around the stones that did it? Or was it the turbulence of the stones hitting each other?
Developing software alone can be like the slow smoothing effect of the water. It can be millennium-slow. But when you subject that software to a QA cycle, it gets beat on, and polished much much faster.
Developing ideas is also this way.
And spacetime has to be curved, otherwise there wouldn't be a Hubble constant, I suspect. It's just not the kind of curvature that easily sits with our brains.
As I stated, the real question is what bends spacetime locally? What is expended to do this? What is gained and what is lost?
I don't believe that the curvature of spacetime is a "free" thing.
This thing about canceling gravity is really an issue. We can't just walk around with an earth-sized mass to make flying cars. Degenerate matter is just too dangerous to carry around in our pockets.
At least there is hope for communicating through solid matter, though. And the demonstration of this effect indicates that some sort of signal can penetrate solid matter effortlessly.
Well, we already knew that weakly-interacting particles like neutrinos could do the free-space transmission thing. And detector technology is improving.
My crazy (half-baked) idea is that the perception of the order is bending spacetime. The cost is the reduction in degrees-of-freedom due to that perception, i.e. the maximum disorder of matter has to be conserved (1856: the entropy of the universe is trending to a maximum). To exist, one has to trade off other realities whose dependencies are not resonant (which you stated can even be deadly or dangerous). Composing s/w modules is a dependencies problem.
So, if I understand correctly, fighting against entropy is actually putting kinks into the universe, which wants to unkink itself?
DeleteI suppose thats possible.
That entropy trends towards a maximum I never found surprising. Randomness seems to be the natural order of things. A consequence of having so many particles. But there are a few things like quantum tunneling and uncertainty that seem to be at the heart of randomness.
But I never thought of disorder as something that has to be conserved. It just seems to fall out from things like diffusion processes.
Perhaps dark energy contributes to it.
I gained some insight from the painting in your first sentence. Putting it physically, because for our current perception (this body, etc.), the physical world is most real to us.
And is the black hole a shortcut across the kinks?
Perhaps the black hole is a bridge on the macroscale and entanglement on the quantum scale.
As you say, disorder fundamentally relies on the number of actors (and their independence). If universal entropy was not increasing, unpredictable change would eventually stagnate. I posit that predictable change would be equivalent to knowledge stagnating. Cf. the hypothesis I made in my comment further below about the imperfection of humans being an essential ingredient for human creativity and free will. Is there a mathematical representation of knowledge?
Einstein posits that mass curves space. This explains gravitational lensing, sure. But space is curved less by less mass. The effect follows the inverse-square law, F = G*m1*m2/(d*d), so up close, small masses can have a greater effect, but it's not a huge amount. Think - to create 1 g, we need 6 × 10^24 kilograms, so gravitation is a *very* weak effect.
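A quick back-of-the-envelope check (my own numbers: the standard gravitational constant and rough values for the Earth's mass and radius) confirms that figure:

# a = G*M/d^2; the Earth's mass at the Earth's radius works out to about 1 g.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.97e24      # kg, roughly the 6 x 10^24 kg mentioned above
R_EARTH = 6.371e6      # m

print(f"surface gravity: {G * M_EARTH / R_EARTH**2:.2f} m/s^2")  # ~9.8 m/s^2, about 1 g
print(f"pull of 1 kg at 1 m: {G * 1.0 / 1.0**2:.2e} m/s^2")      # ~6.7e-11, utterly negligible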
The consequence of this is that space curves in on itself and forms a singularity where we have black holes, because they are so massive. And some neutron stars might be dangerously close to this as well, depending upon their mass. The gravity gradient near a neutron star would rip normal matter apart, so its protons and electrons could separate from its neutrons - degenerate matter. The charged particles are cast off, since they form a less stable result. Here the universe is heading towards a deadly loss of entropy, since neutron stars are highly regular compared to other matter.
Having seen the TV series "Sliders" I am, of course, aware that it sure would be nice if we could curve space and go somewhere else instantly. They loved to create wormholes with a handheld device.
This is called "instant elsewhere" in the SF literature.
Mathematically, knowledge can be converted to a "subject graph" with nodes that are concepts and arcs that are the relations between them. It's worth a post someday...
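For what it's worth, here is a minimal sketch of that subject graph idea in Python; the concepts and relations are just illustrative placeholders.

from collections import defaultdict

class SubjectGraph:
    """Concepts as nodes; named relations as directed arcs between them."""
    def __init__(self):
        self.arcs = defaultdict(list)   # concept -> [(relation, concept), ...]

    def relate(self, subject, relation, obj):
        self.arcs[subject].append((relation, obj))

    def related(self, subject):
        return self.arcs[subject]

g = SubjectGraph()
g.relate("gravity", "is-a", "fundamental force")
g.relate("gravity", "propagates-as", "waves")
print(g.related("gravity"))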
Some insight on the cut-off frequency between long-distance quantum entanglement and traditional gravity:
http://www.youtube.com/watch?v=yk_Yy6TqgJs&feature=player_detailpage#t=2353s
(jumps to the 39 min point in the video)
See also:
https://groups.google.com/d/msg/scala-debate/vysv97J0xok/l1boM1tvpZgJ
Less abstractly, paradigm-shift the problem set from spacetime to the entropy (information) domain. One possible way to levitate is to connect the sensory perception of the brain to a computer that shifts the reality in space. (I had a comment somewhere on the net about how creating multiple copies of information escapes the spacetime manifold.)
To get the expected result, I suppose ditto for all the other actors in the environment.
So developing a s/w model for relativistic composition of declarative programming and the environment may be applicable to the challenge of polarizing gravity. My point being that gravity is just a perception.
Gravity is a perception that can kill us. If I jump, I will fall and hit the ground.
Descartes aside, the concept of programming the universe is not new, yet I suspect that we are far too naive to accomplish it in my lifetime. I hold out some hope for promising Gedankenexperimente (thought experiments), though.
Multiple copies does indeed violate the conservation laws of our universe as far as we know it. However, once we break conservation, all sorts of things are possible. We are clever enough to think of it, but I suspect we are not clever enough to actually make it happen.
Still, my comment "what does the generation of gravity conserve" is valid.
Transforming the domain of the representation can enable perception of order that was hidden. Order that is present but perceived (measured) as disorder, is disorder relative to that observer. Your clever blog about creating seamless patterns popped into my mind. You mention doing a Fourier transform so you can manipulate the data in a different domain (then transform back). Desired order is achieved that was obscured prior to the transformation. The order was always there, but not relative to the observer who didn't use your transformation. This is a practical example of why I have the idea that the disorder of the dark matter may be harvestable with information technology. Perhaps we already are. I am thinking we have real world examples that measures of order are relative to the observer, because we can potentially achieve higher rates of compression as we add perspective. (the relationship between compression and entropy)
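A small numerical illustration of that point, with made-up data: a weak sine wave buried in noise looks like pure disorder sample by sample, but after a Fourier transform the hidden order stands out as a single spike.

import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = 0.3 * np.sin(2 * np.pi * 50 * t / n)   # weak periodic "order"
noisy = signal + rng.normal(0.0, 1.0, n)        # looks like pure disorder

spectrum = np.abs(np.fft.rfft(noisy))
peak_bin = int(np.argmax(spectrum[1:]) + 1)     # skip the DC component
print(f"dominant frequency bin: {peak_bin}")    # ~50, the hidden frequency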
Yes, I have often looked for transformative solutions, ever since I learned of the Fourier transform and the frequency domain.
But dark matter and dark energy, the fabric of spacetime, definitely are interesting. And I agree that informationally, they may yet be pried open. Particularly because they are difficult to detect and thus measure. And so very few avenues are left.
There are those that posit that spacetime is composed of very small volumes, each of which has a small number of bits of information in it. But some of those bits are privileged and cannot be accessed ordinarily. We just need the keys.
When determining primality, you need to squeeze information out of a number until there is no information left that can render the number composite.
This same method may yet be used in evaluating the fabric of spacetime.
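To make the primality analogy concrete, here is the standard Miller-Rabin test, which repeatedly asks randomly chosen witnesses to expose the number as composite; every witness that fails to do so squeezes out a little more of the room in which compositeness could hide. The round count is an arbitrary choice of mine.

import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # 'a' is a witness: n is definitely composite
    return True            # no witness found: almost certainly prime

print(is_probable_prime(2**61 - 1))   # True, a Mersenne prime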
But here's an interesting question: in the universe energy and many other properties are conserved. Yet gravity seems to emanate from mass (as waves). So, I ask this: what property is lost from mass when gravity emanates from it?
I am aware of the Einsteinian concept that mass bends spacetime. Yet, that bending should take some property to do it. Nothing is for free. So what property is lost from mass when this happens?
Particularly concerning is that mass emanates gravity constantly without end. It's like it sets up a stable resonance in spacetime.
Our models of gravitation tell us what its effects are and accurately predict lensing and exotic things like black holes, yet we still don't really understand its mechanism. The wave model of gravity is only the start.
My sense is that gravity might be treated like light if we can understand its mechanism.
...and mabuhay, dude.
My crazy idea is that mass is giving up its disorder a/k/a degrees-of-freedom, and I am positing that mass is a perception of the observer. I am thinking that perception includes not only the physical senses, but this physical body we use to interact with the physical perception of the universe. Gravity can kill that physical body. Perhaps the information in our brain could be orthogonal to the physical body, say if we could download it into a computer.
Hehe, mabuhay pud kababayan, which translates roughly as "long life to you too, countryman." For Filipinos, it is all about the love. The marriage proposal.
Serendipity of meeting you again after 17 years.
Corrected link for Ipaglalaban Ko.
It seems pretty hopeless for us to transfer information from our minds into a computer directly, since we don't yet understand how brains work. I think neural networks are interesting, but probably about six orders of magnitude too small to be of any usefulness in real thinking at present.
Your thoughts on perception seem to be too Descartes-oriented. I'm not sure, for instance, that I buy that the fact that I can think is central to my existence. I think if I were a rock, then I would still exist, but simply not be aware of it. Yes, trees make sounds when they fall in a forest even if nobody hears them.
And I don't think I'm qualified to profess knowledge of life after death. After all, I'm still alive. Faith and perception might just be two different things for me.
Still, what we perceive is clearly affected by the way we model the world. If our model is not sufficient, then we will always be perceiving in a box, with no way to look outside it.
And that's part and parcel of where ideas come from. And that's also why it's good to try out theories that don't necessarily fit other people's ideas of correctness or conformance.
Keep thinking impossible thoughts, Shelby!
Your upbeat free spirit is so inspirational.
I am thinking of your blog about balancing creativity and rationality. I recognize that we still have to produce in our current reality. I think creative people desire to stretch their dynamic headroom to the extremes on the continuum. And very successful people additionally try to experience the gradations on that continuum, focus to complete goals, and not just oscillate between extremes. I believe there is no perfect balance; each (unique) person adds something to collective knowledge and creativity. For example, you've alluded in your blogs that Tom Hedges' brilliant rationality was necessary for Fractal to ship products.
I wrote a response to "Why the Future Doesn't Need Us" by Bill Joy, who quoted Ray Kurzweil, who quoted Theodore Kaczynski (the Unabomber).
The collective creativity of mankind continually outstrips the number of programs that could ever be written, because humans leverage the computer to increase the collective knowledge of the human race. For example, my external memory is Google. The human mind is not in isolation. It is social and alive, and the combined neurons of all humans alive is always increasing. The rate of communication and number of connections on the human network is increasing exponentially by leveraging the increasing capabilities of the machines.
In my theory, existence is being able to perceive and interact with order (in some dimension, e.g. spacetime), i.e. order exists for the actor that can perceive it (a mutually resonant phenomenon). Some might say that the rock and the computer do not have truly sentient perceptions because they lack free will. If ever we do succeed in programming a computer to emulate the way we humans network knowledge, creativity, and free will, then I posit that we will have created another human, with imperfections and thus adding to the pool of humans. I suspect there is a mathematical basis to the claim that the imperfections of humans are an absolutely necessary component of the human creativity process. Perfection would be static-- a rock.
Thus as knowledge increases, perhaps humans might be able to design machines such that the human sensory perception may feel and think it is still in a body, but the actual physical location and actions may be transformed (not necessarily even in the same location or informational dimension). Then maybe we "program the universe" only in the virtual space (a new model of our existence). What happens in our current physical world may become much less relevant (since we aren't perceiving and interacting with it as much). Perhaps we can make a duplicate copy in the virtual space. I am thinking of people who spend much of their time immersed in online games or Facebook. As those outlets become more realistic, many more might fall in. Is it a good outcome? In virtual reality, I could see with two eyes again. And I can choose. Free will.
Bill Joy is a killjoy because of his half-empty attitude. I met him a few times when I was going to Berkeley. A hard worker, for sure.
It seems that the force of will does define life, and someday perhaps a computer will be alive, but I doubt I'll be alive to see it. However, as I have noted, progress and technology accelerate.
Perfection is something that I probably wouldn't enjoy in any way. It's not human, for one thing. But that doesn't stop me from fixing mistakes.
Virtual space is interesting; for instance, an old friend, David Taylor, is very active in Second Life. But there's enough to do in the blogosphere and in social media to keep me away from that - which I view as an online gaming experience. But imagine selling property online. Might be some money to be made there.
I see my kids become immersed in the online experience, and I also realize that things are much different now from the way they were when I was a kid riding my bicycle to the library (now largely obsolete) and listening to records (now replaced by an iPhone hooked up to near-field monitors).
But things largely stay the same, nonetheless. Same patterns, different toys.
Social media like Tumblr and DeviantArt become the new malls that teens go to and meet online friends.
Runtime program behavior is normally deterministic (repeatable, non-random) relative to the program code as the observer, but I argue not relative to an observer which is the semantic model in the programmer's mind. A bug is an arbitrary, indeterminate, deterministic condition relative to the program code, but I argue it is a random aliasing error artifact for the programmer. (my entire argument is a bit more convincing)
I am writing a document about declarative programming (and proposing a more declarative state model) that posits this phenomenon is due to the inability of the programmer to build a model (in his mind) of all code paths, because the programmer can't test them all (the halting problem).
I am not expecting you to agree with that terse summary. I will come back to provide a link.
Code does tend to be deterministic, although access to things like clocks to seed the random number generator tends to confuse things a bit. Of course, I do that on purpose quite often.
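For instance, a trivial example of my own: the same few lines are repeatable with a fixed seed and effectively unrepeatable when seeded from the clock.

import random, time

random.seed(42)                 # fixed seed: identical output on every run
print([random.randint(0, 9) for _ in range(5)])

random.seed(time.time_ns())     # clock seed: different output on every run
print([random.randint(0, 9) for _ in range(5)])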
A bug is indeed a perceptual mismatch, though.
Sometimes it helps me to think on a meta-level when it comes to code. But usually it's that I need to write a program to write the program, etc.
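A throwaway example of what I mean, with a made-up helper name: generate the source of a specialized function, then compile and call it.

def make_power_fn(exponent):
    # Build the source text for a specialized function, then execute it.
    source = (
        f"def power_{exponent}(x):\n"
        f"    return x ** {exponent}\n"
    )
    namespace = {}
    exec(source, namespace)
    return namespace[f"power_{exponent}"]

cube = make_power_fn(3)
print(cube(4))   # 64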
When it comes to proving that there are no bugs, I do see the value in that. But usually this is impractical on real-world-size problems. I'm sure Phil Wadler would have a few things to say about that. He was always a rhetorical genius.
I composed a rough draft. My crazy ideas ;)
A recursive semantic meta-model to increase "declarativity" seems intuitively applicable, because unbounded recursion is Turing-complete and thus is a model of the irreversibility of time and universal entropy (the environment).
My idea for the definition of "proving there are less bugs" (i.e. declarativity) in the real world context is to increase the resonance between the intended semantics and the actual semantics, i.e. eliminating ordering, duplication (and higher-level semantic) dependencies in the unintended (opaque) semantics.
Phil Wadler had an insightful comment immediately below mine on a related topic.
I am referring to the recursion "a program that programs a program" in metaprogramming.
Our prior discussion was essentially incorporating the coinductive type Bottom (of all the type hierarchy in subtyping), which I have now proved is the model of our Universe.
Math proof that God (an outside observer of the universe) is consistent with our reality.
Final proof:
https://github.com/ceylon/ceylon-spec/issues/186#issuecomment-9145391
Initial proofs start here:
https://github.com/ceylon/ceylon-spec/issues/186#issuecomment-9124932
Correction. Should prove that there is a coinductive categorical dual to the inductive Axiom of Infinity:
https://groups.google.com/d/msg/scala-debate/vysv97J0xok/7WXkMPvjLI4J