Ever since the early 20th century, when primitive analog computers were built to compute firing solutions for naval gunnery and to improve bombing accuracy, computing machinery has been used for weaponry. The trend has only accelerated into the 21st century, and it has become an international competition.
Once upon a time
I had an early gift for mathematics and understanding three-dimensional form. When I was 16 or so, I helped my dad understand and then solve specific problems in spherical trigonometry. It eventually became clear to me that I was helping him verify circuitry specifically designed for suborbital mechanics: inertial guidance around the earth. Later I found out in those years he was working on the Poseidon SLBM for Lockheed, so, without completely understanding it, I was actually working on weaponized computation.
This is the period of my life when I learned about the geoid: the specific shape of the earth, which is largely an oblate ellipsoid. The exact shape depends upon gravitation, and thus upon mass concentrations (mascons). Lately, the gravitational envelope of the moon, caused by its own mascons, has been an issue for lunar orbiters.
At that point in history, rocket science was quite detailed and encompassed several specialized areas of knowledge, many of which were aided by increasingly complex calculations. But there have been other fields that couldn't have advanced, where specific problems couldn't be solved, without the advances in computation. Ironically, some of the basic advances in computation we enjoy today owe their very existence to those problems. Consider this amazing article that details the first 25 years or so of the supercomputing initiatives at Lawrence Livermore National Laboratory.
Bombs
Throughout our computing history, computation has been harnessed to aid our defense by helping us create ever more powerful weapons. During the Manhattan Project at Los Alamos, Stanley Frankel and Eldred Nelson organized the T-5 hand-computing group: a calculator farm populated with Marchant, Friden, and Monroe calculators, with the wives of the physicists entering data on them. This group was arranged into an array to provide one of the first parallel computation designs, using Frankel's elegant breakdown of the computation into simpler, more robust calculations. Richard Feynman, a future Nobel prize winner, learned to fix the mechanical calculators himself so the computation could go on unabated by the huge time sink of sending them back to the factory for repair.
I was fortunate enough to be able to talk with Feynman when I was at Caltech, and we discussed group T-5, quantum theory, and how my old friend Derrick Lehmer was blacklisted for having a Russian wife. He told me that Stanley Frankel was also blacklisted. Also, I found 20-digit Friden calculators particularly useful for my computational purposes when I was a junior in High School.
The hunger for computation continued when Edward Teller began his work on the Super, a bomb based on thermonuclear fusion. This led John von Neumann, when he became aware of the ENIAC project, to suggest that the complex computations required to properly understand thermonuclear fusion could be carried out on one of the world's first electronic computers.
Codebreaking
In the history of warfare, codebreaking has proven itself to be of primary strategic importance. It turns out that this problem is perfectly suited to solution using computers.
One of the most important first steps in this area was taken at Bletchley Park in Britain during World War II. There, in 1939, Alan Turing designed the Bombe, an early electromechanical machine built specifically to break the Enigma cipher by recovering the daily settings used by the German Enigma machines.
This effort required huge amounts of work and yielded several key pieces of strategic intelligence that helped turn the tide of the war against the Nazis.
The mathematical analysis of codes and encrypted information is the science of cryptanalysis, and the work on it is never-ending. At the National Security Agency's Multiprogram Research Facility in Oak Ridge, Tennessee, hundreds of scientists and mathematicians work to construct faster and faster computers for cryptanalysis. And of course there are other special projects.
That seems like it would be an interesting place to work. Except there's no sign on the door. Well, this is to be expected since security is literally their middle name!
And the NSA's passion for modeling people has recently been highlighted by Edward Snowden's leaks of a slide set concerning the NSA's metadata-collecting priorities. And those slides could look so much better!
Passwords
In the modern day, hackers have become a huge problem for national and corporate security. This is partly because of recent advances in password cracking.
The first and most important advance came when RockYou.com was hacked with an SQL injection attack and 32 million passwords (14.3 million of them unique) were posted online. With a corpus like this, password crackers were suddenly able to substantially hone their playbooks to target the keyspaces that contain the most likely passwords.
A keyspace can be something like "a series of up to 8 digits", "a word of up to seven characters followed by some digits", or even "a capitalized dictionary word with stylish letter substitutions". It was surprising how much of the RockYou password list could be compressed into keyspaces that restricted the search space considerably. And that made it possible to crack passwords much faster.
Popular fads like the stylish substitution of "i" by "1" or "e" by "3" were revealed to be exceptionally common.
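To make this concrete, here is a minimal Python sketch of how a "dictionary word plus stylish substitutions plus digits" keyspace might be enumerated. The word list and substitution map below are purely illustrative; real cracking tools use far larger dictionaries and rule sets.

```python
from itertools import product

# Illustrative "stylish" substitutions of the kind the RockYou list made famous.
SUBSTITUTIONS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def variants(word):
    """Yield every substitution variant of a word (e.g. 'password' -> 'p@ssw0rd')."""
    choices = [SUBSTITUTIONS.get(c, c) for c in word.lower()]
    for combo in product(*choices):
        yield "".join(combo)

def keyspace(words, max_digits=2):
    """Enumerate a keyspace: (capitalized) word, substitutions, short digit suffix."""
    for word in words:
        for v in variants(word):
            for base in (v, v.capitalize()):
                yield base                      # the bare word
                for n in range(10 ** max_digits):
                    yield f"{base}{n}"          # the word followed by digits

# A tiny illustrative run: print the first ten candidates for two common words.
for i, candidate in enumerate(keyspace(["password", "monkey"])):
    if i == 10:
        break
    print(candidate)
```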
Another advance in password cracking exploits the fact that passwords are usually not stored or transmitted in plaintext form. Instead, a hashing function is used to obfuscate them, and often only the hashed form is kept. So, in 1980, a clever computer security professor named Martin Hellman published a time-memory trade-off technique that vastly sped up the process of password cracking: precompute a table of the hash codes for an entire keyspace, and then, when you obtain a hash code, simply look it up in the table.
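Here is a simplified sketch of that precomputed-table idea. (Hellman's actual construction, and the later rainbow tables, use hash chains to trade memory for time; the unsalted MD5 and the tiny 4-digit-PIN keyspace below are just for illustration.)

```python
import hashlib

def md5_hex(password: str) -> str:
    """Unsalted MD5, as used by many older password stores."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def build_table(candidates):
    """Precompute a hash -> password mapping for every candidate in a keyspace."""
    return {md5_hex(p): p for p in candidates}

# Tiny illustrative keyspace: all 4-digit PINs.
table = build_table(f"{n:04d}" for n in range(10_000))

# Later, a leaked hash is "cracked" with a single dictionary lookup.
leaked_hash = md5_hex("2468")
print(table.get(leaked_hash))   # -> 2468
```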
But the advent of super-fast computers means that it is possible to compute billions of cryptographic hashes per second, allowing the password cracker to iterate through an entire keyspace in minutes to hours.
This is enabled by the original design of the commonly used hashing functions, like SHA, MD5, and the DES-based crypt: they were all designed to be exceptionally efficient (and therefore quick) to compute.
So password crackers have written GPU-enabled parallel implementations of the hashing functions. These run on exceptionally fast GPUs like the AMD Radeon series and the NVIDIA Tesla series.
To combat this, companies have started sending their passwords through thousands of iterations of the hashing function, which dramatically increases the time required to crack them. But really this only means that more computation is required.
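A minimal sketch of that kind of key stretching follows. Real schemes (PBKDF2, bcrypt, scrypt) also mix in a per-user salt, as shown; the iteration count and the password are illustrative.

```python
import hashlib
import os

def stretched_hash(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Iterate SHA-256 so that each guess costs an attacker ~iterations hashes."""
    digest = salt + password.encode("utf-8")
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

salt = os.urandom(16)
print(stretched_hash("correct horse", salt).hex())

# The standard library offers the real thing: PBKDF2 with HMAC-SHA256.
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
print(stored.hex())
```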
The Internet
Many attacks on internet infrastructure and on targeted sites depend upon massively parallel capabilities. In particular, hackers often use Distributed Denial of Service (DDoS) attacks to bring down perceived opponents, employing an array of thousands of computers, called a botnet, to access a web site simultaneously and overload its capabilities.
Distributed computing is an emerging technology that depends directly on the Internet. Various problems can be split into clean pieces and solved by independent computation. These include peaceful projects such as the spatial analysis of the shape of proteins (folding@home), the search for direct gravitational wave emissions from spinning neutron stars (Einstein@home), the analysis of radio telescope data for extraterrestrial signals (SETI@home), and the search for ever larger Mersenne prime numbers (GIMPS).
But hackers have been using distributed computing not only for attacks; they have also been using it for password cracking, since cryptanalysis parallelizes just as cleanly.
Exascale weapons
Recently it has been suggested that high-performance computing has become a strategic weapon. This is not surprising at all, given how much computing gets devoted to the task of password cracking. Now the speculation is that, with China's Tianhe-2 supercomputer, weaponized computing is poised to move up to the exascale. Tianhe-2 is capable of 33.86 petaflops, less than a factor of 30 from the exascale. Most believe that exascale computing will arrive around 2018.
High-performance computing (HPC) has continually been used for weapons research. A high percentage of the most powerful supercomputers over the past decade are to be found at Livermore, Los Alamos, and Oak Ridge.
Whereas HPC has traditionally been aimed at floating-point operations (where real numbers are modeled and used for the bulk of the computation), the focus of password cracking is integer operations. For this reason, GPUs are typically preferred, because modern general-purpose GPUs are capable of integer operations and are massively parallel. The AMD Radeon HD 7990, for instance, has 4096 shaders. A shader is a scalar arithmetic unit that can be programmed to perform a variety of integer or floating-point operations. Because a GPU comes on a single card, this represents an incredibly dense ability to compute. The HD 7990 achieves 7.78 teraflops while drawing about 375W of power.
So it's not out of the question to amass a system with thousands of GPUs to achieve exascale computing capability.
I feel it is ironic that China has built their fastest computer using Intel Xeon Phi processors. With around 60 cores in each, the Xeon Phi packs about 1.2 teraflops of compute power per chip! And it is a lower-power product than other Xeon processors, at about 4.25 gigaflops/watt. The AMD Radeon HD 7990, on the other hand, has been measured at 20.75 gigaflops/watt. This is because shaders are much scaled down from a full CPU.
What is the purpose?
Taking a step back, I think a few questions should be asked about computation in general. What should computation be used for? Why does it exist? Why did we invent it?
If you stand back and think about it, computation has only one purpose. This is to extend human capabilities; it allows us to do things we could not do before. It stands right next to other machines and artifices of mankind. Cars were developed to provide personal transportation, to allow us to go places quicker than we could go using our own two feet. Looms were invented so we could make cloth much faster and more efficiently than using a hand process, like knitting. Telescopes were invented so we could see farther than we could with our own two eyes.
Similarly, computation exists so we can extend the capabilities of our own brains. Working out a problem with pencil and paper can only go so far. When the problems get large, we need help. We needed help when it came to cracking the Enigma cipher. We needed help when it came to computing the fission cross-section of uranium. Computation was instantly weaponized as a product of necessity and the requirements of survival. But defense somehow crossed over into offensive capabilities.
With the Enigma, we were behind and trying to catch up. With the A-bomb, we were trying to get there before they did. Do our motivations always have to be about survival?
And where is it leading?
It's good that computation has come out from under the veil of weapons research. But the ramifications for society are huge. Since the mobile revolution, we take problems that any of us might encounter in real life and build an app for them. So computation continues to extend our capabilities in a way that fulfills some need. Computation has become commonplace and workaday.
When I see a kid learn to multiply by memorizing a table of products, I begin to wonder whether these capabilities are really needed, given the ubiquity of the computation we can hold in our hands. Many things taught in school seem useless, like cursive writing. Why memorize historical dates when we can just look them up in Wikipedia? It's better to learn why something happened than when.
More and more, I feel that we should be teaching kids how to access and understand the knowledge that is always at their fingertips. And when so much of their lives is spent looking at an iPad, I feel that kids should be taught social interaction and be given more time to play, exercising their bodies.
It is because knowledge is so easy to access that teaching priorities must change. There should be more emphasis on the understanding of basic concepts and less emphasis on memorization. In the future, much of our memories and histories are going to be kept in the cloud.
Fundamentally, it becomes increasingly important to teach creativity, because access to knowledge is not enough. We must also learn what to do with the knowledge and how to make advancements. The best advancements are made by standing on the shoulders of others. But without understanding how things interrelate, without basic reasoning skills, the access to knowledge is pointless.
Hi Mark, been a while since I commented, and I recently created my own blog. I remember we discussed Kurzweil's technological singularity in past comments on your blog. Taleb's Antifragility math and my explanation of simulated annealing seem to refute Kurzweil's theory:
http://unheresy.com/Information%20Is%20Alive.html#Knowledge_Anneals
The informational theme is relevant to this blog article on computation and also timely with the information revolution I see just ahead, e.g. 3D printing. I have refined my unified theory of the universe tying in the concepts of information theory. The matter as continuum concept may have ramifications on quantum computing.
http://unheresy.com/The%20Universe.html#Entropic_derivation
Criticism is welcome as it is still a work in progress.
Hope your health management is optimistic. My health had significantly improved in May and June but relapsed in July. I have the sensation in my head that I may have a tumor, but perhaps it is the neuropathy. I'm afraid to go in for an MRI, and preferring to focus on some short-term accomplishments while trying to increase my exercise and antioxidants. I think I am resigned to overdose of sleeping pills if ever my condition becomes too debilitating or painful.
I tend to agree that knowledge creation is the product of the accretion of knowledge and the result of random processes of cross-association. The realization of the cross-applicability of one concept into another field is exactly one of the methods that has served me well over the years.
Top-down knowledge creation is at best an academic exercise in knowledge tabulation. A corollary to this in the Internet era, is manifest in the need for different methods of education that are less dependent upon memorization and more dependent upon understanding knowledge frameworks and the basic structure of things. And, of course understanding *how* to access information. But let's look at a simple example. Though I am obviously a proponent of guesstimation and the innate ability to calculate, I cannot help but wonder exactly how relevant the teaching of a multiplication table actually is, when smartphones are ubiquitous and can help us calculate. Computers are, by their very nature, machines that extend our capabilities.
To build computers that can think like we do is an incredibly complex task. There is a *lot* more to human brains than fast parallel pattern matching. So I would tend to agree that Kurzweil is wrong in his prediction. But the relentless doggedness of human enterprise and problem-solving suggests that we will keep on trying to simulate the human brain. And achieve a better fit for cognitive computation in steps. Perhaps each idea takes us one step closer to that goal.
Significant problems in cognition still exist and challenge us. Even the massive increase in compute power, to the exascale, hasn't helped our modeling of cognition. Straight compute power won't help on its own. The illusion that it will is the typical flaw of the Heinleins and Asimovs of the world. What if human thought is simply ill-fitted to a GPU? We do have a good sense that human thought must be extremely parallel.
It's hard to imagine that we will tackle cognition-on-computer when a computer still can't understand the average bit of text or even do translation properly.
It's amusing that you would focus on 3D printing. Clearly a revolution is afoot there. Miniaturize it and pretty soon you're making reality. But in this vision of progressing from 3D printing to reality, there is the troubling problem in moving downscale to the nanotech level. It is encouraging (unless you're Bill Joy, who seems to believe that nanotech will end us) that we are making excellent progress with MEMS technology.
Take care of yourself. If you do have a tumor it's better to know. I have personal knowledge of people who took alternative paths (from the conventional medicine approach) and wound up regretting it before dying. Sadly, I also have personal knowledge of people who took the conventional medicine approach and the cure simply killed them. But there are fewer of these.
--Mark
Taleb's one sentence reply was he understands the point about probing opaque systems. Reminds me of Tom Hedges' one sentence directives :) I assume Taleb meant that in order for knowledge creation to be antifragile, it must be opaque from the top-down. His math and charts quantify this effect:
http://longplayer.org/what/whatelse/letters.php
Even if computers can duplicate our cognition, my reasoning is they won't be able to outpace the collective creation of knowledge, because it is inherently a chaotic-fitness, opaque process that can't be accelerated from the top down. Someone with a 200 IQ couldn't optimize the daily decisions for 6 billion people. Nature is distributed for anti-fragility. In terms of my theory of the universe, there are infinite frequencies, thus even a 200 IQ mind can't compute all the individual futures.
We can gain from top-down organization of systems, yet if they become too large their myopia and rigidity is a fragility risk. For example, closed source is much more efficient for highly creative production on local scale such as for GUIs where open source stumbles. Closed to open source is the gradient between what a focused individual or small group can see clearly and what the large group can anneal more optimally over the long-term with more chaotic fitness.
I believe edumacation (hehe) will move away from top-down instructed to aided (roaming teachers in a computer lab perhaps or an online accessible teacher) self-directed autodidactism employing a computer. When new economy jobs proliferate (robotics, nanotech, informational manufacturing, etc), the top-down education system will be seen as insufficient by the youth. Some youth already find the education system to be irrelevant for other reasons, but I wonder how they might be stimulated by exploring self-directed knowledge:
http://esr.ibiblio.org/?p=4987
The current 20-somethings have racked up the largest college debts ever predominantly on degrees that have not prepared them for the new economy that comes after the 2016 implosion of the global socialism sovereign debt bubble. Tangentially, I'm now more certain of this timing, due to understanding from Martin Armstrong that the current "deadcat" bounce in the USA economy is due to international capital inflows from Europeans and Chinese escaping the collapse or stagnation of their economies. As capital flows, speculators follow to form a speculative bubble. Most of the new jobs have been $13/hr in leisure, or $24/hr in socialized healthcare. Ultimately Obamacare will be about rationing (e.g. in the UK elderly go thirsty in nursing homes), as this is how socialism always dies due to a lack of funding or resources (c.f. Nazi Germany). The 30 year declining interest rate bubble has bottomed and now potential home buyers rush to lockin a mortgage thus furthering the bounce of the economy, but high interest rates will kill the global economy and shakeout the top-down socialism inefficiencies. Asia will be #1 (overall economy) as we come out of the ashes, because it has the lowest socialism spending (lowest government share of GDP averaging less than 25% including China versus 50+% for developed world). I believe the USA hitech sector will still be #1.
Not that I am against caring for people, it is just it has to paid for, so I would favor a more antifragile local funding where the community can better match its revenue to spending (variance of affluence of communities). Local governments can sell bonds, but they can't print money so after the muni defaults circa 2017, investors will be much more wary about buttering them with money. Also interest rates will be much higher thus limiting servicing the loan from Peter by borrowing more from Paul.
Do you know of any format and application that allows recording educational videos as objects on a timeline, so it is possible to edit the video socially? Think of open sourcing the videos on youtube. If not, this is one of the applications I plan to write once I get my new computer language operational.
I thought you might be interested in my work-in-progress thoughts about the universe, because you've written much about transforms into the frequency domain and I'm proposing to view the universe as emerging from that domain to the time domain, which raises the implication that any finite spacetime period can't be 100% independent. Thus it can perhaps explain entanglement as a frequency domain effect.
Yeah we really need to 3D print at nanoscale to be able to get for example the strength and smoothness of machined steel. Or more broadly as you put it, to create reality from Planck's scale up.
I am eager to finish my computer language and start coding in it. I think this coming revolution requires a step from C/C++/C#/Java/ObjectiveC that is long overdue if we look at the evolution of languages every decade or so.
Thanks for your concern. Yeah compare the outcomes of Lance Armstrong and Steve Jobs. I'm conflicted whether to maximize the quality time I have now or rushing into potentially debilitating operations and drugs. I am still able to be athletic at times in spite of head and body aches. And May/June I was pain-free and my athleticism was returning. My life is currently not well structured financially (e.g. no health insurance) and socially for returning to the USA (haven't been there since 2006). I suppose I can place higher odds on it just being neuropathy (which is incurable any way unless they can cure the cause and sometimes it goes into remission), yet I know the likely cause is high number strain HPV is carcinogenic within 5 - 10 years from the 2006 infection. I read they can't even test for these high strains especially in males. I am going to finish up on some work projects, then probably access modern medicine in a developed country. Interim I'm increasing athleticism as much as tolerable and increasing antioxidants and iodine (Kelp). CoQ10 is awesome for instant energy and regeneration on the outside of the cell. Recently started Bilberry + Lutein for my eyes. Too early to draw correlations since there are so many variables and changes, but seemed to cause some powerful effects, not all of which were initially positive. Btw, bloodtest in November 2012 said my viral load (lymphocytes) was too high but tested negative for typical STDs including Syphilis. Did show I had dengue in the past, but was never hospitalized for it. I had never in my life been heavier than 166 lbs and typically had difficulty maintaining above 160 lbs, yet now I am 181 lbs (a lot of muscle but also fat belly).
With reference to the fragility of knowledge creation: Invention is not as fragile as one might think (and it depends upon creativity, I would assert). Once something is demonstrated, then it doesn't really take all that long for someone else to accomplish the same thing, regardless of whether or not they know how it was done. This is because human brains are unique and each person seeks out a slightly different path to the goal. And it also is because they *know* it has been done.
Dogged pursuit always reaches the goal. Remember Edison and his quest for a robust light bulb filament. He tried thousands of materials. Eventually he arrived at carbon fiber. He wasn't pursuing something that had been demonstrated, but it does illustrate that point. This is corroborated in Taleb's "Tinkering Bottom-Up, Broad Design" slide.
Bottom-up vs. top-down systems are akin to matrix management vs. a command-reporting structure. Results come much faster and with greater quality when they are organized right. Matrix-managed processes are simply less fragile. Microsoft's recent reorg is an example of following Apple on this score. But reorgs can be as traumatic as mergers.
Editable YouTube courseware? With a little organization it could work. Open Source is not very well organized. There is no replacement for cross-pollination of ideas and direct discussion. If you want to make something happen, you can write the code but you won't generate the users. It takes organization, a plan, etc.
I am pleased that you are onto the concept of creativity as a quantifiable process. This is in tune with my thinking, of course.
Watch your health. If you have a tumor, you must look after it. Blueberries are an excellent antioxidant. Ginger is also good, but the blood-thinning agents are not. Try Zinaxin.
By open source in the context of video courseware, I mean the highest-level of the content isn't discarded by the lossy video file format, e.g. storing text as ASCII instead of pixels. Or in Painter-speak, don't flatten the layers.
My plans for open source mean lowering costs and increasing speed-to-market by sharing non-proprietary *MODULES* that cross-pollinate orthogonal software. In theory, modularity mitigates some of the cross-organization requirement. Theory and practice can diverge though. Programmers need to be paid for sharing modules.
We build proprietary technology and markets on top of the shared core, e.g. we build on top of the OS.
Discovered white tongue (possibly candida which might explain the difficulty swallowing), yet another sign that my immune system is overstressed.
http://articles.mercola.com/sites/articles/archive/2013/07/14/adrenal-testing.aspx
I need to focus in on it soon and with these sorts of experts in the developed countries. Reading the following gives me encouragement to seek out experts:
http://articles.mercola.com/sites/articles/archive/2013/07/13/burzynski-cancer-film.aspx
Invention is not fragile, because invention requires bottom-up. A single top-down superior brain would see invention as opaque because omniscience can't exist because computational power is not the only factor in creativity. Rather creativity is part serendipity because it is chaotic fitness to a dynamic world that has an increasing entropy. To be omniscient, you would have to be the entire world. This ties into my statement near the end of my universal theory, where I claim that an infinite speed-of-light would collapse reality to an infinitesimal point in spacetime, because future and past would be the same. We need friction in time, else nothing can exist. Omniscience is the antithesis of existential.
Our discussion caused me to add the following to my blog article.
"The knowledge creation process is opaque to a single top-down perspective of the universe because to be omniscient would require that the transmission of change in the universe would propagate instantly to the top-down observer, i.e. the speed-of-light would need to be infinite. But an infinite speed-of-light would collapse past and future into an infinitesimal point in spacetime— omniscient is the antithesis of existential. In order for anything to exist in the universe, there must be friction-in-time so change must propagate through resistance to change— mass. The non-uniform mass distribution of the universe is mutually causal with oscillation, which is why the universe emerges from the frequency domain. Uniform distribution of mass would be no contrast and nothing would exist. Taleb's antifragility can be conceptualized as lack of breaking resistance to variance."
Code: so much is developed by people who just wish to have total control. But the opposite side of that coin is that it's so much easier to write when so many modules are already written: libraries. This is why libraries exist, and to me they are sometimes called APIs.
When developing, we modularize also, and this can result in multiple levels of APIs.
So many people believe that all information should be free and available, no matter the subject. And it's the idealistic pie-in-the-sky of the coders who wish they could develop something. But that's off-subject.
Let's see why pure development focus can't solve the application problem.
Development is hard, and requires thought. Real work. And, as mentioned above and so many times before, it requires incremental improvement that takes a huge amount of work and thought that integrates over time into a finished product.
But what many developers don't realize until too late is that the *users* actually make the product (here "make" does not mean "write", of course). The real refinement comes in the churning, turbulent waters of actual use and bug reporting. That's what you need to institutionalize: the beta cycles.
Without it, coding is just, well, coding. Coding doesn't develop into products without the users.
The ecosystem of coding by itself is really something that will continue to change until we understand the real process of development, the entire cycle. I learned early that pure intelligence-driven coding is really just spinning wheels. When we build products for ourselves, their application to the real world is incredibly limited.
Because omniscience is not scalable. It's really an urban legend. Nothing can exist in and of itself when it comes to development. Usefulness is shared and something the rest of the users contribute to the developer, in a sense. To put it in modern terms, the understanding and applicability to workflow is crowdsourced.
That being said, helping people code is an art, a worthy pursuit. But it can get sticky when developers believe their hard work is worth something. And when ideas are assignable to larger corporations via patents, it can get even more sticky.
The most amazing thing is that the users don't get their share. Some users contribute to understanding workflow and usability more than others. I know because I used to hire them. This is why your system needs to incorporate the beta cycle in its remuneration model.
Resistance to change: when an application gets used by enough users, then there can be a real problem when the UIs and the workflow changes. Photoshop has consistently made this mistake. Eventually a system must fail because it becomes too large to change. Like Windows.
BTW did you see that $900M charge M'soft had to write off because of excess inventory of the "Surface with Windows RT"? That's called consumer-market failure. Now they had better shore up their enterprise or the long slide down is going to be a LOT faster.
BYOD, Google apps, even Apple's own iWork are putting pressure on their enterprise division. But even worse, the mobile revolution is going to be the wooden stake in their heart if they don't watch out. The prices of office units will come down, thanks to the App Store and its egalitarian pricing. Scalability has just ended for Microsoft. They're screwed.
I entirely concur with all your points. One of my key epiphanies (in theory) is the modules have to get smaller and finer grained so they become orthogonal to the increasing entropy of the users. This requires category theory and pure functions. That is why I am creating a new language. I think I finally settled on the grammar. And the category theory aspect is playing well with the fold comprehension (FROM expr DO expr IN expr) that makes it programmer friendly.
http://copute.com/dev/docs/Copute/ref/std/Functor.copu
http://copute.com/dev/docs/Copute/ref/std/Applicative.copu
Functors appear to be your answer to C++ templates. I always thought templates were kind of ugly, really. But I understand the necessity. Still, mapping one type into another is actually kind of tricky. Even something simple, like mapping integers into floats, has its problems.
The best use of this capability I have found is in the use of hoppers. Hoppers are a data structure for maintaining the "n" best items when doing a search. Here the word "item" means an abstracted object (perhaps candidates for the solution to some sub-problem). And the operation "best" can vary quite significantly, and there can be several "best" measures even for only one class of "item".
So a hopper is like a bag with some measure of bestness, and with no need to keep every item around. Usually a hopper doesn't need to keep a large number of items, which means its implementation is exceedingly simple: usually nothing more than an insertion sort.
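Here is a rough Python sketch of a hopper along those lines; the class name, capacity, and scoring function are illustrative rather than the original implementation.

```python
import bisect

class Hopper:
    """Keep the n best items seen so far, ordered by a score (higher is better)."""

    def __init__(self, capacity, key):
        self.capacity = capacity
        self.key = key            # function mapping an item to its "bestness"
        self._scored = []         # sorted list of (score, item), worst first

    def offer(self, item):
        """Insert the item only if it ranks among the n best seen so far."""
        entry = (self.key(item), item)
        if len(self._scored) < self.capacity:
            bisect.insort(self._scored, entry)   # a simple insertion-sort step
        elif entry[0] > self._scored[0][0]:      # better than the current worst?
            self._scored.pop(0)
            bisect.insort(self._scored, entry)

    def best(self):
        """Return the retained items, best first."""
        return [item for _, item in reversed(self._scored)]

# Usage: keep the 3 candidates closest to a target value of 42.
h = Hopper(capacity=3, key=lambda x: -abs(x - 42))
for x in [7, 40, 55, 41, 100, 43, 2]:
    h.offer(x)
print(h.best())   # -> [43, 41, 40]
```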
Hoppers appear to be a fuzzy-logic approach to abstraction?
Category theory identifies mappings (morphisms) between types (objects of the category) that obey some rules that define a category (homomorphism, etc). For example, monoid abstracts any type that has a 0 (identity) point and an inductive structure, e.g. integers start at 0 and the append operation is to add 1. The identity point of a list (singly-linked queue) is the empty collection, and the prepend operation is to insert the item (of type T) at head and set tail = preexisting head. Note integers are not type parameterized, but lists are, e.g. List<T> (a.k.a. List[T]) where T is the type of the items in the collection. Monoid abstracts and modularizes this category of types. Functor and Applicative are categories that abstract parameterized types, allowing the functions that operate on unparameterized types to automatically (no boilerplate) operate on parameterized types. This sounds like a trivial accomplishment, but I explained it eliminates boilerplate and more saliently prevents a combinatorial explosion of overloaded operators. Even more saliently, this is about higher-kinded modularity. Note the 'Sub' in the linked code examples in my prior post is really a higher-kinded type. I made a summary post. I read that C++ templates can do higher-kinded typing, but it is apparently very convoluted.
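For readers who want a concrete (if very loose) picture of what Functor buys you, here is a Python sketch of lifting one plain function over different parameterized containers through a single generic fmap. Python has no higher-kinded types, so this only gestures at the modularity described above; the names are illustrative and this is not Copute code.

```python
from functools import singledispatch
from typing import Callable

@singledispatch
def fmap(container, f: Callable):
    """Lift a plain function over a container; each container type registers once."""
    raise TypeError(f"no Functor instance for {type(container)}")

@fmap.register
def _(container: list, f: Callable) -> list:
    return [f(x) for x in container]

@fmap.register
def _(container: tuple, f: Callable) -> tuple:
    return tuple(f(x) for x in container)

increment = lambda x: x + 1          # written once, for plain values only
print(fmap([1, 2, 3], increment))    # -> [2, 3, 4]
print(fmap((1.5, 2.5), increment))   # -> (2.5, 3.5)
```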
The named tuple as a category. Some relevant comments about the Rule of Least Power. Also note that fuzzy logic typing or dynamic typing (a.k.a. unityping) requires exceptions, which kill concurrency, composition, and determinacy. Local state kills concurrency, composition, and determinacy. Lack of typing kills the category.
I am talking about completely changing the way we write APIs for GUIs for example. With pure functions (eliminating local state...try to compose code that sets callbacks), we don't call a function to set an event callback. We invert the logic, and the event is passed into a pure function which returns the state of the GUI based on the inputs.
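As a rough illustration of that inversion (in Python rather than Copute, with made-up state fields and event names): the GUI is described by a single pure function from (previous state, event) to next state, and the event loop simply folds incoming events over it.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GuiState:
    """Immutable GUI state: the pure function returns a new one instead of mutating."""
    text: str = ""
    button_enabled: bool = False

def update(state: GuiState, event: tuple) -> GuiState:
    """Pure function: (previous state, event) -> next state. No callbacks, no local state."""
    kind, payload = event
    if kind == "typed":
        return replace(state, text=payload, button_enabled=bool(payload))
    if kind == "clear":
        return GuiState()
    return state  # unknown events leave the state unchanged

# The event loop (not shown) would just fold incoming events over update().
state = GuiState()
for event in [("typed", "hello"), ("clear", None), ("typed", "hi")]:
    state = update(state, event)
print(state)   # GuiState(text='hi', button_enabled=True)
```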
Ah, here are the fugly, verbose C++ templates for higher kinds.
Back to your point about inertia, I am visualizing (even users, e.g. a corporation or a group of hackers) being able to bolt different UIs and functionality onto apps. The boundary between the app and the modules will become blurred. I want to turn up the pressure you allude to by orders of magnitude.
Also the discipline of pure functions and category theory increases security, c.f. Joe-E.
Hoppers are necessary in heuristic algorithms. They can be used to quickly screen data before performing a more time-consuming examination of candidates.
I understand your concept of pure functions. I often use them in event handling myself. And this does cut down on code. But does having the minimum amount of code actually result in the easiest code to maintain? Opacity (from the point of view of a programmer trying to figure something out) is not necessarily a function of writing the minimum amount of code, I would assert.
I do have a little angel on my shoulder that often tells me that I should common-up two similar bits of code. But this sometimes results in issues clouding later understanding. Pure orthogonality was the goal, but opacity to inspection might easily be the result.
Security in the sense that: the less code there is, the easier it will be to prove it correct. This seems reasonable. But when straightforward code is expressed in an unnecessarily orthogonal manner, the number of interactions goes up exponentially.
Rather than reply in theory, I will get back with you when I have some code bases in this new paradigm and can report some real world experience. You may be correct, but I am putting considerable effort into attempting to achieve both increased orthogonality and greater clarity of high-level semantics. Category theory was one of the key insights (while avoiding what makes Haskell obtuse), as well as the new way of thinking about the "for comprehension". I have done a little bit of thinking about using continuations for high-level clarity when inverting the event logic.
Some more reasons for increased security in this paradigm (anything the programmer can do accidentally is a potential security hole):
1. Pure functions can't create (obscured) local state.
2. Foldable can't buffer overrun.
3. Category theory laws constrain the logic of the category.
The following blog post shows that Microsoft's chart pattern is going into terminal decline, which concurs with what you are seeing from the technical and largesse-inertia perspective:
http://armstrongeconomics.com/2013/07/11/microsoft-why-nasa-dumped-windows-why-skype-must-be-dumped/
When he says "when cycle goes up", he is referring to the influx of capital rushing into the dollar and USA now, which will end 2015.75. His Pi model (3141 days ≈ 8.6 yrs x 3.14 ≈ 26 yrs x 3.14 ≈ 78 yrs x 3.14 ≈ 224 yrs) has never been wrong since 1987. Those cycles have predicted all major events in history.
This isn't nonsense. I've studied this model. I will blog about it later when I have time.
http://armstrongeconomics.com/2013/07/12/where-will-capital-go-after-2015-75/
http://armstrongeconomics.com/2013/07/13/dow-the-future-2/
http://armstrongeconomics.com/2013/07/19/major-v-minor-what-is-the-difference/
http://armstrongeconomics.com/2013/07/19/ecm-872013-turning-point/
http://armstrongeconomics.com/2013/07/22/why-the-world-economic-must-collapse-this-is-important/
http://armstrongeconomics.com/2013/07/18/france-the-224-year-cycle/
Charts do not explain Microsoft's issues. They have to do with the disruption of the PC base with a mobile non-Intel base, and the burning desire for Microsoft to achieve Apple-like profits.
Their reorg, as traumatic as a merger, is being done simply to put Microsoft into a functional organization like Apple. So people can work on both desktop and gadgets within the same division, rather than competing. This is like module theory. Don't duplicate effort.
Their really big problem is this: they were structured as a software monopoly, and so they could afford to book their revenues based on sell-in to the channel. It was really hubris, and not a suggested approach, if you listen to the FASB. It allowed them the sheer glee of booking revenues which corresponded to shipping holographic stickers while posting nearly pure profit.
But the fate of Windows Vista and Windows 8 should have turned their hubris around.
Instead they posted unrealistically large quarters, forestalling the effects of the post-PC contraction.
This singular error led them to ship millions of Surface RT units into the channel. The $900M charge they wrote off in the most recent quarter is simply the difference in value of those units after dropping their price.
They should have perfected their products and ecosystem before attempting to compete. They should have hired better designers.
Perhaps they should rebrand the Surface RT as the Xpad, more in line with their successful Xbox. Windows means less and less to the consumer these days.
Microsoft's stock price chart pattern peaked circa 1998-9 after an exponential ramp. Microsoft is dying for one main reason-- the internet arrived. The internet (and the open source movement that created it) broke the chains of Microsoft's monopoly in numerous facets. You are describing the inertia that Microsoft attained from being nurtured by a monopoly. The chart pattern reflects this disruption.
The Cathedral and the Bazaar (open source bible) was published in 1999, by the same open source guy who (and culture which) helped to create the internet in numerous ways.
Eminent Domains: The First Time I Changed History
World Without Web
A world without “ESR”
Apple is also being disrupted by the same open source phenomenon, because Apple's model is too exclusive and margin focused. Today I noticed the Samsung 5.8 i9152 dual-SIM, 5.8" screen for $450 retail in the Philippines. Four times faster CPU, 3x more memory, 50% more screen, battery, and camera than the HTC Desire V I purchased one year ago at the same price. Plus a front-facing camera. Dual-SIM phones are very popular here because of the prepaid model and 3 major carriers. Apple has nothing for me. Users want big screens, because their smart phone is their main computing device. The tablet is waning; users want the flexible mobility of pocket-size.
If I achieve my goals, the language I am creating can help accelerate users getting more dynamic choice than these top-down organizations can keep under their walled gardens.
Apple is a design company. You've appreciated their quality focus ever since I worked for you in 1993. But there is a bigger picture revolution going on.
I hope you get this paradigm shift and jump on it. The future is Asia (with its low levels of government spending and thus raw freedom, e.g. prepaid telcoms in Philippines), the billions, and openness. The West and shiny shrink-wrapped model are dying (along with socialism).
The $25 Raspberry Pi has sold 1.5 million units in less than a year. Runs Linux.
Anyway, we probably have more to agree on when we stick with technological discussion.
I am extremely excited about the disruption underway. I hope I can live long enough to see the changes over the next decade or two.
Asia: Hey, good luck with that.
The surviving ecosystem will show who disrupts who. Some ramp-ups are unsustainable, as evidenced by China's manufacturing slowdown. You can only cannibalize things so far before it begins to cause loss of muscle mass. Then it will be billions of people in trouble.
DeleteThe same people who created open source were predicting ultrabooks would succeed, in the form of a dumb computer with everything else on the web. But, quite the opposite, apps have dominated user experience nonetheless, pointing to native code and requiring computers to get more and more powerful. The revolution is that it can move into tablets and handsets with touchscreens.
Cloud computing and cloud data are growing fast, but they still have a long way to go. Trust in cloud infrastructure will be hard to earn in today's hacking climate.
The good news is that people will still build native apps, and egalitarian app store ecosystems are supporting the small developer and lowering the barriers to success for all of us. This will make a huge market for any tools that rapidly speed up development on the dominant ecosystem. If you look at time spent in apps, money spent on apps, internet usage, and developer profit, the dominant ecosystem is still iOS by far. I just can't imagine what all those Android users are doing with their phones. And they still come with shovelware!
We agree on many of the details, perhaps I have different perspective on the big picture.
China's capital infrastructure and export mono-economy (coupled with the huge levels of debt that has sustained their imbalanced model) is imploding. But unlike the West, their debt and imbalances aren't an institutionalized unfunded liability to a huge demographic bubble of boomers who are near the end of their productive life. As China's *PRIVATE* sector (banks) imbalances (and state run enterprises that are Communist party corruption) implode they retain a working age population and very low levels of direct government spending as a percentage of GDP (true for all developing nations, sort the linked table by "Government Spending"). When the 2016 - 2020 global economic implosion (which will be much worse than 2008 and government+central banks won't be able to contain it) wipes out the unproductive fat globally, the West will retain a massive liability of unproductive old people and not enough productive youth to pay for it (due to ubiquitous use of birth control pills starting from teenage and a portion of the 40 million abortions globally per year). Thus the West can't easily reset and cast off socialism, instead it will be a battle of attrition and capital flight (to the developing world) to avoid predatory taxation. China has a demographic inflection point coming too nearer to 2030, but in theory they can soften the impact (since they have much lower level of institutionalized promises to the elderly) with bidirectional Asia migration with the rest (other than Japan and perhaps Thailand and S.Korea) of which doesn't have this demographic problem. The coming economic implosion is likely to break up (some of) the control of the Communist party and a huge pent up potential of China could lurch forward after 2020 or so. Japan is nearing the bottom of their 23.5 year decline (declines have a duration of 26 years in the Pi model) and are now devaluing their currency to reset their external value in the global economy to make them competitive again.
Although the Chinese are (currently) more copy cats than creative originators (Chinese babies apparently don't have the gag reflex thus the social conformance may be genetic[1]), I read their uptake of robotics is accelerating. The USA will remain the technological and creative leader (due to our cultural affinity for independence and origination) as quality not quantity matters amongst the youth for this, but will occur outside the increasingly stifled larger corporations (which have too much top-down inertia) which are targets for the government to strangle with taxes (see Apple mentioned in linked article) due to the huge liability of people in the USA who can't compete and contribute (young and old). So Asia has better population-wide growth economics (more freedom to adjust), although they won't be the pointed technological originators. Samsung didn't originate as much as they copied with increased freedom to do so, e.g. when they saw Apple's iPad, they realized they could readily put a bigger screen on a smartphone. Apple should release a big screen smartphone asap. I read they are studying it. In my estimation, they better just do it immediately.
[1] I can cite the research if requested. SE USA indians (I'm a few % Cherokee) migrated from Europe thus are not Asian.
Let me make an (obvious) suggestion for Apple, if they aren't already working on it. The future is seamless computing between the mobile smartphone or wearable and the increased CPU, big screen, and I/O devices of the desktop. Seamless, *instantaneous* docking (we don't want to wait to dock to perform some task while stationary, e.g. at the airport lounge, then go mobile again when boarding is announced) requires hardware and s/w integration-- Apple's forte.
But the problem is that public venues are going to want to invest in docking stations for the majority platform (which globally is Android at 71% versus 23% for iOS). Apple could formulate a strategy to retain high margins while also interoperating. Google is headed this direction with the Chrome desktop OS. Openness is like a good virus-- it forces openness.
The key guy of open source (Eric S. Raymond) that I linked to upthread has been predicting the above docking future, so it isn't accurate to claim that the open source movement was all about dumb X terminals. HTML5 was apparently seeded by Ian Hickson, Daniel Glazman (worked on XUL, XBL), and David Hyatt-- the latter two come from Apple (the prior two I have butted heads with in the past because I never believed HTML was the solution for everything and I predicted more creative openness).
Android just passed iOS in web traffic. Android's demographics are less sophisticated (spend less on apps, and I think they probably install fewer apps too). A filipina in Hong Kong tells me recently to install Viber so she can call me for free, and she does spend on calling cards so in theory she would buy an app. Need and awareness of possibilities, along with income is rising in the developing world. So far most of these Android users in the developing world only need social networking and communication. But this is changing rapidly and Android will grow with that shift. Apple is minority represented. My anecdotal experience is that owning an iPhone is seen more as a status symbol in Asia.
I am hoping to make an impact on uptake of Android apps later this year as I enter the app market. I have some ideas for essential apps that many will download. For example, the standard phone book and message app for Android is lacking many features and even has big O problems (slows to molasses on my Desire V with 1000+ messages in the Inbox). Stay tuned... remember my Coolpage.com had a million users between 1998 and 2001 (confirmed 334,000 simultaneous websites by Altavista search and download.com reports 700,000+ cumulative downloads). Remember Jeff Stock (he and Lars from Treehouse consulted for Fractal briefly)? I paid him $30,000 in 2001 (when I was in eye surgeries) to code the Objects window in CoolPage (took him a week or so). He had a big O bug and I sent it back to him, and he didn't resolve it. I dug into the code and within an hour had changed one line of his code to fix the problem.
Big screens on smartphones are needed because, if for no other reason, many of us can't type on the small keys in portrait orientation on a 4" screen. I say make the screen as big as will still fit in a pocket. 5.5" to 6"? I need my smartphone to be portable, because for example it also serves as my music player in the gym (and I can communicate between barbell sets).
I am estimating that Apple needs an Android+iOS docking station to preemptively prevent being locked out of the public docking station wave.
To give you an idea of how unsophisticated the developing market is, yet how fast it is developing, I tested a high school graduate 19 year old filipina on typing (for doing outsourcing) and she had 40 typos for 9000 words in my Flying Cars english language (with many words she doesn't know the meaning of) blog article. I told her to practice if she wants a job. But she is a reasonably fast typist and conversant with her Android, facebook, and other social networking apps. This girl is just a year or less from becoming more sophisticated. She had just quit a job working as a saleslady in Chinese goods store earning $2.50 per day for 14 hours. She could easily earn $10 per day taking random online oursourced typing jobs once she improves her error rate.
Back on this blog topic...
Mark wrote:
"But there have been other fields that couldn't have advanced, where specific problems couldn't be solved, without the advances in computation."
For your readers: the Four Color Theorem, I think, was the first major math proof that required a computer.
Interestingly this relates to dependent-typing in computer programming.