Jonathan D. Lettvin 愚公移山
JERRY LETTVIN MEMORIAL POEM
Science vs. Religion? Why?
I have a belief. I believe that living things invest energy in predicting outcomes from preconditions. This entails observing the world, discovering repeating events, extracting value, forming a mythology to capture this process, identifying common abstractions with other processes, forming a science and, finally, sharing the mythology and science with others. Science is nothing more than a way of predicting outcomes from preconditions efficiently. I am often shocked at the animosity between the mythology and science communities. You NEED mythology to bind together observations for which a proper science has not yet been successfully formed. However, science is a belief system too; and one which does not undermine religion if seen as a value-add for the individual and group practitioner. We need to value mythology for its continued capture of not-yet-scientific observation by magic. We need to value science for its partial success at codifying the ability to predict mechanically without invoking magic. Mythology and science are both necessary. I find it sad that the communities fight over which has the better gift.
on Parsey McParseface (a critique of machine-driven semantic style)
Much so dislike I order algorithm required very sentences. Ub fact, U an agaubst ebfircenebt. Write I ? as will I : nevermore ∃ poetry and longer no I am 愚公移山 merely but /dev/borg.
Just in case you are inclined to prefer PMcP, here it is in ordinary text.
I so very much dislike algorithm required sentence order. In fact I am against enforcement. I will write as I will else nevermore there exists poetry and I am no longer the foolish old man who moved the mountain, but merely a borg device.
Text from earlier home page
Development Process 
Core/library programming is a six-phase process:
- Express a new idea in code and test that it works.
- Improve the expression so it can be used reliably.
- Put an interface on it so others can use it.
- Integrate it into an application where it is useful.
- Harden it against anticipated edge and corner cases.
- Harden it against unanticipated operational failure in the field.
Unicode Normalization and Rendering
Currently I am deeply interested in how to normalize, compose, and render glyphs from Unicode codepoints into small-footprint rectangles.
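As a minimal sketch of the normalization step, Python's standard unicodedata module composes and decomposes codepoint sequences; the strings below are illustrative, not taken from the author's work:

```python
# Unicode normalization: NFC composes combining sequences into single
# codepoints where possible; NFD decomposes them again.
import unicodedata

decomposed = "e\u0301"   # 'e' followed by COMBINING ACUTE ACCENT (2 codepoints)
composed = unicodedata.normalize("NFC", decomposed)

assert len(decomposed) == 2
assert len(composed) == 1            # single codepoint U+00E9
assert composed == "\u00e9"          # same canonical character either way
assert unicodedata.normalize("NFD", composed) == decomposed
```

Rendering must treat both forms identically; normalizing first means the glyph-fitting code only ever sees one canonical representation.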
Big Data Mathematics
I attack/solve "insoluble" Big Data problems and generate unexpected successes (see my résumé for examples). I have loved big data since before it was named. I use novel mathematical methods where databasing would obscure features. I think about it, visualize it, detect events in it, and discover monetizable value in it where others see only noise.
To achieve my results, I develop high-quality, high-performance code that is small, fast, correct, complete, and secure. I target zero bugs, thorough unit tests, handled edge cases, 100% coverage, and order-of-magnitude performance boosts. I comment and document heavily. I prefer to collaborate, but work very well autonomously.
I have experience with many languages, editors, and platforms. Learning new tools rarely improves my problem-solving abilities and usually takes time away from solving really interesting problems with the tools I already have. Learning Python was one of those rare exceptions, because a strong scientific community has evolved excellent, optimized, and easily used libraries. In the spirit of avoiding premature optimization, Python is my preferred prototyping language.
When performance issues arise, I switch to C++ and Intel assembly. I use VirtualBox to sandbox environments. My preferred development environment includes Linux, git, MediaWiki, make, and gvim. I'm interested in GPUs and MapReduce. I am experienced in architecting and implementing operating systems, editors, compilers, data ingesters, lattice math, and feature detection.
I learned assembler because I wrote C programs whose output I could not understand. puts(NULL); is the simplest example (the real culprit was uninitialized pointers). What I saw were garbage characters. To debug this, I tried a variety of printf statements. One of them included hexadecimal formatting of multiple shorts. Some numbers looked familiar, and I recognized the interrupt vector table at the base of MSDOS memory. AHAH! Of course I got that output. Knowing this, I read the ROM BIOS source code from top to bottom and learned how the hardware worked. I also came to understand the C calling-stack conventions of many different compilers. Eventually I wrote a Real-Time Operating System based on the knowledge acquired through my mistakes.
The reason I launched into this is to say that making mistakes is the single most important thing a programmer (or scientist, or anyone) can do. Doing what you know will work leads to the predicted outcome. How boring.
The most boring thing I have encountered in decades of software development is the didactic, rigid insistence on language purity. Nonsense! For one thing, standards committees are highly political and yield questionable standards. For another, compilers almost never deliver the standardized language. Then there are people who criticize you for getting work done the way you did because there is another way they think is better. TANSTATFC! (There Ain't No Such Thing As The Fastest Code.) Michael Abrash's "Zen of Assembly Language" is worth reading. Another book in the same vein is "Imperfect C++", one of my most precious and insightful language books.
We live in an imperfect world. Knowing how to do only the perfect things puts the programmer at a disadvantage. Every computer is different. Every implementation of an algorithm is different. Each may have its advantages and disadvantages. Yes, it is possible to have low expectations of uniformity for processing in the cloud, and to make programs that survive failure and continue to success (check out Netflix's Chaos Monkey and its sibling monkeys). Yes, in Scala, one can use the "let it fail" strategy. But in the end, your knowledge of complexity, and your ability to scare up the pertinent documentation and write a work-around to impediments, is of infinitely greater value than the ability to avoid chastisement for abuse of language.
I love a computed goto when it is the right instruction. Which compilers implement it? The only one I know of right now is GNU C/C++ (the labels-as-values extension). There may be more, but prepare to be scolded for using it. I learned about computed goto by mistake. Then I found that it is like a scalpel in the hands of a surgeon when writing lexers.
Most language designers like to forbid you to use portions of the assembly instruction set. Why? My feeling is that you should bow to authority to make them feel like they contribute, do things their way while you are being watched, and then go about secretly solving problems with whatever skills and tools you have available.
The only way to excel at all this is to make mistakes, not just to see the wisdom of your leaders' advice, but to see their folly as well.
- Exercise 1: Write some spaghetti code.
- Exercise 2: Write some data then execute it as code.
- Exercise 3: Write a novel boot sector to load the system.
Believe me when I tell you that these are common practices in the world of malicious hacking. Shouldn't you be on the front lines of the battle? Or are you going to wash your hands of responsibility and leave that up to hopefully somewhat competent security companies?
Are you going to be a compliant court officer or a clever field commander?
Design Patterns (editorial)
All code requires patterned thinking. Patterned thinking arose long before the modern classification system. I used most of the patterns before they were named: Lazy Initialization, Multiton, Object Pool, RAII, Adapter, Composite, Decorator, Flyweight, Module, Chain of Responsibility, Command, Interpreter, Iterator, Memento, Null Object, Observer, Publisher-Subscriber, State, Strategy, Servant, Template, Visitor, Active Object, Balking, Event-Based Asynchronous, Join, Lock, Messaging, Monitor Object, Reactor, Read-Write Lock, Scheduler, Thread Pool, and Thread-Specific Storage. I find Design Patterns limiting and their naming conventions ambiguous. Much of what is called Style is actually a set of additional Meta-Design-Patterns.
I think of programming as a three-part process of creating an impedance match between a mechanism of inputs, a mechanism of transforms, and a mechanism of outputs. A problem occupies a "problem space" in which a "preferred language" can be designed. A "solution space" is the architecture that fits the problem space to available resources. It is often possible to have a closed form solution and a simple high-performance implementation. Sometimes Design Patterns can be used to guide development, sometimes they cannot. Sometimes a focus on Design Patterns interferes with discovering optimal solutions.
One of my better pieces of code in C++ is an LR(1) lexer with two branch points. The closest design pattern is "Interpreter", but it is a stretch. It implements a closed solution to converting a string representation to an unsigned long long. It sounds like an easy problem to solve, but it is subtle: achieving high performance with intrinsic edge-casing is a challenge.
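The author's converter is a C++ lexer; as a hedged illustration only (not his code), the edge-casing such a converter needs can be sketched in Python. The function name parse_u64 and its error choices are my own assumptions:

```python
# Sketch: parse a decimal string into an unsigned 64-bit value with
# explicit edge-casing -- empty input, non-digits, and overflow.
U64_MAX = 2**64 - 1

def parse_u64(text: str) -> int:
    """Convert a decimal string to an unsigned 64-bit integer."""
    if not text or any(c not in "0123456789" for c in text):
        raise ValueError("expected one or more decimal digits")
    value = 0
    for ch in text:
        digit = ord(ch) - ord("0")
        # Refuse the next digit if value * 10 + digit would exceed U64_MAX.
        if value > (U64_MAX - digit) // 10:
            raise OverflowError("value exceeds 64 bits")
        value = value * 10 + digit
    return value
```

The overflow test happens before the multiply-add commits, which is exactly the kind of intrinsic edge-casing that makes the problem subtler than it looks.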
Another piece of good code I wrote in Python is a Roman Numeral converter for any base between 7 and 60. This code solves its problem completely: the numbers expressible in Roman Numerals have both a lower and an upper limit, so the solution can be tested by brute force. It doesn't fit any Design Pattern, yet it solves a problem using a useful pattern.
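The author's generalization to bases 7 through 60 is not reproduced here; this sketch uses ordinary base-10 numerals only, to show the bounded-domain idea: when the whole domain is finite, the converter can be verified exhaustively.

```python
# Standard Roman numerals cover exactly 1..3999, so a round trip over
# the entire domain is a feasible brute-force self-test.
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    if not 1 <= n <= 3999:
        raise ValueError("Roman numerals cover 1..3999")
    out = []
    for value, glyph in PAIRS:
        count, n = divmod(n, value)
        out.append(glyph * count)
    return "".join(out)

def from_roman(s: str) -> int:
    values = {"I": 1, "V": 5, "X": 10, "L": 50,
              "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # A smaller glyph before a larger one subtracts (IV, XC, ...).
        total += -v if nxt != " " and values[nxt] > v else v
    return total

# Brute-force round trip over the entire bounded domain.
assert all(from_roman(to_roman(n)) == n for n in range(1, 4000))
```

Because the domain is closed, the final assertion is a complete proof of the round trip rather than a sample of it.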
I would claim that the existing set of Design Patterns is missing some critical types. For instance, I would claim that code is incomplete without unit tests, edge casing, and coverage. Both of the above pieces of code use that pattern to achieve 100% coverage and error handling. Neither has dead code, extra code, or failing code. Self-Test is a Design Pattern.
Another missing Design Pattern is Disaster-Recovery or ACID. When a program fails, it often leaves residual errors that require cleanup. Some programs from the 1970s were designed to be 100% recoverable. The only possible losses arose from incomplete transmissions or storage losses and corruptions. Geographically distributed bit dispersion storage reduces the latter problem considerably. Databases are designed using ACID criteria (Atomicity, Consistency, Isolation, Durability). These critically important criteria are only partially conveyed in Design Patterns.
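A minimal sketch of the Disaster-Recovery idea, assuming POSIX rename atomicity: update a file so that a crash at any point leaves either the old contents or the new, never a partial write. This is an illustration of the pattern, not any particular program's recovery scheme:

```python
# Atomic, durable file update: write to a temp file in the same
# directory, fsync it, then rename over the target. os.replace is
# an all-or-nothing operation on POSIX filesystems.
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())     # Durability: force bytes to disk.
        os.replace(tmp, path)        # Atomicity: old or new, never partial.
    except BaseException:
        os.unlink(tmp)               # Clean up the residue on failure.
        raise
```

The temp file lives in the destination directory so the final rename never crosses a filesystem boundary, which is what keeps it atomic.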
So, although knowledge of Design Patterns is a basic test of understanding coding, I find that it is a low bar, and that the bar ought to be set higher. I have been in only one interview where the focus was on unit testing and edge casing; the code tenpin was the result and, again, I use Self-Test as a design pattern. All of these modules exhibit another unnamed Design Pattern I would name Problem-Definition. A module is incomplete if it does not clearly express its problem space. Python docstrings can contain runnable unit tests (doctest). These can be used to illustrate the Problem-Definition.
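A tiny sketch of Problem-Definition via doctest; the function clamp is an invented example, not taken from the author's modules:

```python
# The docstring states the problem space and carries runnable examples;
# doctest executes them, so the definition and the test are one artifact.
def clamp(x, lo, hi):
    """Constrain x to the closed interval [lo, hi].

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    >>> clamp(42, 0, 10)
    10
    """
    return max(lo, min(hi, x))

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # Verifies every example in the docstrings.
```

Anyone reading the docstring sees the problem space; anyone running the module verifies it.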
Science observes and predicts repeatable observations using testable models. Science enables efficient use of resources. Science is ruled by math.
Where no model is known, mythology memorializes repeatable observations. Mythology enables retention of emergent science. Mythology is ruled by story.
Religion uses behavioral suggestion to aid stability and thriving both within one group and against another group. Religion enables efficient reflexive actions. Religion is ruled by belief.
Poor outcomes eventually arise when any one of these three is ignored or distorted. None of them offers absolute truth. Even 1+1=2 is the subject of arguments.
My clients/employers include Carbonite, Lotus, IBM, NASA, MIT, and many small high-tech startups. I have patents and publications, and have contributed to project success in many arenas, including canonicalization, virus search, high-speed lexing, discrete convolution/correlation, efficiency calculations, dimensional conversion, and automated generation of code and papers.
- political cause
- bias metrics
- triage for unintended bias
- The Rings of Earth
- Dev 20160112
- Proposition: Neurons as Selectable Shaped Antennae
Ab initio; I forswear obfuscation and abjure they who recommence this ratiocination to repristinate my dialectic. This soritical expedient exhibits didactic probity. However, in my supererogatory meticulisms I am unmeechingly orotund and aberrantly taciturn. I will adumbrate, but eschew the stultifying eristics of my peers. Not that my lapidary style is numinous; merely esoteric, but, mirabile dictu, brooks no mimetic prose. I am lachrymose that, amongst my officious proselytizers, good code evokes inchoate logorrhea.
Copyright©2011-2016 Jonathan D. Lettvin, All Rights Reserved