Hypolinks: Mohenjo Daro

The city of Mohenjo Daro exhibits a remarkable degree of urban planning. Houses were carefully arranged in evenly shaped rectangular grids. Even the sizes of their bricks were standardized. According to Aldrete: “Smaller bricks were used in houses, and larger ones in the city walls, but both had an exact thickness-to-width-to-length ratio of 1:2:4. That same ratio is found underlying the dimensions of individual houses, public structures, and even entire regions of the city.” There’s no consensus on whether they standardized for practical, religious, or other purposes.
Either way, Le Corbusier would have loved it. Here is his plan for Paris, which, fortunately, he never managed to execute.
Le Corbusier remarked that “reinforced concrete provides me with incredible resources and variety, and a passionate plasticity”. I must be lacking imagination, or be really slow on the uptake. I’ve tried to have a couple of people describe the lens through which Le Corbusier’s works can be seen as “endless forms most beautiful”, but I haven’t been able to replicate those goggles. I’m quite looking forward to evolutionary-like methods generating more natural structures.
Speaking of top-down vs bottom-up planning, Rosenblatt’s original perceptron paper (the flagship paper of connectionism, and, I guess… shallow learning?) cites one Friedrich A. Hayek. I actually met one person who happened to have read The Sensory Order and described it to me as “the most abstract thing ever written”. I’ve already ordered a copy.
Hayek’s neuroscience background is interesting to me because the connectionist vs symbolic camps in AI strongly remind me of the “Hayekian” vs “central planner” approaches to the use of knowledge in society. In fact, there’s a whole host of contrasts along these lines:
  1. Lenin vs Hayek
  2. Le Corbusier vs Jane Jacobs
  3. System 2 vs System 1
  4. Symbolic/algorithmic vs connectionist approaches
  5. Engineering vs evolution
  6. Zero to one vs scrum/lean
  7. Superintelligence vs Comprehensive AI services (imperfect but kind of)
  8. Analysis vs synthesis
Put this way, my generative model wants to add “9. Empire vs Resistance”. But it’s not so simple – the most impressive feats, from the Parthenon to Apollo, require a great deal of top-down central planning. Aligning the virtues of the two patterns of information flow seems pretty important in many domains.
Speaking of generative models, here’s an interesting back-of-the-envelope calculation by Geoff Hinton: our brain has 10^14 synapses, and we live for about 10^9 seconds (80,000 Hours, perhaps a sexy new brand?). This implies 10^5 dimensions of constraint per second – far more than you get through standard supervised and reinforcement learning methods. This makes Hinton think that our brains do a lot of unsupervised learning.
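Hinton’s arithmetic is simple enough to sketch directly (the lifespan figure is an order-of-magnitude estimate, not an exact number):

```python
# Hinton's back-of-the-envelope: to set every synaptic parameter over a
# lifetime, how many dimensions of constraint per second would we need?
synapses = 10**14          # rough count of synapses in a human brain
lifetime_seconds = 10**9   # ~32 years; same order of magnitude as 80,000 hours (~2.9e8 s)

constraints_per_second = synapses / lifetime_seconds
print(constraints_per_second)  # 100000.0, i.e. 10^5 per second
```

Compare that to, say, one class label per second of supervised learning (a handful of bits), and the gap suggests most of the constraint must come from the unlabeled sensory stream itself.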
Speaking of unsupervised models, I took a stab at using the small open-sourced GPT-2 model for automating research on R&D automation. The input is the first paragraph of Eric Drexler’s technical report on “Comprehensive AI services”. GPT-2 certainly took it in some unexpected (rather post-modern) directions:
Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a basis for a different approach to understanding them: Today, we can consider how AI systems are produced (through the work of research and development), what they do (broadly, provide services by performing tasks), and what they will enable (including incremental yet potentially thorough automation of human tasks).
Unlike native automation, these AI capabilities include abstractions made by separate developers, used as a baseline for another emotional context. I discussed deep abstractions in an earlier section; unfortunately (and this time not for Large Cell AI), their conceptual emptiness bears little constraint on our current laws of measurement. Rather, these immaculately detailed complexes are free from any analysis of human decision-making and may refer exclusively to specific autonomous driving systems. In this way, we can harmonize the way “technology” evaluates preoperatively, and scuttle the impersonal roots of any human-driven autonomous becoming monolithic and poorly optimized. Non-bias fundamentalism — or distrust of current research on relationships between virtue, research ideology, and decisions — is easily evident. 
A global database of computational and long-term cognitive functions — including a cavalcade of paths that converge on inescapable values of good, bad, and indifferent to others — is the aggregate preest of computerized design. There is no overkill, unlike the Lofty Narcissus. Vana vara does not solve the problem of what makes scientists because, if it did, and hypotheses to model that are plausible, they are probably not more discrete than any Mason teacher,” Agnew writes thus ad nauseam. “So obviously, predicting what others will know about oneself or others contented with roles, deserving of greater or lesser consideration in any problem is not some one gritty attempt to curry favor with anatomists or markets or engagement at metalsa if the adviseings and docu-mali are best found at the expense of the closest analogies.” With computers, de-identified “more common” 19th-century “internet” or “by-mail system” users might share information with either print employees but deal with teachers as State Agents. This is analogous to the line of bookkeeping instruction for technicians with controllers installed for all kinds of tasks; it is a vicious cycle that generates empirically governed “task-as-watchdog” systems that would entail modeling the entire back catalogue of the devices I use in my retirement household.
Contrary to Christianity, where pain is seen as a mediating power, do not regard anything as distressing more than my enjoyment of or use of the items requested. The current large genome division problem (a paper I wrote over the weekend as part of a special issue of maintained sacrosanct-reading status) proposes three efficient ways of optimizing the greater \(\Gamma(\alpha)\) of
