Author: Akshay

  • Artificial life is a self-repair problem

    I find myself largely in agreement with Erwin Schrödinger’s view of life as essentially an open system that keeps itself ordered (producing negative entropy changes within its boundary) through homeostasis while exchanging energy and matter with its environment.
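
    To make the entropy bookkeeping concrete, here is the standard non-equilibrium formulation of that idea (the notation is the textbook one, not Schrödinger’s own): the total entropy change of an open system splits into internal production and exchange with the environment, and staying ordered means exporting at least as much entropy as you produce.

    ```latex
    % Entropy balance for an open system: internal production plus exchange.
    \frac{dS}{dt} =
      \underbrace{\frac{d_i S}{dt}}_{\geq\,0\ \text{(second law)}}
      + \underbrace{\frac{d_e S}{dt}}_{\text{exchange with environment}}

    % A living system maintains or increases its internal order only if it
    % exports enough entropy to offset what it inevitably produces:
    \frac{d_e S}{dt} \;\leq\; -\,\frac{d_i S}{dt}
      \quad\Longrightarrow\quad \frac{dS}{dt} \;\leq\; 0
    ```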

    The key word here is homeostasis: resistance to change due to external factors and keeping key processes well-regulated. But, in my view, this definition fails to capture a characteristic of life which is as fundamental as any of the other textbook-listed characteristics: the ability of an organism to repair itself.

    This is an ability shown by everything from the simplest cells to the most complex animals. Now, why is this ability a key impediment to creating life that is truly artificial (i.e., life that does not rely on the existing biological materials and frameworks that nature has already figured out: proteins, phospholipids, RNA, DNA, etc.)?

    First of all, the best way we know of to encode objective functions (say, one of “survival”, whether that’s defined through maximizing time lived or maximizing offspring count, among other things) is via mathematical constructs. And perhaps the best way we have found so far of encoding the algorithms that respond to stimuli is in the coefficients of a system of nonlinear equations (“neural networks”). Those neural networks are best deployed onto deterministic computing hardware built out of silicon (I believe they can be deployed into biological systems as well, but I’m not that well-versed with, say, DNA computing and the like). So now you can have a machine with a silicon chip as its “brain”, motors/transducers as its actuators, and various sensors as its “eyes and ears”.
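
    To illustrate what “algorithms in the coefficients” means in practice, here is a minimal sketch of such a stimulus-response policy: a single nonlinear layer whose entire behavior lives in its weights. All the names and numbers below are invented for illustration, not drawn from any real system.

    ```csharp
    using System;

    // A minimal sketch of a stimulus-response policy: one nonlinear
    // layer whose "algorithm" lives entirely in its coefficients.
    // All names and numbers are illustrative.
    class TinyPolicy
    {
        // weights[output, input] and biases encode the behavior.
        private readonly double[,] weights;
        private readonly double[] biases;

        public TinyPolicy(double[,] weights, double[] biases)
        {
            this.weights = weights;
            this.biases = biases;
        }

        // Map sensor readings to actuator commands: y = tanh(W x + b).
        public double[] Respond(double[] sensors)
        {
            var actuators = new double[biases.Length];
            for (int i = 0; i < biases.Length; i++)
            {
                double sum = biases[i];
                for (int j = 0; j < sensors.Length; j++)
                    sum += weights[i, j] * sensors[j];
                actuators[i] = Math.Tanh(sum); // nonlinearity
            }
            return actuators;
        }

        static void Main()
        {
            // Two sensors (say, light and temperature), one motor.
            var policy = new TinyPolicy(
                new double[,] { { 1.5, -0.8 } },
                new double[] { 0.1 });
            var command = policy.Respond(new[] { 0.9, 0.3 });
            Console.WriteLine($"motor command: {command[0]:F3}");
        }
    }
    ```

    The point being: everything such a machine “knows” about responding to its world is frozen into those coefficients, which is exactly what makes the repair question below so thorny.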

    But here we run into a problem: does such a machine know how to repair itself? Even if it does, does it have ready access to the materials required to make more copies of itself or of its constituent devices (by “ready access”, I mean not at prohibitive energy costs)? One can argue that life as we know it on Earth was perhaps one of the most efficient ways to create an ordered system, resilient to exogenous forces, out of the materials readily available here. Put another way, the “activation energy” hill that life found was probably one of the lowest around on our planet, given the materials that were abundant.

    Therefore, at least on Earth, I believe any artificial life will look pretty similar (or let’s say “organic”) to the naturally evolved life that we are and that we see around us, not the titanium-clad robots that adorn many sci-fi movies. One could, arguably, bootstrap robots by initially providing them with human-built giant robot factories, silicon foundries, and the entire chemicals/materials supply chain needed to build all the components of more such robots, with the robots knowing how to use all of it completely independently of any outside intervention. But that sounds supremely inefficient.

    Moreover, when you look at that giant setup, you can’t help but be awestruck by the fact that almost all living organisms require just an individual or two to fully create highly capable new life!

    Now a couple of other stream-of-consciousness points that I have about life:

    1. I think reproduction evolved as a way to avoid being spatially localized: resources are spread out geographically and one organism cannot access them all (rooted plants being the best example). It also makes sense from a survival perspective to have copies of your genetic information distributed so that it is resilient to localized natural disasters.
    2. The one puzzle I have is where the survival function is encoded in organisms. I mean, yes, it is in the DNA/RNA, but what reading of that code gives an organism a “purpose” to survive? And is that function encoded in entropic terms?

    One aspect that also bears mentioning is the possibly non-deterministic computational nature of life: as Roger Penrose would have it, perhaps consciousness (and one can extend that to all of life) is inherently quantum in nature. In that case, this becomes an even harder problem to solve!

  • Jūzō Itami and Japan in the 80s/90s

    I am always fascinated by the bubble period of 1980s Japan. Most people who look at modern Japan see a culture vastly distinct from most other civilizations they’ve seen or experienced. While that is true in many ways, peel off a few layers and Japan is as human as the rest of us: the same moral questions, greed, corruption, and cronyism show their face in many aspects of daily life, just as they do for most of us living outside Japan. Look no further than recent clippings in the leading Japanese dailies and you’ll know what I mean. That said, the scale of these scandals and of this corruption is far smaller than in many countries outside Japan, and there is a certain sense of responsibility and societal accountability that is hard to find anywhere else.

    No postwar era exemplifies Japan’s “human” nature more than the bubble era of the late 1980s: immense wealth, corporate power and its nexus with politics, intricate dealings with the underworld, and a failure to do right by society, traits easily visible in other cultures. If I could put it in one sentence: Japan was probably at its most individualistic in this period, pushing its image as a collectivist society somewhat into the background.

    And if art is a reflection of society, then no art represents this era better than Jūzō Itami’s creations, encompassing biting satire and the very human vulnerabilities of the constituents that make up Japan. I haven’t watched his most celebrated works, “Tampopo” and “The Funeral”, yet, but having stumbled upon “Marusa no Onna” (both parts), “Supa no Onna”, and “Minbo no Onna”, I have been very, very impressed by these highly entertaining masterworks of satire.

    One of the most famous among these, “Marusa no Onna”, in which tax inspector Ryoko Itakura (played by the impeccable Nobuko Miyamoto, Itami’s wife and the lead in most of his movies) leaves no stone unturned on the path to bringing in the dough from tax evaders, is a delight to watch. The subtle, often comic interactions between the protagonist and what you could call the antagonist are amazing and fulfilling. I won’t reveal too many details here, as the movie is worth experiencing on your own. The sequel goes deeper into other rotten parts of the system and is just as well executed; with pricey, shady Tokyo land deals as a pivotal plot point, it could not have portrayed bubble-era Japan better. There is an interaction between Itakura’s boss and an elected official that would resonate amazingly well with anyone familiar with politics elsewhere.

    “Minbo no Onna” satirizes the underworld, an act for which Itami suffered grave consequences in real life. It is just as ably crafted as his other works and again puts the spotlight on things others had perhaps been glorifying. Given the real-life consequences for Itami, it just shows that nothing weakens the powerful quite like satire. “Supa no Onna” is among his lighter fare and would count as a comedy, but it is again a fun-to-watch movie, about a beaten-up supermarket’s revival by our omnipresent “Onna”.

    All said and done, it was a privilege to discover the work of Itami, whose life was cut short, perhaps too soon, in 1997. If you love good slice-of-life movies, you’ll love Itami’s unique perspective on the failings of his own society at the peak of its economic power. And if you love Japanese culture, as I do, and its melting-pot nature in the 1980s (think “Maison Ikkoku”, if you want a manga reference), then just go ahead and watch these cinematic gems!

  • Hikari: a new game-engine

    During the Thanksgiving break of 2020, I bought myself the latest version of the OpenGL Programming Guide and other books on rendering and collision detection (Physically Based Rendering, Real-Time Rendering, Real-Time Collision Detection). I was planning to give a serious shot at building a game engine (and possibly a game) from scratch in C#, more as a hobby project than as something useful for a wider audience.

    The last such attempt was back in 2010, when SlimDX and OpenTK (or was it TaoGL?) were the preferred managed-code bindings to the native graphics APIs. However, that attempt was abandoned after a while, as I could not dedicate enough time to it and my work was completely unrelated to graphics/game programming.

    Fast-forward to 2020: the pandemic did provide me with an opportunity to think about the areas of programming I really loved, and also the time to take a shot at it again, fully aware that the way graphics pipelines now work is very different from when I was last really at the bleeding edge (in 2005-06, when Shader Model 3.0 had just arrived, supporting branching (gasp!) and swizzling in shaders, and OpenGL allowed for it through vendor extensions: see this, for example).

    So this project is as much an educational exercise for me in re-learning the architecture of modern GPUs, and the paradigms for rendering things with them, as anything else.

    Of course, one major change since 2010 has been the advent of smartphones with very capable graphics hardware but very disparate API support: OpenGL ES has been left for dead on the Apple side of things (Metal being the favorite there), and Direct3D is nowhere to be found on phones. So I’m also planning to learn Vulkan as I go along. My initial take is that it is far more involved (managing swapchains, pipeline construction, memory allocation) than doing things with DirectX 11 or OpenGL / OpenGL ES, but it appears to be the only way to be truly cross-platform and potentially have access to the biggest deployable markets out there.

    Lastly, I should emphasize one of the biggest motivations for doing this. Back in October 2020, I was trying my hand at learning Unreal and Unity to develop simple projects, primarily because (a) I thought graphics pipelines had become complex enough, and a lot of complex rendering techniques standard enough (e.g., global illumination, PBR, physics), that letting a third-party engine take care of those would be the way to go, and (b) a big part of the reason to use these engines is, of course, to avoid the pain of cross-platform development yourself.

    But I soon discovered that I’d need to re-learn the rendering and game-physics concepts myself anyway, and learning a new toolset (one that might prove inflexible for my needs; Blueprints, say) would be an additional burden. Not to mention the bloated project sizes in these engines for even the simplest of concepts, and the need to deal with engine-level bugs and inconsistencies (I’m looking at you, Unity!). So why not start from scratch and make things as simple as I can for myself (and possibly for others)?

    I cannot over-emphasize the simplicity aspect of this endeavor. If I want to be able to create a game, I want the process to be simple, one where I’m not fighting the engine, while the engine still provides full transparency into how things work under the hood. I will not be going for a massive set of engine-provided capabilities; rather, I’d provide an architecture that is easily extensible should one need it, along with a solid core set of primitives, materials, shaders, texture handling, lighting, cameras, scenes, collisions, etc. A sketch of the kind of core I have in mind follows below.
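
    For instance, here is roughly the shape a scene-hierarchy node could take; a minimal sketch, with every name being my placeholder rather than Hikari’s actual API:

    ```csharp
    using System.Collections.Generic;
    using System.Numerics;

    // A minimal scene-graph node: a local transform plus children.
    // Names and shapes are illustrative, not the engine's real API.
    class SceneNode
    {
        public string Name;
        public Matrix4x4 LocalTransform = Matrix4x4.Identity;
        public readonly List<SceneNode> Children = new List<SceneNode>();

        public SceneNode(string name) { Name = name; }

        // Walk the hierarchy, composing each node's transform with its
        // parent's, and hand every node's world transform to a visitor
        // (e.g., a renderer that draws the node's mesh).
        // System.Numerics uses the row-vector convention, so the child's
        // local transform is multiplied on the left of the parent's.
        public void Traverse(Matrix4x4 parentWorld,
                             System.Action<SceneNode, Matrix4x4> visit)
        {
            var world = LocalTransform * parentWorld;
            visit(this, world);
            foreach (var child in Children)
                child.Traverse(world, visit);
        }
    }
    ```

    Keeping the node this small is the point: a renderer, a physics pass, or game logic can each be just another visitor walking the same hierarchy.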

    And since I last touched graphics programming, things have become simpler in many ways, with the standardization of certain material and lighting techniques (e.g., PBR) and file formats (e.g., glTF2). Moreover, I’ve found some pretty good bindings, with a very active developer team behind them, in Silk.NET. Combining this with a glTF2 loader (SharpGLTF), I’ve been enjoying myself building out a small game engine (and have already implemented PBR, glTF2 loading, scene hierarchies, lighting, render targets, etc.).
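
    Since I mention implementing PBR: the specular core of the standard metallic-roughness model boils down to three small functions. Here they are as plain C# for readability; this is the generic Cook-Torrance/GGX formulation from the literature, not a transcription of Hikari’s actual shaders:

    ```csharp
    using System;

    // The three specular terms of a standard Cook-Torrance microfacet
    // BRDF (metallic-roughness PBR), written as plain C# for clarity;
    // in an engine these live in shader code.
    static class Pbr
    {
        // GGX/Trowbridge-Reitz normal distribution: the density of
        // microfacets aligned with the half vector at a given roughness.
        public static double DistributionGGX(double nDotH, double roughness)
        {
            double a2 = Math.Pow(roughness * roughness, 2);
            double d = nDotH * nDotH * (a2 - 1.0) + 1.0;
            return a2 / (Math.PI * d * d);
        }

        // Smith geometry term with Schlick-GGX: microfacet self-shadowing
        // and masking, evaluated for both the view and light directions.
        public static double GeometrySmith(double nDotV, double nDotL,
                                           double roughness)
        {
            double k = Math.Pow(roughness + 1.0, 2) / 8.0; // direct-light remap
            double gV = nDotV / (nDotV * (1.0 - k) + k);
            double gL = nDotL / (nDotL * (1.0 - k) + k);
            return gV * gL;
        }

        // Schlick's approximation of Fresnel reflectance; f0 is the base
        // reflectivity (about 0.04 for dielectrics, the albedo for metals).
        public static double FresnelSchlick(double hDotV, double f0)
            => f0 + (1.0 - f0) * Math.Pow(1.0 - hDotV, 5.0);

        // Full specular BRDF: D * G * F / (4 (n.v)(n.l)).
        public static double Specular(double nDotH, double nDotV,
                                      double nDotL, double hDotV,
                                      double roughness, double f0)
            => DistributionGGX(nDotH, roughness)
               * GeometrySmith(nDotV, nDotL, roughness)
               * FresnelSchlick(hDotV, f0)
               / Math.Max(4.0 * nDotV * nDotL, 1e-4);
    }
    ```

    The diffuse lobe on top of this is just Lambert (albedo over π), scaled down by whatever the Fresnel term reflects away.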

    I won’t be building an editor for the game engine either. The scripting will most likely be based on C# (which won’t really be scripting: given the engine is likely to be small, it makes sense to include the game logic as part of the compiled package, but we’ll see). The other aspect is the content pipeline, for which I’ll squarely rely on Blender as my main “level editor”, with its glTF2 export being the way to feed scene hierarchies to the engine (and game logic residing in C# code). The sketch below shows roughly what compiled-in “scripting” could look like.
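
    To be concrete about that, here is one hypothetical shape for it; the interface name and the example behavior are mine, not a settled engine API:

    ```csharp
    // One hypothetical shape for compiled-in "scripting": game logic
    // implements a small interface that the engine calls once per frame.
    // These names are illustrative, not a settled Hikari API.
    interface IBehavior
    {
        void Update(double deltaSeconds);
    }

    // Example: spin an object at a fixed angular speed. The engine (or
    // a scene node) would own a list of these and tick them each frame.
    class Spinner : IBehavior
    {
        public double DegreesPerSecond = 90.0;
        public double AngleDegrees { get; private set; }

        public void Update(double deltaSeconds)
            => AngleDegrees =
                (AngleDegrees + DegreesPerSecond * deltaSeconds) % 360.0;
    }
    ```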

    As 2021 begins, I hope to spend a decent amount of time working on this and see where it goes (making things cross-platform would be one of the key challenges).

    I’ll make the source available on GitHub (under the MIT license, most likely) once it reaches a certain level of maturity, and I’d welcome contributors at that stage. And I’ll keep posting about my progress here intermittently.