Monday, October 15, 2012

The work of von Foerster: summary of a summary


The academic work of Heinz von Foerster was, and remains, highly influential in a number of disciplines, largely due to the pervasive implications of his distinction between first- and second-order cybernetics (and its antecedent ideas). Where first-order cybernetics may be simply described as the study of feedback systems by observation, second-order cybernetics extends this observation of a system to incorporate the observer itself: it is reflexive, in that the observation of the feedback system is itself a feedback system to be explained. While I am familiar with this concept, I am not particularly familiar with the body of work produced by von Foerster to instantiate it, although I have encountered numerous references to him, particularly where the subject relates to enactivism.

In 2003, Bernard Scott republished a summary of von Foerster's work which he originally published in 1979. The original paper appeared just a few years after von Foerster's official retirement, though he (as many an academic has before and since) continued his work for many subsequent years. The paper summarises the breadth of his work and its contributions, and was republished partly in recognition of the continuing, and expanding, influence it exerts. This post is a very brief summary of this summary paper.

In general terms, von Foerster's views on computation and cognition seem to be inherently integrated and holistic, proposing dynamic interactions between the micro, the macro and the global. This view thus contrasts with functional models of cognitive processes, which by their nature can only be static snapshots of the dynamic interactions at play: cf. autopoietic theory, which extends this notion with the principles of self-reconstitution and organisational closure. In particular, he emphasises the necessity of considering perception, cognition and memory as indivisible aspects of a complete, integrated cognitive system, cf. enactivism.

With this consideration as a consistent thread, four primary phases in the development of von Foerster's research are identified. The first is his consideration of large molecules as the basis for biological computation, rather than the then-prevailing focus on neural networks, and his argument that 'forms' of computation underlie all computational systems. The second is the exploration of self-organisation, and the reconciliation of organisation with the potential paradox of self-reference. In this sense, a system that increases in order (organisation) requires that its observer adapt its frame of reference to incorporate this: if this were not required of the observer, then the system could not be regarded as self-organising. The resulting infinite recursion provides an account of the conditions necessary for social communication and interaction: a consequence of second-order cybernetics. The third is a focus on the nature of memory as being key to understanding cognition and consciousness. Returning to the notion of holistic cognition described above, this contrasts with the perspective of memory as a static storage mechanism, which was prevalent among behaviourist psychologists and remains prevalent in the work of designers of synthetic cognitive models and architectures (the countering of which is a key theme of my own research). The fourth and final phase identified (in the original 1979 paper, that is) is the formalisation of self-referential systems and their analysis as recursive computation, and the extension of this to apply also to the observer.
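As a toy illustration of this last idea (my own sketch, not something from the paper): repeatedly applying an operation to its own output can converge on a stable value - an 'eigenbehaviour', in von Foerster's terms - which is one way he formalised how stable objects can arise for a self-referential observer. In MATLAB/Octave syntax:

    % Iterate x <- cos(x): recursive application of a single operator
    % converges to the fixed point x* = cos(x*) ~= 0.739, a stable
    % 'eigenvalue' (eigenbehaviour) of the operation.
    x = 1.0;                % arbitrary starting value
    for i = 1:50            % apply the operator to its own output
        x = cos(x);
    end
    fprintf('eigenvalue of cos: %.6f (cos of it: %.6f)\n', x, cos(x));

The point of the toy is only that the stability belongs to the recursion itself, not to any one input - whatever starting value you choose, the same value emerges.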

The threads of self-reference and a holistic perspective have, as noted above, had a wide influence, and continue to do so. I did not realise before that Maturana and Varela's well-known formulation of autopoiesis was produced at the lab that von Foerster led (the Biological Computer Laboratory, University of Illinois). The relationship is of course clear now that I know about it (!): autopoiesis builds upon this self-reference and holism, adding self-reconstitution and organisational closure, to form a fully reflexive theory. Similarly, enactivism seems to owe much to von Foerster's influence, with its integrated consideration of agent and environment, embodiment and cognition - a theme that has become increasingly prevalent in recent years among those working on cognitive robotics from a more theoretical perspective - extending to the consideration of social behaviour. In all, the principle of second-order cybernetics and the theoretical perspectives upon which it is based remain important in the consideration of cognition and human behaviour, despite their seemingly abstract nature, and Heinz von Foerster played a rather prominent role in providing their underpinnings.

Some of the 'buzzwords' raised in the summary of von Foerster's research which carry through as such today (among others - and I use the term 'buzzwords' without any pejorative intent, merely as a note-to-self):
- second order cybernetics
- self-organisation
- the holistic nature of cognition (developed as enactivism)

Paper reference:
Scott, B. (2003), "Heinz von Foerster - an appreciation (revisited)", Cybernetics and Human Knowing, 10(3/4), pp. 137-149.

Thursday, July 12, 2012

Uncertainty in Science

I just came across an interesting article on Wired Science today, written by Stuart Firestein, a biological scientist who is active in the public understanding of science. It's on how uncertainty and doubt are actually good things, fundamental drivers of the scientific method, and not something to be brushed under the carpet, made out to indicate complete certainty or complete ignorance (as frequently happens with politicians and activists on all sides of a politically charged argument), or jumped on by the media (e.g. the MMR jab fiasco a few years ago).

Taking the hot topic (hehe...) of global warming as an example, Firestein notes that the lack of clear-cut, unambiguous answers isn't an indication that science cannot provide anything of utility in the debate, and that it should not be discarded as a result: "Revision is a victory in science, and that is precisely what makes it so powerful." If science is the search for knowledge, then what is often overlooked is that newly acquired knowledge is a means for forming and framing new questions; each step is just that, and not a certain end in itself.

A little extract:
"We live in a complex world that depends on sophisticated scientific knowledge. That knowledge isn’t perfect and we must learn to abide by some ignorance and appreciate that while science is not complete it remains the single best method humans have ever devised for empirically understanding the way things work."
From http://www.wired.com/wiredscience/2012/07/firestein-science-doubt/

Thursday, June 28, 2012

I'm a good Scientist


I must be a Good Scientist, as I make lots of mistakes...



[Image attribution: I honestly can't remember where I got this from]

Sunday, January 15, 2012

Book: "Dreaming in Code"

Cover of "Dreaming in Code",
by Scott Rosenberg
As I mentioned previously, I have been reading a book on programming called "Dreaming in Code" (by S. Rosenberg; full reference may be found below). Seeing as I've now finished it, I thought I would share a few brief thoughts on it. In all, I enjoyed reading it. It wasn't what I was expecting, and I read it relatively slowly over the past month, a half-to-full chapter at a time. Whilst it is about programming, with a particular emphasis on large projects/programmes, it covers history, basic software engineering concepts, and the story of a single open-source development project in a mixed-up sort of way - but a way which still managed to keep my attention (i.e. I didn't have to expend too much effort in picking the book back up after leaving it for a few days).

In going through the development (evolution is perhaps a better word) of an open-source software project - a PIM called Chandler - a number of general software development issues are encountered and discussed in the context of historical software engineering developments. This is what I particularly liked about it: the central story of a particular software development project, with deeper consideration and historical context for some of the central features of its ongoing work, both technical and personal. And this is what seems to persistently emerge as a fundamental confounding factor of software development: the computer is as precise as could be wished for, but those telling it what to do are decidedly not (an observation all too familiar to anyone who has written, or attempted to write, any piece of code).

Having said this, unlike some other reviews, I found the chapter dedicated to methods (chapter 9) to be a bit too contrived and preachy - particularly given the easy-going style of the rest of the book. This isn't to say that the subject matter of software development methods is out of place. I think it's no coincidence that this chapter seems to depart furthest from the Chandler story narrative, a form that suited the rest of the book so well. I suppose it depends on your perspective: I enjoyed the Chandler story, with it providing the context for excursions into general software engineering principles and history, so when this context fell away in chapter 9, I was left missing it.

So, would I recommend reading this? I think it depends (which is a cop-out of course...). I'm technically minded, I write code regularly, but I'm not a software engineer, and I had no real prior knowledge of how big software projects are run. As such, the book provides a little glimpse of this. While it would be too much to say that this book provides full coverage of software development techniques by means of little examples, it does provide enough that I learned something from it. If, however, you have already been embedded within a formal software development setting, then I don't think you will gain anything new by reading it - apart from confirmation that software development (closed-source, open-source or otherwise) is hard work, with many pitfalls, and that having more money to do something actually seems to make it even more difficult. That could of course be reason enough! For me, I enjoyed reading the book, for the little asides to the main story as much as anything. The minimum requirement, though, is some form of interest in the way computers are cajoled into doing our (collective) bidding.

It seems as though, since the book was written, Chandler has fallen foul of the very criticisms that the book makes of closed-source, commercial software developments: closed development, lack of user testing, and changing specifications (among others). However, despite the departure of Mitch Kapor, seemingly the book's main protagonist, Chandler does still seem to be going, having reached the 1.0 release that was the focus of so much discussion in the book - though with not much activity since (e.g. planning pages on the project wiki last updated in 2008, and the project blog not updated since 2009...).

Scott Rosenberg (2007), "Dreaming in Code", Crown Publishing, ISBN: 978-1400082469. Website accompanying the book is here.

Thursday, January 12, 2012

World's smallest frog discovered

There's a story on the BBC News website that the world's smallest frog has been discovered. Really? Wow!

Smallest frog? Picture from the BBC News story.
/rant

No!! Have they taken a measurement and not bothered to look up the recorded measurements of other frogs? Are they now certain that there are no more species of frog that could ever be found? Perhaps they have even discovered some theoretical reason why a frog could never possibly be smaller? Maybe I'm being overly pedantic, but surely it should be (along the lines of) "a new species of frog has been discovered, which is now the smallest known species of frog". I doubt such a claim would appear in a peer-reviewed paper, so maybe this is just sloppy wording to maximise 'impact' on the website...

/endrant

Saturday, January 07, 2012

CFP: AAAI'12 special track on Cognitive Systems

At the 26th AAAI conference this year (in Toronto, Canada) there will be a special track on Cognitive Systems:
In an attempt to return to the original goals of artificial intelligence and cognitive science, this special track invites papers in human-level intelligence, integrated intelligent systems, cognitive architectures, situated embodied cognition, and related areas that aim to explain intelligence in computational terms and reproduce a range of human cognitive abilities in computational artifacts. The track will focus on various cognitive capabilities in the context of artificial cognitive systems, including the following, as well as other related integrated cognitive functions.
There are a number of other special tracks too, in addition to the main technical programme, among which are the intriguingly titled "computational sustainability and artificial intelligence", and the more conventional "robotics". In any case, the deadline for submissions is not far off: January 20th 2012.

Friday, January 06, 2012

MATLAB alternatives (free, of course...)

I don't use MATLAB very much at the moment, nor have I in recent years. Production of a couple of graphs perhaps, maybe a little data processing, but nothing that can't really be done using any other piece of software available to me (MS Excel, LO Calc, or a bit of my own code). I have used MATLAB extensively in the past - mostly Simulink, for some signal processing and control work - so I know it can be a very useful tool. However, given my current MATLAB inactivity, and that where I work doesn't have an endless supply of licenses for research (and these are fairly expensive, as you may have noticed...), it seems that my occasional use can easily be covered by open-source/freeware alternatives.

The main options seem to be Octave and SciLab. Both are open-source and free, and can be used on Linux, Windows or MacOS (I like cross-platform compatibility...). As open-source 'versions' of MATLAB, both have at least some degree of compatibility with it. Although there seems to be some disagreement over which is closest to full MATLAB compatibility, from what I've read while looking briefly into it, Octave tries to maintain the same syntax (so aiming for m-file compatibility, and treating any deviations from this as bugs), whereas SciLab instead offers a tool for converting m-files. I think that similarity to MATLAB in terms of functionality and syntax is beneficial because of MATLAB's pervasiveness, and the consequent opportunity to share resources.
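To illustrate what m-file compatibility means in practice, here is a minimal script in plain MATLAB syntax, using only standard functions, which should therefore run unchanged in Octave (a sketch for illustration only - I haven't tried it in SciLab, which would need its m-file converter applied first):

    % Fit a noisy straight line by least squares - plain MATLAB syntax,
    % so the same m-file should run unchanged in Octave.
    x = linspace(0, 10, 50)';         % column vector of sample points
    y = 2*x + 1 + randn(size(x));     % noisy line: gradient 2, intercept 1
    p = polyfit(x, y, 1);             % least-squares fit, degree-1 polynomial
    fprintf('estimated gradient %.2f, intercept %.2f\n', p(1), p(2));
    plot(x, y, 'x', x, polyval(p, x), '-');
    legend('data', 'least-squares fit');

Nothing here is exotic, which is rather the point: for everyday use like this, the choice between MATLAB and Octave is close to invisible.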

For a comparison between these two in a more computationally technical sense (comparison of calculations, plotting and syntax with MATLAB), see this tech report, which also mentions another possible candidate - FreeMat. The report concludes that Octave does offer a viable alternative to MATLAB, with limitations found in both SciLab and FreeMat.

Why did I write a post on this? Mostly just because I want a record of these links for myself, but partly because hopefully someone else will find this useful too.