
Haskell and Music: Expressing Notes, Signals, and Symphonies in Haskell

Paul Hudak has written a new work on using Haskell for musical expression, entitled The Haskell School of Music — From Signals to Symphonies (Version 2.0).

Too often, books on computer programming end up as dry publications that merely describe the syntax and semantics of a programming language without conveying any of the ideas that inspired it.  Hudak, however, motivates the study of Haskell with concepts from other fields, playing to the style of expression unique to the language.

Like Hudak’s other work, The Haskell School of Expression — Learning Functional Programming through Multimedia [1] (nicknamed “SoE” in the Haskell community), this book uses concepts from another field to motivate the exploration of Haskell: whereas SoE draws on multimedia, SoM (a possible nickname for this new work) draws on music.

The new work is written in an inspirational manner.  On page xiv of the preface, Hudak describes the background for writing the book as follows:

I have also had a long, informal, yet passionate interest in music, being an amateur jazz pianist and having played in several bands over the years….

This current book is a rewrite of The Haskell School of Expression with a focus on computer music, based on, and greatly improving upon, the ideas in Haskore and HasSound.

Hudak used to play as a pianist in a jazz group based in New Haven, Connecticut, known as “Collectively Speaking.”  I remember having attended one of their performances in New Haven circa 1995 (Hudak was generous enough to greet me after the performance).

One of the interesting aspects of this work is that it is written from the perspective of appreciating music as an artistic medium for expression in Haskell.  On page 1 of chapter 1, Hudak writes:

Computers are everywhere. And so is music! Although some might think of the two as being at best distant relatives, in fact they share many deep properties. Music comes from the soul, and is inspired by the heart, yet it has the mathematical rigor of computers. Computers have mathematical rigor of course, yet the most creative ideas in mathematics and computer science come from the soul, just like music.

The cover image of the book is one of “Euterpe, the Greek Muse of Music” (page i).   Hudak describes the image on page 2 of chapter 1 as follows:

The core musical ideas are collected into a Haskell library called Euterpea. The name “Euterpea” is derived from “Euterpe,” who was one of the nine Greek muses, or goddesses of the arts, specifically the muse of music. A hypothetical picture of Euterpe graces the cover of this textbook.

In reading the above paragraph, I was somehow reminded of another book that, somewhat similarly, discusses the muse in the computer.  In the work The Muse in the Machine: Computerizing the Poetry of Human Thought [2], David Gelernter, also a professor of computer science at Yale University, discusses the roles of emotion and spirituality in cognitive science.  On pages 1-2 of chapter 1 of the work, Gelernter writes as follows:

It’s hard to conceive offhand of a less promising consumer innovation than a computer that comes factory equipped with “emotions”–but here’s a candidate: how about a “spiritual” computer? The spiritual computer spends its time pondering the mysteries of the universe, occasionally printing cryptic messages on its screen and otherwise ignoring the user altogether….

Who needs this kind of nonsense from a computer? Science does; in a broader sense we all do, because adding “emotions” to computers is the key to the biggest unsolved intellectual puzzle of our time: how thinking works.

However, whereas Gelernter’s outlook was based on poetry, Hudak’s is based on music.  Nevertheless, the two works do share a common thread in being concerned with a form of art in relation to computers.  In chapter 1, on page 4, Hudak writes:

In the area of computer music, there is another reason why clarity is important: namely, that the code often represents the author’s thought process, musical intent, and artistic choices. A conventional musical score does not say much about what the composer thought as she wrote the music, but a program often does. So when you write your programs, write them for others to see, and aim for elegance and beauty, just like the musical result that you desire.

The notion of the importance of art as a form of expression with respect to computers, although often ignored, is not new.  A few years ago, Hudak helped to create a Computing and the Arts major at Yale.  Prior to that, in 1997, a professor of cognitive science at Indiana University at Bloomington, Douglas R. Hofstadter, published a book discussing the problems of translating poetry and the implications of such issues for artificial intelligence, entitled Le Ton beau de Marot: In Praise of the Music of Language [3].

However, Hudak’s current work is unique in focusing on music as a mode of expression for functional programming in Haskell.  With regard to this topic, Hudak writes on page 2 of chapter 1 as follows:

A dimension that proves particularly useful with respect to a programming language is one that separates high-level musical concerns from low-level musical concerns. Since a “high-level” programming language—namely Haskell—is used to program at both of these musical levels, to avoid confusion the terms note level and signal level will be used in the musical dimension.

From what I have read so far (I have just started reading the book), Hudak apparently plans to use functional abstraction to distinguish between musical concerns at different levels.
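To make the note-level idea a bit more concrete, here is a hypothetical sketch of what a note-level representation might look like in Haskell. This is not Euterpea’s actual API (its names and types differ); it is only an illustration, under my own assumptions, of using algebraic data types and structural recursion to model music at the level of pitches and durations rather than signals:

```haskell
-- A hypothetical note-level sketch (not Euterpea's actual API):
-- a pitch is a pitch class paired with an octave, and a piece of
-- music is built from notes, rests, and sequential composition.
data PitchClass = C | D | E | F | G | A | B deriving (Show, Eq, Enum)

type Octave = Int
type Pitch  = (PitchClass, Octave)
type Dur    = Rational            -- duration in whole-note units

data Music = Note Dur Pitch
           | Rest Dur
           | Music :+: Music      -- play one piece, then the other
  deriving Show

-- Total duration of a piece, computed by structural recursion.
dur :: Music -> Dur
dur (Note d _)  = d
dur (Rest d)    = d
dur (m1 :+: m2) = dur m1 + dur m2

-- Three quarter notes outlining a C-major arpeggio.
cMajorArpeggio :: Music
cMajorArpeggio = Note (1/4) (C,4) :+: Note (1/4) (E,4) :+: Note (1/4) (G,4)
```

A signal-level representation would instead deal with, say, functions from time to amplitude; the appeal of Haskell here is that both levels can be expressed in the same language, with functional abstraction keeping them separate.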

On a personal note, I have always been interested in how to use computers as a medium of expression for emotion.  I occasionally write English haiku poetry in my spare time.  Many years ago, circa 1982, I attended a computer show at the Shinjuku Sankaku (Triangle) Building in Shinjuku, Tokyo, where I had the opportunity to witness an Apple ][ j-plus computer playing music via an organ-like musical keyboard.  When notes were played on the keyboard, colored bars would appear on a nearby monitor, increasing or decreasing in size depending on how long the keys were pressed.  I used to play the violin in elementary school, and, for some reason, I remember being mesmerized as I watched the colored bars move up and down in tune with the music played on the keyboard.  This may be one reason that I have continually been interested in various forms of multimedia as a mode of expression using computers.

Sadly, during my college years (1989-1994, including a 1-year LOA), I did not have much opportunity to explore the role of multimedia in expression.

Art is useful for training the mind to appreciate beauty in computer science.  While the syntax and semantics of Haskell are based on the beauty of concise functional expressions, beauty in writing an elegant computer program can be cultivated by exposure to haiku poetry, and beauty in musical expression can be cultivated by appreciating classical and jazz music.  Greater exposure to beauty in art can help to cultivate a richer sense of beauty in computer science, which can, in turn, help in creating more elegant and innovative ideas.

More importantly, art can be a great motivator.  Art is what takes the doldrums out of a mundane existence.  Without art, computer science becomes a mere skill.  Art lies at the base of innovation, which lies at the base of ingenuity.  Ultimately, we are all artists in some form.

[1] Hudak, Paul. The Haskell School of Expression — Learning Functional Programming through Multimedia. New York: Cambridge University Press, 2000. Print.

[2] Gelernter, David Hillel. The Muse in the Machine: Computerizing the Poetry of Human Thought. New York: Simon & Schuster, 1994. Print.

[3] Hofstadter, Douglas R. Le Ton beau de Marot: In Praise of the Music of Language. New York: Basic Books, 1997. Print.


Functional Objects and Object-oriented Functions: Purity vs. Impurity in Pursuing Programming Paradigms

In order to write better code, it is probably best not to restrict oneself to a single programming paradigm.

There are some people who believe that functional programming and message-passing-based programming are mutually exclusive, but I do not believe so.

A few years ago, I read a slide-show presentation by Matthias Felleisen of the PLT research group at Northeastern University, entitled “Functional Objects,” presented at the European Conference on Object-Oriented Programming (ECOOP) in Oslo, Norway, in 2004.

In slide 62/74 of the presentation, Felleisen quotes a portion of an e-mail message from Guy [L. Steele, Jr., according to a comment below by Ram], as follows:

Guy in email to me, cc’ed Gerry [Sussman?] on April 2, 2004:

“We decided to construct a toy implementation of an actor language so that we could play with it …

Then came a crucial discovery. Once we got the interpreter working correctly and had played with it for a while, writing small actors programs, we were astonished to discover that the program fragments in _apply_ that implemented function application and actor invocation were identical!”

What is interesting is that the actor model formed the basis for such message-passing programming languages as Smalltalk; yet, according to the presentation, the code implementing actor invocation in the toy language turned out to be identical to the code implementing function application.

This implies that there are common ideas present between functional programming and message-passing-based programming.

In the paper “Teaching Programming Languages in a Post-Linnaean Age,” presented at the 2008 SIGPLAN Workshop on Programming Language Curriculum (May 29-30, 2008, Cambridge, MA, USA), Shriram Krishnamurthi of Brown University adamantly opposes the classification of programming languages into paradigms. He writes (page 1, section 1, third paragraph):

Most books rigorously adhere to the sacred division of languages into “functional”,  “imperative”, “object-oriented”, and “logic” camps. I conjecture that this desire for taxonomy is an artifact of our science-envy from the early days of our discipline: a misguided attempt to follow the practice of science rather than its spirit.

Instead, he believes that programming languages should be considered as “aggregations of features” (page 2, section 4, first paragraph):

If languages are not defined by taxonomies, how are they constructed? They are aggregations of features. Rather than study extant languages as a whole, which conflates the essential with the accidental, it is more instructive to decompose them into constituent features, which in turn can be studied individually. The student then has a toolkit of features that they can re-compose per their needs.

However, there are also those who insist on purity within a specific paradigm. For example, in a post on the Haskell-Cafe Mailing List entitled “a regressive view of support for imperative programming in Haskell,” by Paul Hudak, dated “Wed Aug 8 14:20:39 EDT 2007,” Hudak writes:

All of the recent talk of support for imperative programming in Haskell makes me really nervous. To be honest, I’ve always been a bit uncomfortable even with monad syntax. Instead of:

[Code highlighting is that of the writer of this article.]
do x <- cmd1
   y <- cmd2
   return e

I was always perfectly happy with:

cmd1 >>= \x->
cmd2 >>= \y->
return e

Functions are in my comfort zone; syntax that hides them takes me out of my comfort zone.

In my opinion one of the key principles in the design of Haskell has been the insistence on purity. It is arguably what led the Haskell designers to “discover” the monadic solution to IO, and is more generally what inspired many researchers to “discover” purely functional solutions to many seemingly imperative problems. With references and mutable data structures and IO and who-knows-what-else to support the Imperative Way, this discovery process becomes stunted.

In other words, Hudak believes that insisting upon purity leads to “purely functional solutions to many seemingly imperative problems.”
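The equivalence Hudak alludes to is purely mechanical: do-notation is syntactic sugar that the compiler rewrites into applications of >>= and lambdas. As a small illustration (using the Maybe monad rather than IO, purely so the two forms are easy to compare; the function names are my own), the following two definitions desugar to the same program:

```haskell
-- The do form and the explicit >>= form are the same program;
-- the compiler rewrites the former into the latter.

addThreeDo :: Maybe Int -> Maybe Int -> Maybe Int
addThreeDo cmd1 cmd2 = do
  x <- cmd1
  y <- cmd2
  return (x + y)

addThreeBind :: Maybe Int -> Maybe Int -> Maybe Int
addThreeBind cmd1 cmd2 =
  cmd1 >>= \x ->
  cmd2 >>= \y ->
  return (x + y)
```

Hudak’s discomfort, then, is not with what the syntax does but with what it hides: the second form keeps the functions, and the plumbing between them, in plain sight.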

My own view is that the most natural form of expression for the problem should dictate the choice of tools. For example, Haskell is designed so that side effects are eschewed; to preserve referential transparency, any mutative behavior must be placed within a monad. Referential transparency helps to prevent bugs by making it easier to reason about programs and to prove their correctness. In other words, in Haskell, unlike in imperative languages, if a program compiles, in many cases it is likely to be correct. This is especially useful for writing functions that are naturally expressed as mathematical functions.
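A small sketch of this discipline, with illustrative names of my own choosing: a pure function sits beside a function that uses a mutable reference. The mutation is real, but it is confined to IO, and the type signature advertises that fact to the reader and the compiler alike:

```haskell
import Data.IORef

-- `square` is pure and referentially transparent: the same input
-- always yields the same output, with no observable effects.
square :: Int -> Int
square n = n * n

-- `countedSquares` uses a mutable counter; its IO type marks it
-- as effectful, keeping the impurity out of the pure core.
countedSquares :: [Int] -> IO (Int, [Int])
countedSquares xs = do
  counter <- newIORef (0 :: Int)    -- mutable cell, visible only within IO
  results <- mapM (\x -> modifyIORef counter (+ 1) >> return (square x)) xs
  n       <- readIORef counter
  return (n, results)
```

Nothing stops a Haskell programmer from mutating state; the point is that the type system forces the mutation to announce itself.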

However, the static type system of Haskell precludes full runtime reflection: the language does not support arbitrary modification of a running program's implementation at runtime. In certain cases, a program must be restarted in order for changes to take effect if those changes involve changes to types.

This may not be the optimal solution for certain types of problems. For example, one pet project that I dream of accomplishing someday is to create a virtual world, modeled on Earth, that supports evolution of the world itself. In other words, I would like to design a virtual world such that in the world, for instance, an entity of type Monkey can evolve into another entity of type HumanBeing. Furthermore, the virtual world itself should be able to evolve: For instance, it should be possible for the world to evolve from, say, being of type AquaticPlanet to type TerrestrialPlanet. Type AquaticPlanet would be more suited to the evolution of aquatic creatures, such as fish, whales, and dolphins, whereas type TerrestrialPlanet would be better for that of terrestrial creatures, such as elephants, monkeys, and human beings.

However, the model for this virtual world is essentially one of a vast collection of entities exchanging messages (food, by-products, oxygen, carbon dioxide, etc.) among one another. Furthermore, this world can change its own type at runtime. It may be more natural to express this world using a dynamically-typed reflective message-passing programming language, such as Smalltalk, than using a statically-typed referentially transparent purely functional programming language, such as Haskell.

Hence, my conclusion is that the most natural mode of expression for the problem probably should dictate the choice of tools for the solution. Problems that are most naturally expressed mathematically are probably best solved using a referentially transparent functional programming language. Problems that are most naturally expressed as a vast collection of entities exchanging messages among one another while continually changing their own types are probably best solved using a reflective message-passing-based programming language. Problems that lie somewhere in between can probably be solved using a more general-purpose language that is beautiful, elegant, concise, and supports multiple paradigms.

In other words, there is no panacea among programming languages. It is simply not possible to create a single language that is the optimal solution to every possible programming problem. The most natural expression of a problem should dictate the optimal tools for solving the problem.

Alice’s Adventures in Programming Language Theory Wonderland: A Short List of Computer Science/Programming/Mathematics Books in the Style of Lewis Carroll

Once upon a time, an MIT professor reputedly claimed that Alice’s Adventures in Wonderland was the best book on computer science.

While that opinion is subjective, there are a number of papers or publications related to computer science, programming, or mathematics that are written in a style that at least loosely resembles Carroll’s work.

In fact, one of the first documents of this kind that I encountered was a paper in the Sterling Memorial Library of my college: an academic treatise analyzing the role of logic in the humor of works by Lewis Carroll. The paper was written to fulfill an academic requirement (I believe it was a thesis for either a Master of Arts or a Master of Science degree, probably in philosophy), but was great fun to read.

Ever since encountering that paper, I have subconsciously been searching for similar papers and publications that were associated, in some manner, with the writing style of Lewis Carroll.

One of the first books that I encountered written in that style was Compared to What?: An Introduction to the Analysis of Algorithms, by Gregory J. E. Rawlins (for some reason, my comment on the book has apparently been borrowed, without my permission, by Google as their official review, without any credit; I am not sure whether to laugh or to be annoyed). This book included numerous quotations and illustrations from works by Carroll and, unlike many other textbooks on the design and analysis of algorithms, was written from the perspective that every problem at some point did not have a solution, describing the process that originally led up to each algorithm.

Compared to What?: An Introduction to the Analysis of Algorithms

I enjoyed this book enough to wrap it in kaleidoscopic wrapping paper, and kept it in my stock until eventually moving from New York to Tokyo (one day, I hope to acquire another new copy).

Another fun-to-read book that I encountered in this style was a book on the Scheme programming language, The Little Schemer, by Daniel P. Friedman and Matthias Felleisen.

The Little Schemer, 4th Edition

The Little Schemer is actually an updated version of a previous work, The Little LISPer, also by Daniel P. Friedman and Matthias Felleisen, which focused on recursion in Lisp. Scheme is a Lisp dialect with a cleaner syntax and such additions as hygienic macros and first-class continuations.

The Little LISPer

The cover has a colorful illustration of an elephant playing with toys, and the book is written in a dialog style, using food as a theme to teach how to think recursively, with Scheme as the tool. Even the chapters have been given such titles as “Toys,” “Cons the Magnificent,” and “Lambda the Ultimate.” At the end of each chapter are blank pages reserved for “jelly stains.”

I enjoyed this book sufficiently that, while living in Manhattan, I even occasionally took it with me to Kinko’s, reading it to keep the doldrums away from my studies of relational databases.

Although I was much more interested in Scheme than in Java, another related book that I also enjoyed was A Little Java, A Few Patterns, also by Matthias Felleisen and Daniel P. Friedman, which was written in a similar style. The cover has a colorful illustration of an elephant apparently serving as a waiter for two customers in a restaurant: a teacup and a Bactrian (double-humped) camel. Unlike other books on Java, this title also used that language as a tool to teach how to think recursively. The content of the work almost made Java feel like an alternative version of Scheme.

A Little Java, A Few Patterns

The Little Schemer has a sequel, The Seasoned Schemer, also by Daniel P. Friedman and Matthias Felleisen. The cover has a colorful illustration of an elephant sitting in the pilot’s seat of a biplane. This book takes off from where the prequel left off, and reportedly discusses such topics as memoization and my favorite, continuations, in the same style.

A logic programming counterpart to The Little Schemer is The Reasoned Schemer, by Daniel P. Friedman, William E. Byrd and Oleg Kiselyov. The cover has a colorful illustration of an elephant dressed in a jacket smoking a cigar as he comfortably sits in a large chair at a table on a moving train. The book discusses logic programming as a natural extension of functional programming, and assumes familiarity with the chapters up to “Lambda the Ultimate” in The Little Schemer.

The Reasoned Schemer

Of course, no account of books in this series would be complete without a reference to the work on functional programming, The Little MLer, again by Matthias Felleisen and Daniel P. Friedman. The cover has a colorful illustration of three elephants dressed as diplomats signing a treaty with three camels in front of a flag with the acronym “SML,” for “Standard ML.”

(One possible caveat: According to douglasslnc’s review, “The book does take a dramatic turn in complexity around page 81, just short of the halfway point….” I later confirmed this statement myself with a copy at a bookstore. Be careful not to wear your concentration out before you reach page 81.)

The Little MLer

Although not specifically concerned with computer science or programming, another book in the same vein is Flatland: A Romance of Many Dimensions, by Edwin Abbott Abbott. This book is especially interesting because it also doubles as a social critique of the society of Victorian England in 1884.

Flatland: A Romance of Many Dimensions

Although published 127 years ago, Flatland is worth special mention because it poignantly illustrates how a story can add significant value to a book on an otherwise potentially dry subject, such as mathematics, without detracting from its educational value. Victorian England in 1884 was a land and a time in which class distinctions created social barriers between people, and Abbott draws an analogy to class distinctions through the number of angles of a polygon: the more angles, the higher the social class of the polygon within its (two-dimensional) society.

Although the number of a polygon’s angles is initially set at its creation, through intensive training it is sometimes possible for a polygon to increase the number of its angles (similarly to how the poor occasionally become richer in modern society). The polygon of highest social class is the circle, which has an infinite number of angles. Polygons with enough angles to be, at first glance, indistinguishable from their infinite-angled brethren wind up not belonging anywhere: not being true circles, they are unwelcome among circles, yet they are envied by their lower-angled friends. Many of them wind up becoming revolutionaries.

Another reason for recommending this story is its usefulness in motivating study of mathematics for students with mathematics phobia. As I explained in my previous post, “An Idea on A Language Version of Project Euler: Programming for Poets,” prior to matriculation, I had experienced mathematics phobia. Back then, I used to roam through the Co-op Bookstore of my college, searching for a book to tie together English literature, in which, as a later aspiring poet, I felt more comfortable, and mathematics; this was one key title that I chanced upon. I found Abbott’s approach of tying together an introduction to mathematics with a social critique of Victorian England somewhat quaint, yet also quite fascinating. This book proved instrumental in convincing me that poets (and even aspiring poets) need not fear mathematics or find it boring; there was a different kind of beauty in mathematics from that in English poetry.

Another title based on Abbott’s work that I later also found fascinating was Flatterland: Like Flatland, Only More So, by Ian Stewart.

Flatterland: Like Flatland, Only More So

Based on Flatland, Flatterland is the story of Victoria Line and her adventures in non-Euclidean space. One day, Victoria chances upon the diary of her great-great-grandfather, A. Square (the protagonist of Flatland), hidden in the attic. This encounter prompts her to invite a sphere from Spaceland for a visit (see the Wikipedia entry), but instead she is visited by the Space Hopper, who then guides her through ten dimensions. Topics that Victoria comes upon in her travels include “fractal geometry, black holes, cosmic strings and quantum theory” (see the associated entry).

Before I wear myself out with this post, please allow me to add a reference to an interactive online programming tutorial. As many of you probably already know, most tutorials are not truly interactive; even when they are online, they merely display the text of their dead-tree brethren in digital form. In addition, most online tutorials have no story. However, a few years ago, I managed to find one exception, “Lists And Lists: An Interactive Tutorial,” by Andrew Plotkin. The tutorial takes the form of a text-based adventure game with programming exercises in Scheme (the genie in the tutorial seems to discuss the Lisp programming language, but the exercises are all in Scheme).

If anybody else has any additional recommendations for this list, please add them as comments to this post!

An Idea on A Language Version of Project Euler: Programming for Poets

Today, I chanced upon a Facebook comment by Brent Yorgey on another user’s entry about an article in the online version of The Atlantic, describing how one person, James Somers, initially failed at learning programming but later succeeded as a result of Project Euler (“How I Failed, Failed, and Finally Succeeded at Learning How to Code – James Somers – Technology – The Atlantic”).

(For those who aren’t aware, Project Euler is a site containing a series of mathematical/computer programming problems designed to be solved on a computer, subject to the guideline that an efficient solution to each problem should run in under one minute.)

For some reason, I was unable to find the comment later on Yorgey’s Facebook profile (probably because his entry was a comment posted to another user, instead of a post). Therefore, I created my own Facebook post citing the original article, as follows:

It would be nice to have a non-mathematical version of this site for poets. One type of project that I would be especially interested in would be a site that used an online version of an Alice’s Adventures in Wonderland-style interactive dialog, a la _The Little Schemer,_ to teach how to think recursively. This could be expanded to teaching how to think functionally, then how to think in message-passing terms, etc.

Then I remembered a description that I had read in the late 1980s, circa 1987, in a computer magazine (probably Byte), of a program for creating English sentences based on simple grammar rules and a lexicon, and added a (rather lengthy) comment to my post:

More specifically, one idea would be to create a site with challenges for manipulating words to form sentences of various types. I remember having once read a description of a program that used a set of simple grammatical rules combined with a database of words to create sentences, some of which were very funny. For example, English has an SVO (Subject, Verb, Object) sentence order. Using such a setup, one might create a program that wrote nonsense sentences such as, “John waves music.” Adding adjectives could then create “John waves green music.” Adding adverbs could then create “John waves green music hungrily.” Adding subordinate clauses could then create “John waves green music hungrily because Jill eats hungry music sleepily.” Such nonsense sentences could form a basis for, say, a fun logic programming assignment. Then creating two objects to converse with each other using such sentences could form a basis for a message-passing assignment. There is no reason that sentences could not substitute for functions in a fun programming challenge.

If I were to create such a site, I would give challenges such as the following:

“John Aldebaran and Jill Betelgeuse are aliens from another planet. They learn by example, and want to study human communication by reading English sentences. Create a program that outputs simple sentences, using the following simple rule:

Sentence = Subject + Predicate,
where Predicate = Verb + Object

Try to create a program that models realistic human communication. The better your program, the more John and Jill will learn.”

The benefit of this type of assignment is that it also appeals to poets who feel frightened by mathematics, because on the surface, it does not look like mathematics at all. A challenge is to create assignments that teach structured thinking to mathematically-frightened poets without causing them to realize that they are actually learning a different form of mathematics.
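The SVO rule in the challenge above translates almost directly into a list comprehension. As a minimal sketch (the word lists here are illustrative, not part of any actual assignment), a first solution to the John-and-Jill challenge might look like this:

```haskell
-- A sketch of the SVO challenge: generate simple sentences from
-- the rule  Sentence = Subject + Predicate,
-- where     Predicate = Verb + Object.
-- The word lists are purely illustrative.
subjects, verbs, objects :: [String]
subjects = ["John", "Jill"]
verbs    = ["waves", "eats"]
objects  = ["music", "pizza"]

-- All sentences the grammar can produce, one per subject/verb/object
-- combination.
sentences :: [String]
sentences =
  [ unwords [s, v, o] ++ "."
  | s <- subjects, v <- verbs, o <- objects ]
```

A student could then refine the word lists, or add adjectives and adverbs as extra generators in the comprehension, to make John and Jill’s reading material progressively more “human.”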

The problem with a mathematical programming site such as Project Euler is that it doesn’t attract most poets. I used to study English poetry in college, and wrote a number of English haiku that my writing professor, Steven Davis, enjoyed enough to print out and post on his wall at home (he actually showed the wall to me). He even walked with me once to the Sterling Memorial Library at my college to show me the microfiche devices, then used to browse profiles of graduate programs, in order to recommend that I apply to a graduate program in English. While I do not claim to be a poet, I have always been interested in writing poetry, and consider myself a poet-aspirant.

From my experience in speaking with other students in the English Department at my college and, later, with my English students at Berlitz (where I used to work as an English teacher before becoming a translator), the majority of English language students that I have encountered do not seem to enjoy studying mathematics in general. The problem is that most of them are Myers-Briggs “Feeler” types (as opposed to “Thinker” types), who tend to seek emotional meaning in everything, and it is difficult for them to find emotional expression per se in most mathematics. As a result, most of them seem to consider mathematics, and any branch of study that does not involve emotional expression, in general, to be a “dry” field. (Prior to matriculation, I myself had mathematics phobia, but thanks to Andrew Barnes, an extremely enthusiastic then-student of mathematics, I was able to conquer this phobia as a result of a self-paced study focusing on writing proofs and thinking theoretically.)

However, there should be a way to motivate such students to study programming. One possible solution is to use language, instead of mathematics, as a tool.

In fact, the idea of using a language model for a programming challenge is not new. In another comment on Facebook, I continued:

In fact, the idea of word play as a challenge in programming is nothing new; Douglas Hofstadter used many language-based examples in his work, _Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought_. Hofstadter even wrote a book on poetic word-play, _Le Ton Beau De Marot: In Praise of The Music of Language_.

Personally, I find it a pity that most exercises on programming do not explore word-play; I once had a fellow student in a logic class explain that she disliked mathematics, but liked logic (even though, in fact, mathematics itself is a form of logic). Word-play can be considered a form of logic, too. Exploring word-play in programming could attract more poets to the field.

After further thought, I remembered that I had used to study elements of another language that seemed quite suited for modeling natural language: Prolog. For example, according to the site “Introduction to Prolog – The Mind Project,” modeling certain types of English sentences in Prolog can be quite straightforward:

Let’s tell the computer some facts about a family we know. We’ll put the Prolog statement after the English statement.

English                               Prolog
John is the father of Susan.          father(john,susan).
John is the husband of Martha.        husband(john,martha).
John eats pizza.                      eats(john,pizza).
Susan eats pizza.                     eats(susan,pizza).
Martha eats pizza.                    eats(martha,pizza).
Susan eats oranges.                   eats(susan,oranges).
John bought pizza for Martha.         bought(john,pizza,martha).
Susan is tired.                       tired(susan).

The problem with Prolog is that the language is almost a DSL (Domain-Specific Language) designed for modeling logic relationships. In fact, according to the Wikipedia entry for the language,

The name “Prolog” was chosen by Philippe Roussel as an abbreviation for programmation en logique (French for programming in logic).

While many consider Prolog to be, in fact, a general-purpose programming language, the language tends to be much harder to use than certain functional or pseudo-functional programming languages for modeling mathematical functions. For example, consider writing a function for computing the Fibonacci sequence:

A naive Haskell version (identical to the version at “The Fibonacci sequence – HaskellWiki,” except with an added type annotation):

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

A naive Scheme version (created by the author):

(define (fibonacci n)
  (cond ((= n 0) 0)
	((= n 1) 1)
	(else (+ (fibonacci (- n 2)) (fibonacci (- n 1))))))

A naive Prolog version (see “Prolog Guide – Metainterpreters”):

fibo_td(0,0).
fibo_td(1,1).
fibo_td(N,F) :-
  N>1, N1 is N-1, N2 is N-2,
  fibo_td(N1,F1), fibo_td(N2,F2),
  F is F1+F2.

While this is a matter of personal taste, in terms of readability, I find the Haskell version the easiest to read, the Scheme version the second easiest, and the Prolog version the hardest. Therefore, I do not find that Prolog offers any readability advantage for at least some kinds of mathematical functions.
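Incidentally, readability and efficiency can go together in Haskell. The same HaskellWiki page also gives the well-known linear-time definition using a lazy, corecursively defined list (renamed `fibFast` here to distinguish it from the naive version above):

```haskell
-- The infinite list of Fibonacci numbers, defined corecursively:
-- each element after the first two is the sum of the two before it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Indexing into the lazy list yields an O(n) function, versus the
-- exponential running time of the naive doubly-recursive version.
fibFast :: Int -> Integer
fibFast n = fibs !! n
```

For instance, `take 7 fibs` evaluates to `[0,1,1,2,3,5,8]`.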

However, Prolog does seem suited for modeling English sentences.

It may be an interesting exercise to create a version of Project Euler to substitute various kinds of English sentences or other language constructs for mathematical functions. Such a site could prove effective in attracting poets (and other poet-aspirants), as well as students with mathematics phobia in general, to programming.

Evolving Flying Fish, Running Birds, and Swimming Lizards: Resolving Paradigm Shifts

Approximately two weeks ago, on Wednesday, May 25, 2011, I posted the following question on the Haskell Beginners Mailing List:

Subject: question for advice on packages for a retro-style pocket money ledger program

For some time, I have been considering rewriting my original pocket money ledger program in Haskell, but am not sure as to which packages would be most useful. My original pocket money ledger program was first written in circa 1983 in N80-BASIC (a JIS-compatible ROM-based line BASIC) on an NEC PC-8001 mkII [1].

The main difficulty is that I would like to preserve the original look and feel of my original program while still rewriting it in Haskell. However, my original program ran in an age before any kind of Macintosh-style or Windows-style GUI had become common, and used basic built-in ROM-based graphics and sound commands to draw a double-line colored graphical border on a centered-colored-text screen, and saved files to a floppy disk drive (floppy disks were actually considered advanced for that age, since most personal computers then used a cassette tape drive for external file storage).

As a result, I am uncertain as to which packages would be useful for preserving the original look and feel of my program. At minimum, I would like my program to exhibit the following behavior:

1. Start out by drawing a lime-green-colored double two-pixel wide box surrounding the screen, with the title and author name displayed as separately-colored text centered in the screen.

2. If possible, play a suitable music file while on this screen. Also, display an option to mute this music; if the music is muted once, save this setting for future runs of the program, so that the program will start with the music muted as the default option. Also, display an option to control the volume of this music, and, if possible, a separate option to choose a different tune to play. The tunes should be selectable from an expandable menu.

3. If possible, include a suitable screen saver to be displayed automatically, separate from the GUI screen saver, upon a user-defined period in which there has been no user input. The screen saver should be configurable in a separate configuration menu.

4. Upon the user hitting the Return key, stop playing the music (if playing), and move to the main menu screen.

5. The options displayed in the menu screen should include, at minimum, the following:

a. Create a new ledger book.

b. Load a ledger book (preferably from floppy disk).

c. Add an entry to the ledger book.

d. Delete an entry from the ledger book.

e. Erase the ledger book.

f. Save the current ledger book (preferably to floppy disk).

g. Tabulate and display income and expenditures (leads to a separate screen requesting a time period for tabulation). (The income and expenditures should be displayed in detail both as numerical charts and as colored graphs in a preferred form (pie chart, bar graph, etc.).)

h. Set configuration options (leads to a separate screen for configuring default background music, default volume, default application screen saver, default theme, default fonts, default menu behavior, default keyboard remappings, default screen height/width ratio, etc.).

i. Exit the application.

6. For each option (except for option i), move to a separate sub-menu screen. When input/output and processing have been completed on the separate sub-menu screen, return to the main menu screen.

Most of the functionality of this program is related to some form of input and output: colored formatted combined graphical and textual title screen with background music, manual data entry by a user, saving files to a floppy disk, reading files off a floppy disk, deleting files from a floppy disk, configuring various default music/volume/screen saver/theme/font/menu behavior/keyboard remapping/screen height and width ratio options, and combined numerical and graphical representation of statistical ledger information. Most of the computation involves only simple arithmetic (mainly addition and subtraction, with perhaps some multiplication and/or division, and no matrix manipulation).

The main point of this program is that program usage should be interactive, and not require a separately entered file. The idea is that the program will play the role of an interactive personal ledger assistant.

However, I am not sure as to how to implement this program in Haskell easily. Because most of the program is mainly concerned with side effects, it is difficult to write it easily while preserving referential transparency. This type of program seems relatively straightforward to write in N80-BASIC (which is no longer available in the original version), but is relatively less trivial in a purely functional programming language such as Haskell. However, I am tired of spaghetti code, and although writing each line of code in N80-BASIC is trivial, managing control flow is not. It is very difficult to manage control flow without writing spaghetti code in a dialect of line BASIC. However, most of the associated graphics and sound commands are proprietary and implementation-dependent, and I am not sure how to rewrite that part of the functionality in an implementation-independent language without spending lots of time on API-related issues.

— Benjamin L. Russell

[1] _OLD-COMPUTERS.COM Museum ~ NEC PC 8001 MK 2_. NYI (New York Internet). n.d. Web. May 25, 2011.

Unfortunately, so far, nobody has replied, and I have a nagging suspicion that nobody will. The problem is that my question concerns translating a program originally written in N80-BASIC into Haskell, a language usually used for entirely different purposes. The original program used a number of language-specific, platform-dependent features (such as superimposing colored line graphics around the borders of a textual screen, and reading from and saving to a floppy disk).

For some languages, such features translate into corresponding features in the target language. However, Haskell is a purely functional programming language; as such, it is designed to eschew side effects, while my original program focused almost entirely on side effects. Therefore, translating my solution actually involves a paradigm shift.
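That said, the standard Haskell answer to such side-effect-heavy programs is to separate a pure core from a thin, impure shell. As a minimal sketch of what a ledger menu loop might look like (the `Entry` and `Ledger` types and the menu options are my own simplifications, ignoring the graphics, sound, and floppy-disk requirements entirely):

```haskell
-- Pure core: the ledger is just a list of (description, amount)
-- entries, so all of the bookkeeping logic stays referentially
-- transparent and easy to test.
type Entry  = (String, Integer)
type Ledger = [Entry]

addEntry :: Entry -> Ledger -> Ledger
addEntry = (:)

balance :: Ledger -> Integer
balance = sum . map snd

-- Impure shell: a tiny menu loop that threads the ledger through
-- recursive calls instead of mutating global state.
menu :: Ledger -> IO ()
menu ledger = do
  putStrLn "1) add entry  2) show balance  3) quit"
  choice <- getLine
  case choice of
    "1" -> do
      putStrLn "description:"
      d <- getLine
      putStrLn "amount:"
      a <- readLn
      menu (addEntry (d, a) ledger)
    "2" -> print (balance ledger) >> menu ledger
    "3" -> putStrLn "goodbye"
    _   -> menu ledger
```

Running `menu []` in GHCi starts the loop with an empty ledger. The spaghetti-code problem of line BASIC disappears here almost for free: control flow is just recursion, and the “state” is an explicit argument.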

This is not the first time that I have encountered a paradigm shift. In an earlier entry on this blog, entitled “Paradigm Shift: Back to the Past, and No Small Talk About Smalltalk,” dated August 25, 2009, I had to deal with another paradigm shift, that one involving a transition from the functional paradigm of Haskell to the message-passing paradigm of Smalltalk (a problem with which I am still struggling).

The problem is that most programmers who feel comfortable in one paradigm do not seem very eager to deal with paradigm shifts. For example, I have not read many writings by hardcore C++ programmers who enjoy translating C++ systems programming code into referentially transparent Haskell code. Neither have I read many papers by Prolog programmers about translating concurrent thread-manipulation Erlang code into pure Prolog code. Similarly, I have not read many papers by Smalltalk programmers about translating referentially transparent implementations of arrows in Haskell into Smalltalk code for which proofs of correctness can be written.

Most fish don’t seem to enjoy flying. Most birds don’t seem to enjoy running. Most lizards don’t seem to enjoy swimming.

However, exceptions exist: Flying fish fly, ostriches run, and marine iguanas swim. Evolution is possible.

The same can hold true, with varying degrees of adaptation, for programming languages. It is possible to push the proverbial cart from its side across town without ever using its wheels.

I, for one, think that there is a novel dimension of exhilaration associated with conquering a paradigm shift, one of a different quality from that achieved by solving a programming problem. I have encountered a number of paradigm shifts in my life: imperative BASIC to pseudo-functional Scheme, pseudo-functional Scheme to functional Haskell, and functional Haskell to message-passing Smalltalk. Such shifts resemble the culture shocks that I encountered when first moving from California to Tokyo, then from Tokyo to New Haven, and finally from New York back to Tokyo again.

To draw an analogy:

N80-BASIC:Haskell:Smalltalk::Tokyo:New Haven:Palo Alto

For some reason, I have never felt truly comfortable in any programming language. When programming in N80-BASIC, I struggled with spaghetti code. When programming in Scheme, I felt frustrated at not knowing how to draw colored superimposed lines around colored text on a screen specifically designed to allow superposition of graphics onto text. When programming in Haskell, I felt frustrated at not knowing how to write reflective programs that did not compute any value and that dealt only with side effects. When programming in Smalltalk, I felt frustrated at not knowing how to write referentially transparent functions for computing arrows.

What I would really wish for is a programming language that could do ALL OF THE ABOVE.

For the time being, I would be willing to settle for an easy way to translate my original N80-BASIC personal checkbook program into something that is algorithmic, referentially transparent, reflective, cross-platform portable, equipped with a rich set of libraries, and easily makes use of platform-specific features: a combination of Scheme, Haskell, Smalltalk, Clojure, and N80-BASIC.

Essentially, I need the artificial, programming language counterpart to a natural duck-billed platypus. Any ideas?

Paul Gauguin Couldn’t Paint, But He Still Became A Great Painter

Today, I discovered a blog post, “Soloists vs. The Choir,” by Andy Leonard, responding to an entry by Joel Spolsky, founder of Fog Creek Software, on the correlation (or rather, the lack thereof) between time spent programming and the resulting quality. Leonard wrote:

Is there really that great a difference between good and great programmers?

Joel cites some impressive statistics gleaned from Yale Professor Stanley Eisenstat, who teaches a software development class. The scatter plot says it all.

As Joel notes, “There’s just nothing to see here, and that’s the point. The quality of the work and the amount of time spent are simply uncorrelated.”

While that may be so, companies do not determine what people want to do; people do. This kind of reasoning leads to the conclusion that only star programmers should program. By that kind of reasoning, only great writers should write, only great translators should translate, and only great painters should paint. Anomalies such as Paul Gauguin, who became a famous painter despite lacking any aptitude for painting, are simply ignored.

The problem with this kind of reasoning is that it equates people with their skills, but completely ignores their interests. Not all people are exceptionally talented at what they are interested in, nor are all people necessarily interested in what they are talented at; some people are relatively talented in a subject that is not profitable enough to make a living from (such as painting or poetry).

However, by this kind of reasoning, what are painters and poets, for instance, supposed to do? Painting and poetry are usually not profitable enough to earn a living from; yet if people are only supposed to do that at which they are exceptionally talented, then people whose abilities lie in unprofitable fields such as painting or poetry should just starve to death, since only skills matter, and those skills are unprofitable.

This is the kind of reasoning that causes certain professors (Professor Stanley Eisenstat being one example with whom I am personally familiar, since I took a class (CS 323a: “Introduction to Systems Programming,” fall 1993) under him) to try to “weed out” students who aren’t exceptionally gifted in programming. Granted, he wasn’t alone; Alan Perlis (also a Yale computer “science” professor) was reportedly much, much more severe, and reportedly once gave, as a first graduate programming assignment, five non-trivial problems (including writing an artificial intelligence program to solve the “Eight Queens Puzzle”), each to be solved in five different languages, all in one week, only to announce when the assignment was due that the whole thing had been a deliberate joke, and that one solution to one problem in one language was sufficient. On a scale of course difficulty from 0 to 100, with higher numbers denoting greater difficulty and Perlis at 100, I would probably place Eisenstat at about 15, especially for his class in fall 1993 (he once mentioned in his systems programming class that one of his later, relatively difficult assignments, his infamous “encode-decode” assignment, used to be his second assignment). Nevertheless, among the professors under whom I took computer “science” courses in college (I put “science” in quotes because computer “science” is not really a science at all, but a procedural epistemology), Eisenstat ranked among the toughest taskmasters.

Well, should only the most gifted be allowed to pursue their interests? This solution only works if all people have at least one area in which they are especially gifted (people with well-rounded but average-level abilities may not), if all work areas are equally profitable, and if all people are interested in what they are gifted at. However, this is not true.

Let’s consider where this style of reasoning leads. Assume that there exists a society, Utopia, where only people with exceptional skills are allowed to work in their areas of special ability. I.e., only great programmers are allowed to program, only great translators are allowed to translate, only great painters are allowed to paint, and so on. All others are strongly discouraged from working. What happens?

Well, most artifacts become works of art. Mostly great programs are written, mostly great translations are produced, mostly great paintings are painted, and so forth. I say “mostly,” not “only,” because in practice, even great workers occasionally produce poor work. Even a genius has an occasional off day.

So far, so good, it seems. What else? Well, eventually all schoolchildren are classified, while still young, into classes corresponding to their abilities. Their interests are simply ignored.

Since interests no longer matter, anything unrelated to skills is also strongly discouraged. Anime is outlawed. Chocolate is outlawed. Games, except for certain puzzle and mathematical games, are outlawed. Movies are outlawed. None of these are essential to increasing productivity directly, are they? The big companies decide that we don’t need them, so they lobby Congress to eliminate them. Congress wants the funds from the lobbyists, so laws are passed to outlaw them. The Second Prohibition begins.

Hey, the more productivity, the better, and the more focus on that productivity, the better, right? The big companies are still not satisfied with productivity. They need more productivity, more money. “Let’s make the people concentrate more on their work,” they say. Poetry is outlawed. Painting is outlawed. Music is outlawed. Unproductive entertainment in general is outlawed. After all, if it isn’t profitable, it isn’t important, right? Productivity and profit are all that matter, right?


WRONG. Something is missing here: the value of personal interests. People do not usually become interested in something because they are skilled or gifted at it; they become interested in it because it is *fun*, because they simply like it. Such interests occasionally lead to works of genius from certain people, but those works usually originate in some form of initial interest. Without initial interest, works of genius do not usually arise.

The point is that personal interests matter. Preferences matter. Likes and dislikes matter. Allowing a gifted person to produce gifted work is one thing; preventing less gifted people from even trying is quite another. While I do believe that gifted people should be encouraged to develop their skills, I do *not* believe that those who are less gifted should be discouraged. Some people develop skills late in life. Others come up with ways around problems. If one cannot program well in C++ in a software company, one might be able to program in Haskell, Scheme, or Smalltalk alone on a project uniquely suited to the particular programming language in an environment where most of the software components have already been designed by other programmers and one is paying one’s own salary from another source of income.

It is one thing to tell someone, “You are a great C++ programmer! You are encouraged to use your skills!” It is quite another thing to tell someone, “You really suck [excuse my French] at C++ programming! You definitely should not program in any language anywhere!” Excuse me? Any language anywhere? What if the person wants to work alone on a personal project using a language with built-in libraries uniquely suited to the language, and has a separate source of income? What if the person can’t write a program worth a dime in C++, but programs relatively decently in, say, Scheme, Haskell, or Smalltalk? I once met a first-order logic student who hated mathematics, but loved logic. I later met a different person who felt comfortable at programming in C, but just couldn’t program in C++. What if the person doesn’t feel comfortable in programming in C++, but is a genius at, say, Common Lisp, and eventually sells their business worth millions of dollars to the Yahoo! company?

(This actually happened once; see “The Old Joel on Software Forum – Yahoo Stores rewritten from Lisp to C++ and Perl” [curiously, this link is posted on Joel’s own site!]. According to the article “Exploring e-commerce for innovative products,” by Piotr Wozniak, the site sold for 45 million dollars.)

The idea that *the choice of the programming language matters* is not new. For example, Paul Graham deliberately chose Common Lisp for ViaWeb, the site he eventually sold to Yahoo! as “Yahoo Stores” for forty-five million dollars. In his essay “Beating the Averages,” he writes,

So you could say that using Lisp was an experiment. Our hypothesis was that if we wrote our software in Lisp, we’d be able to get features done faster than our competitors, and also to do things in our software that they couldn’t do. And because Lisp was so high-level, we wouldn’t need a big development team, so our costs would be lower. If this were so, we could offer a better product for less money, and still make a profit. We would end up getting all the users, and our competitors would get none, and eventually go out of business. That was what we hoped would happen, anyway.

What were the results of this experiment? Somewhat surprisingly, it worked. We eventually had many competitors, on the order of twenty to thirty of them, but none of their software could compete with ours. We had a wysiwyg online store builder that ran on the server and yet felt like a desktop application. Our competitors had cgi scripts. And we were always far ahead of them in features. Sometimes, in desperation, competitors would try to introduce features that we didn’t have. But with Lisp our development cycle was so fast that we could sometimes duplicate a new feature within a day or two of a competitor announcing it in a press release. By the time journalists covering the press release got round to calling us, we would have the new feature too.

It must have seemed to our competitors that we had some kind of secret weapon– that we were decoding their Enigma traffic or something. In fact we did have a secret weapon, but it was simpler than they realized. No one was leaking news of their features to us. We were just able to develop software faster than anyone thought possible.

Some might argue, “Well, Paul Graham was a genius, and what was important was that he just happened to be a genius, not that he chose such a language as Common Lisp. What is really important is that the programmer be a star programmer, not that the language be similar to Lisp.”

However, the choice of the programming language can be a decisive factor in whether the person becomes interested in programming in the first place. In the excerpt “High School Computing: The Inside Story,” Natasha M. Chen writes,

In the four months it took me to complete my course in Scheme, I learned more about computer programming than I had in my two years of Pascal. In less than five minutes after I began reading the text, almost everything I learned more than three years previously in our aborted Logo course came back to me. Five minutes, not the two days it took to recover from just one summer away from Pascal. There were hardly any rules of syntax to remember. Furthermore, throughout the entire four months, I never touched a computer. The ease of learning and using Scheme gave me such confidence in the programs I wrote that I didn’t feel the need for the security of a compiler to check my work.

Let’s compute a little: it took Chen four months of studying Scheme (another dialect of Lisp) to learn more about computer programming than she had learned in two years of Pascal. Two years is twenty-four months, or six times four months. So choosing Scheme over Pascal yielded at least a six-fold increase in learning speed.

Furthermore, the choice of Scheme over Pascal influenced her decision to return to learning programming. She writes,

After my sixth grade BASIC experience, I never wanted to take another computer course again. Of course, when you are eleven years old, G.P.A. and class rank don’t mean much to you. But by the time I was about to enter my junior year in high school, I started thinking about those things … and college … and the classes I needed to take. To avoid another BASIC nightmare, I decided to bypass Computer Programming I (BASIC) and go straight into Computer Programming II (Pascal). Pascal was different enough from BASIC to make me think that it had to be better. I found out that the improvement was far less than I had hoped. We jumped right into the syntax of Pascal: program (input, output), begin-end, etc. Even after two years of studying Pascal, I still can’t remember all the rules.

She continues,

As a senior, I had a study hall period that I sometimes spent in my math classroom doing homework. It was on one of these days that I happened to overhear my math teachers talking about Scheme. I was already tearing my hair out in my Pascal class trying to learn something for the upcoming A.P. exam—in fact, all I was learning was the page number of the reference section in our textbook, which I frequently consulted to see whether type declarations or variable declarations came first or to re-check how to declare a record for a linked list. Enticed by what I heard, I willingly gave up my study hall to come in four days of every week to learn Scheme on my own for no credit at all, using `The Schemer’s Guide’ [1]. My reward was that I regained the enthusiasm and interest I thought I had lost six years earlier.

So Scheme enabled her to regain the “enthusiasm and interest” in programming that BASIC had almost caused her to lose six years earlier.

I myself had a similar experience, albeit with different programming languages. I first started out in programming approximately six years before matriculation, circa 1983, in N80-BASIC on an NEC PC-8001 mkII personal computer. In college, similarly to Chen, I had the misfortune of learning Pascal in an introductory course intended for non-majors (I had not yet decided to major in computer “science”). One of the assignments required writing pointers in Pascal. That felt like doing gymnastics in a straitjacket: I had to spend so much time and effort on syntax that I couldn’t concentrate on the algorithm.

Just to illustrate that different languages suit different programmers, I had a different experience from Graham’s with Common Lisp. My first course for computer “science” majors, in fall 1991, required programming in both Common Lisp and Scheme. The professor, Drew McDermott, actually gave us a handout entitled “Common Lisp for Schemers” outlining differences between the two languages. Unlike Graham, I felt uncomfortable in Common Lisp, because using the language required looking up almost every function in a library reference book over a thousand pages long. I read slowly and have a poor memory, so consulting that reference so frequently forced me to worry more about syntax than about the algorithm. Scheme, however, was different: the entire R5RS specification fit in 50 pages, and there was no hefty library reference book. Programming in Scheme was fun. About three years later, when I audited a later version of the course under a different professor as a refresher, a TA actually commented that I had written a better program for one assignment than he himself had.

The choice of the programming language does matter. In fact, it can make a crucial difference, not just in programming efficiency, but in basic motivation as well. People tend to do better at what they enjoy doing.

To sum up: yes, Professor Stanley Eisenstat, I do agree that there is a vast, insurmountable difference between good and great programmers. I also believe that there is a vast, insurmountable difference between using a programming language in which one feels comfortable and one in which one doesn’t, that professors have no business telling students what they should be interested in, and that there is no correlation between interest level and ability level. Hey, Paul Gauguin couldn’t paint, but he still became famous as a great painter. So it is with great artists.

Category Theory for the Mathematically Impaired: An Outline of A Short Reading List for “Mathematically-impaired Computer Scientists Trying to Learn Category Theory”

While reading through this week’s Issue 160 – December 1, 2010, of the “Haskell Weekly News,” by Daniel Santa Cruz, on the Haskell-Cafe mailing list, I came across the following interesting blog entry describing a reading list for “mathematically-impaired computer scientists trying to learn category theory”:

Category theory for Haskell programmers | Geoff Hulette

In this list, Hulette describes himself as “a computer scientist and Haskell programmer without a strong background in advanced math.”  Now, I am always interested in writings on category theory for students of Haskell who are not necessarily mathematicians.

He started out by recommending Brent Yorgey’s “Typeclassopedia.”  While I can understand the motivation for beginning a reading list with a description of type classes, and have much respect for Brent Yorgey, I would have preferred to see this reference later in the list, with the list instead beginning with a more elementary introduction to category theory per se, perhaps along the lines of Conceptual Mathematics: A First Introduction to Categories, by F. William Lawvere and Stephen H. Schanuel (of which I have a copy of the first edition).  The main problems with Lawvere’s book are that it is not related to Haskell, and that it proceeds somewhat too slowly (for me).  Hulette does provide a reference to this book, but only at the end of his reading list.

Although Hulette does provide a reference to Benjamin Pierce’s Basic Category Theory for Computer Scientists, he neglects to provide one for the book Category Theory, by Steve Awodey.  Awodey’s book has been highly recommended as an introductory book for category theory, and probably should be included in any such list.

Hulette does, however, provide an interesting reference to a collection of tutorial videos on category theory by Eugenia Cheng.  I had not known of these videos before, and it can often be helpful to supplement written material with audio/visual material.

Hulette also recommends browsing through the category-extras package on Hackage, and reading the type annotations to provide intuition.

But then he includes a suggestion to look at Saunders Mac Lane’s Categories for the Working Mathematician.  Although frequently mentioned as the definitive text on category theory, the publication, as the title indicates, is intended for “the working mathematician”; as such, it may not really be suitable for “mathematically-impaired computer scientists.”  I deliberately left that title out of my own reading list, and was surprised to find it included in a reading list for non-mathematicians.

As of this writing, there have been nine comments added to Hulette’s post, some of which provide valuable references.  I would recommend taking a look at Hulette’s post and the follow-up comments.

If anybody else has any other suggestions on writings for non-mathematicians learning category theory in order to help in learning Haskell, please feel free to leave a comment!

Addendum of Monday, December 6, 2010, at 01:42 AM:

The user solrize has offered the following addition to this list:

Haskell/Category theory – Wikibooks, collection of open-content textbooks

The above Wikibooks site provides an “intuitive feel for what the concepts of category theory are and how they relate to Haskell.”  Topics described include an introduction to categories, followed by descriptions of functors, monads, and the monad laws and their importance.  Exercises are included, and drawings illustrate category theory concepts.  The site introduces Hask, the Haskell category, and later describes functors on Hask, followed by a brief translation of categorical concepts into Haskell.

Of special interest in this site is the description of the origin of Haskell monads in category theory.  Monads tend to be rather difficult to understand when approached in Haskell; however, category theory can help to clarify the concept.  In particular,  the site ties together the category theoretical definition of a monad with the corresponding Haskell definition as follows (quoted from the site):

A monad is a functor M : C \to C, along with two morphisms for every object X in C:

  • \mathit{unit}^M_X : X \to M(X)
  • \mathit{join}^M_X : M(M(X)) \to M(X)

When the monad under discussion is obvious, we’ll leave out the M superscript for these functions and just talk about \mathit{unit}_X and \mathit{join}_X for some X.

Let’s see how this translates to the Haskell typeclass Monad, then.

class Functor m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b

The class constraint of Functor m ensures that we already have the functor structure: a mapping of objects and of morphisms. return is the (polymorphic) analogue to \mathit{unit}_X for any X. But we have a problem. Although return’s type looks quite similar to that of unit, (>>=) bears no resemblance to join. The monad function join :: Monad m => m (m a) -> m a does however look quite similar. Indeed, we can recover join and (>>=) from each other:

join :: Monad m => m (m a) -> m a
join x = x >>= id

(>>=) :: Monad m => m a -> (a -> m b) -> m b
x >>= f = join (fmap f x)

So specifying a monad’s return and join is equivalent to specifying its return and (>>=). It just turns out that the normal way of defining a monad in category theory is to give unit and join, whereas Haskell programmers like to give return and (>>=). Often, the categorical way makes more sense. Any time you have some kind of structure M and a natural way of taking any object X into M(X), as well as a way of taking M(M(X)) into M(X), you probably have a monad.
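As a small illustration of that last point (this sketch is my own, not from the quoted site): the list type constructor fits the pattern exactly, with a singleton-forming function playing the role of unit and concat playing the role of join:

```haskell
-- Lists form a monad in exactly this sense: unit wraps a value in a
-- singleton list, and join (here, concat) flattens one layer of nesting.
unitList :: a -> [a]
unitList x = [x]

joinList :: [[a]] -> [a]
joinList = concat

main :: IO ()
main = do
  print (unitList (5 :: Int))                  -- [5]
  print (joinList [[1, 2], [3], [4, 5 :: Int]]) -- [1,2,3,4,5]
```

The names unitList and joinList are mine; in the standard library these correspond to return (or pure) and Control.Monad.join specialized to lists.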

This kind of correspondence between category theoretical monads and Haskell monads helps to clarify the concept of a monad in Haskell. Whereas

class Functor m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b

by itself can be confusing,

  • \mathit{unit}^M_X : X \to M(X)
  • \mathit{join}^M_X : M(M(X)) \to M(X)
together with

join :: Monad m => m (m a) -> m a
join x = x >>= id

(>>=) :: Monad m => m a -> (a -> m b) -> m b
x >>= f = join (fmap f x)

can be less confusing, because join and (>>=) can be recovered from each other.
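To see the equivalence at work on a concrete value (my own sketch, using the Maybe monad), flattening a nested value via join and via (>>= id) gives the same result:

```haskell
import Control.Monad (join)

-- A doubly wrapped value to flatten.
nested :: Maybe (Maybe Int)
nested = Just (Just 42)

-- Flattening via join...
viaJoin :: Maybe Int
viaJoin = join nested        -- Just 42

-- ...and via (>>=) with id, as in the definition quoted above.
viaBind :: Maybe Int
viaBind = nested >>= id      -- Just 42

main :: IO ()
main = print (viaJoin == viaBind && viaJoin == Just 42)  -- True
```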

Dealing with Intractability and Other Fun Topics: Computers and Intractability: A Guide to the Theory of NP-Completeness: The Missing Title from My Advanced Algorithms Course

Back in college, the topic that I spent the bulk of my computer science studies preparing for was the design and analysis of algorithms. More specifically, it was actually my introductory course (Computer Science 365a: Design and Analysis of Algorithms) that required the most preparation: introduction to computer science, discrete mathematics, meta-logic, data structures and programming techniques, set theory, recursive function theory (also known as “computability theory”), and other topics. Included within “discrete mathematics” were elementary set theory, graph theory, propositional calculus, discrete probability theory, some elementary number theory, mathematical games, and other topics as well.

However, compared to the introductory algorithms course, the following algorithms course (Computer Science 465b: Advanced Algorithms) actually required less preparation, and was in fact much less work overall. There were only four problem sets (one of which, however, was so difficult that I stayed up all night thinking about its three problems and was unable to solve any of them), and most of the reading involved relatively short research monographs.

As a result, the advanced algorithms course stuck out much less in my memory, and until very recently, I was unable to remember the title of one famous book that I had known very well at the time, but which I did not use too often, yet still wanted to recommend as a classic to those who are interested in studying algorithms.

Recently, however, in reviewing Rice’s Theorem, I was finally able to remember the title: Computers and Intractability: A Guide to the Theory of NP-Completeness, by Michael Garey and David S. Johnson. The book is reportedly the first title ever written on the theory of NP-completeness and computational intractability, and was relatively easy to read for the subject material.

In the opening, the authors give a somewhat humorous story of a person who is in the predicament of having been asked to come up with an algorithm to solve a problem, but who is unable to solve it. What does he do to keep his job? Well, he explains to his boss, “I can’t find an efficient algorithm, but neither can all these famous people.” The book explains that this is the fundamental motivation for proving that a problem is NP-complete (see below for an explanation of this term): that is, that if a problem can be shown to belong to the class of NP-complete problems, then it is equivalent to an entire class of other problems for which no polynomial-time (i.e., efficient) solution has yet been discovered, and hence the problem is unlikely to be solved easily.

Informally, a problem is NP-complete if and only if it satisfies the following two criteria:

1) It belongs to NP (the class of decision problems that can be solved on a non-deterministic Turing machine in polynomial time), and

2) it is NP-hard (that is, it is at least as hard as the hardest problems in NP).

More informally, a problem is NP-complete if and only if:

1) It is possible to verify a yes-answer efficiently (given a suitable certificate), and

2) any problem in NP can be reduced to it via an efficient (i.e., polynomial-time) conversion of the inputs.

What is neat about the class of NP-complete (a.k.a. “NPC”) problems is that although they may all seem different, they are in fact equivalent in the sense that if any one of them can be shown to be solvable quickly (i.e., in polynomial time), then so can the rest.
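The “verify a yes-answer efficiently” half of the definition is easy to make concrete. Here is a small sketch of my own (not from Garey and Johnson) of a polynomial-time verifier for boolean satisfiability (SAT), the canonical NP-complete problem: checking a proposed truth assignment against a formula is fast, even though finding one is believed to be hard.

```haskell
-- A CNF formula is a list of clauses; each clause is a list of literals,
-- where a positive Int i means variable x_i and a negative Int means its
-- negation. These encodings are my own illustrative choices.
type Literal    = Int
type Clause     = [Literal]
type CNF        = [Clause]
type Assignment = [(Int, Bool)]

-- Check a proposed assignment against every clause. This runs in time
-- polynomial in the size of the formula; the hard part of SAT is finding
-- such an assignment, not checking it.
verify :: CNF -> Assignment -> Bool
verify cnf asg = all satisfied cnf
  where
    satisfied clause = any holds clause
    holds lit = case lookup (abs lit) asg of
                  Just v  -> if lit > 0 then v else not v
                  Nothing -> False

-- (x1 ∨ ¬x2) ∧ (x2 ∨ x3)
example :: CNF
example = [[1, -2], [2, 3]]

main :: IO ()
main = print (verify example [(1, True), (2, True), (3, False)])  -- True
```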

What this means for the working programmer is that if the boss gives the programmer an unreasonable problem to be solved quickly using a computer program, but he/she is able to show that the problem is NP-complete, then the programmer can just tell the boss, “Well, I wasn’t able to write a program to solve the problem quickly, but neither were a lot of other researchers who also tried to solve the same kind of problem, so no matter who you ask, you are very likely to get the same answer.”  In this case, the boss is unlikely to fire the programmer.

One useful aspect of this title is that it includes an appendix with a compendium of NP-complete problems.  By reducing any of these problems to the problem to be solved, one can show that the problem is NP-complete, and hence, unlikely to be solved quickly by any algorithm implemented on any computer program anytime soon.

Virtually Un-Squeaking: The Paradoxical Link Between Exploring Virtual Worlds and Studying Haskell and Scheme Instead of Squeak

For the past year and a half or so, I have, on and off, attempted to study Squeak in order to learn how to write in Open Cobalt, with the eventual goal of writing my own virtual world. However, every time I start to learn Squeak, I feel vaguely uncomfortable working in an object-oriented world, and soon return to the functional worlds of Haskell and Scheme.

At first, I thought that the reason had to do with my exposure to the functional programming paradigm, and even once made a paradigm shift. However, despite the paradigm shift, I still gradually returned to Haskell and Scheme.

Just now, after watching part of a talk by Rich Hickey, entitled “Clojure for Lisp Programmers Part 1,” I think that I may have discovered at least part of the real reason for this vague preference. During the talk (from 0:18:47 to 0:18:51 on the video), Hickey offhandedly mentioned, “I’m sort of a Common Lisp guy. I’m not a Scheme guy.”

This comment started me on a train of thought about what I felt about the differences in the styles of programming in Common Lisp vs. Scheme, and how these related to the differences between Clojure and Haskell, and eventually to the differences between Squeak and Haskell as well.

Granted, Common Lisp and Scheme have many similarities: They both use S-expressions in their syntax. They are both based on Alonzo Church’s lambda calculus. They are both ultimately descended from the Lisp programming language invented by John McCarthy in 1958 at MIT. They are both dynamically typed. In addition, they both use lexical scoping for variables.

However, Common Lisp and Scheme are significantly different dialects of Lisp. Firstly, Scheme is a Lisp-1 dialect, whereas Common Lisp is a Lisp-2 dialect. (The difference between the two is that Lisp-1 dialects share a single namespace for functions and values, whereas Lisp-2 dialects have separate namespaces for the two. For a technical explanation of this difference, see the paper “Technical Issues of Separation in Function Cells and Value Cells,” by Richard P. Gabriel and Kent M. Pitman, which appeared in the journal Lisp and Symbolic Computation, Volume 1, No. 1, June 1988, on pages 81-101.) Secondly, Scheme supports tail call optimization and first-class continuations and places a relatively heavy emphasis on recursion, whereas Common Lisp supports more special-purpose iterative looping constructs and has a larger set of standard libraries. Furthermore, Common Lisp’s ANSI standard is essentially frozen, whereas Scheme continues to evolve through its RnRS (“Revised^n Report on [the Algorithmic Language] Scheme”) standardization process.

In particular, the emphasis on special-purpose constructs and standard libraries has a tendency to give many books on Common Lisp a more practical flavor, whereas the emphasis on tail call optimization, continuations, and recursion tends to give many texts on Scheme a more academic or theoretical flavor.

One of my mathematics professors in college once said, “Mathematicians don’t care about the real world.” Aside from whether this is true or not, this statement was actually one of my main reasons for switching from disliking mathematics to enjoying it: I myself don’t really care about the real world. When in college, I enjoyed studying algorithms, programming language theory, haiku poetry, and epic poetry, but disliked economics and social sciences. In my spare time, I enjoyed virtual reality environments, fantasy, and science fiction, but disliked business and generally anything related to monotonous social interaction or money. In short, unless a topic was related to some kind of theory, fantasy, or hypothesis, I wasn’t very interested.

This goes back to Haskell vs. Clojure. Being more of a writer than a programmer, I tend to enjoy discussion about programming language theory. While I have participated on both the Clojure Google group and the Haskell-Cafe mailing list, I have noticed that the latter has a much higher proportion of discussion focusing on relatively theoretical topics, such as semantics and category theory. I think that this, rather than differences between the languages per se, is one of the main reasons that I subconsciously prefer discussing Haskell to discussing Clojure.

This, in turn, goes back to Haskell vs. Squeak. Paradoxically, although it is the combination of my interest in experiencing virtual reality environments and the role of Squeak as the basis of such virtual reality development tools as Croquet and Open Cobalt that first brought me to Squeak, it is precisely my interest in the theoretical and fictional which also turns me away from Squeak and toward Haskell. Most of the discussions on the squeak-dev mailing list tend to be of a practical nature, concerning specific issues with using Squeak. However, I tend to be turned off by relatively mundane discussions. I’d much rather discuss, say, the philosophical basis of Squeak in Richard Dawkins’ The Selfish Gene than, say, how to get Squeak to output sound. However, such discussions are relatively few and far between in Squeak discussion-land.

On the Haskell-Cafe, however, it is very common to read discussions about why, say, monads are like burritos. Such discussions remind me of finding the result of (cons 'peanut '(butter and jelly)) on page 7 of The Little Schemer. It is common to find discussions of food related to programming language constructs in documents on both Haskell and Scheme. (Virtual) food is fun. Programming by itself can become monotonous after a while, but programming about food helps keep the doldrums away.

Speaking about (virtual) food, time to get a Rolanberry Pie for my avatar. Now if only my avatar could then program in virtual Haskell or virtual Scheme about virtual monad-burritos and virtual '(peanut butter and jelly)….

ANN: Twitter Account

As of Sunday, April 18, 2010, those of you who are interested may now read Benjamin L. Russell (DekuDekuplex) on Twitter.  There, since each post is limited to 140 characters, I plan to jot down frequent notes on anything interesting I discover.  Feel free to write to me there as well!

While setting up my account there, since I wasn’t satisfied with any of the default themes, I searched and managed to find some exquisite fantasy-theme backgrounds.  For my current theme, I have chosen Blacklight Fantasy, which reminds me of fractals.  What do you think of it?