New Website!

We have a new website and a new logo for our community. Go Smalltalk!

Smalltalk was designed for Kids!

Yes! Alan Kay was trying to develop an environment to be used in children's education.

Did you know that Smalltalk was created in the '70s at Xerox?

The mouse-driven interface, "copy and paste", BitBlt, and other technologies first appeared in Smalltalk. Steve Jobs saw those ideas at Xerox, and they went on to influence the Macintosh and, later, Objective-C.

Mailing List in Spanish!

Please, go to http://groups.google.com/group/clubsmalltalk and join us!

October 26, 2011

Smalltalks 2011, your conference

Did you know that the Smalltalks Conference will be held again this year? (http://www.fast.org.ar/smalltalks2011)

Maybe you’re wondering why you should care about a conference on a language you almost certainly don’t use at work, and whose existence you may not even be aware of. But if you take five minutes to read this, I can tell you that you won’t regret it, and that afterwards you will probably be eager to come to the conference :-)

To begin with, this conference isn’t only about a programming language; it’s about a technology and a development culture that still has a wide influence on our profession. For example, last year Gilad Bracha came to the conference. Who is Gilad Bracha? Maybe the name rings a bell... well, that’s because he’s one of the people behind Dart, Google’s new language (http://www.dartlang.org/). And what does that have to do with Smalltalk? Precisely: Gilad Bracha was one of the creators of Strongtalk (http://www.strongtalk.org/), the fastest Smalltalk of its time, which used adaptive compilation, polymorphic inline caches (PICs), optional typing, etc. - all of which are being implemented now in Dart. This year one of his closest collaborators will visit us: Vassili Bykov, who implemented the UI of Newspeak, the latest language Bracha has been working on.

But Smalltalk is connected not only with what’s happening in Dart, but with Ruby as well... Have you heard about MagLev? (http://ruby.gemstone.com/) It’s a Ruby server for transactional, automatically persistent objects. Guess where that comes from... MagLev is an implementation of Ruby running on GemStone/S, a transactional and persistent object server for Smalltalk that is more than 25 years old and was bought by VMware because of its great potential as a transactional memory manager for Java. If you didn’t know, read here: http://www.springsource.com/products/data-management/gemfire65

And where does the conference come in here? Martin McClure himself, the person responsible for MagLev, is coming, along with GemStone architect Norman Green - the people in the know! Do you have any doubts about object databases? Now you know where to find the answers.

But maybe you’re not interested in any of this... perhaps how programming languages work or how they are implemented is not your scene; you just build web applications and only want the infrastructure to scale, to be quick at persisting information, etc. In that case we also have a place for you. Have you heard of GLASS? (http://seaside.gemstone.com/) It’s the implementation of Seaside, a dynamic, continuation-based framework for web applications (http://www.seaside.st/), running on GemStone! That is to say, you develop a web application just like a desktop application, and you get transactionality and persistence at the object level for free... and what’s better, without relational databases!!! Yes!!!! No more Hibernate, no more SQL, no more tables, only objects! It may sound crazy, you may think it doesn’t make sense... my advice is, don’t draw any conclusions until you come and hear Dale Henrichs, who is in charge of this product and will tell us all the details and explain how it is impacting web development.

Not convinced yet? OK, let me try just a bit more... Do you know Alan Kay? Turing Award winner, “father of the personal computer”, creator of Smalltalk? (http://en.wikipedia.org/wiki/Alan_Kay) No, he’s not coming - yet. But Ian Piumarta and Kim Rose are, two of his closest collaborators in the projects he’s currently working on at his foundation, which is dedicated to minimal programming languages such as OMeta and to learning environments like SqueakLand (http://www.vpri.org/index.html). Are you interested in the use of computers for teaching? You can ask Kim. Would you like to know how a good VM is implemented? Ian will be right there to tell you.

If you’re still reading and haven’t scrolled down to the bottom of the page, it means I haven’t convinced you yet... hmmm, let’s see what you say about this: MOOSE (http://www.moosetechnology.org) is a platform for analyzing your programs - no matter whether they are written in Java, C++, C# or Smalltalk, you can visualize your system’s design, not with those little UML diagrams but by means of graphics specially designed to let you spot at a glance bugs that may have crept in. Its developer, Tudor Girba, will be there to explain how it works, how it was developed and what you can do with it - because it’s free!

Maybe you’re already tired of reading. I don’t blame you, but don’t you blame me either! It’s a great conference! You just can’t miss it! And this is not all... if you want to find out more about the main Smalltalk development environments, both open source and commercial, you will have the chance to talk to Marcus Denker from Pharo (http://www.pharo-project.org/home) and John O’Keefe, architect of VA Smalltalk (http://www.instantiations.com/).

A little too much industry-oriented stuff? And what about research - is there still research being done on Smalltalk? Well, let me tell you that this will be the second year the conference has a track devoted entirely to research, with an enviable review committee and publication in journals. So if you’re doing research on objects and need to present your work at a widely recognized conference, Smalltalks is your place. And I wouldn’t like to forget the universities... Smalltalk is still the language used for teaching objects at almost every university, instead of a merely commercial language.

But I haven’t told you yet about the most important part of all this, besides all these people who will be visiting us and with whom we can share our experiences: the Argentine Smalltalk community, one of the most important in the world for this technology. This community has put its best efforts into organizing these conferences over the last five years, and in the last three years some of its members have won 1st and 3rd place at the ESUG Technology Awards, an international award for the best developments in Smalltalk! And the best part is that the winners come from different universities - the UBA, the UAI and the UTN!

The community does not stop there. Did you know that there is an Argentine Smalltalk - a Smalltalk developed by an Argentinian and used all over the world? It’s called Cuis and it was developed by Juan Vuletich (http://www.jvuletich.org/Cuis/Index.html), who is also working on the Morphic 3.0 project and who worked together with Alan Kay on Squeak. Or did you know that the most widely used open source layer for communicating with relational databases from Pharo or Squeak was also developed by an Argentinian? Or that Fuel, the open source object serialization framework, was also created by an Argentinian? Are you familiar with these names - Mariano Martinez Peck, Guillermo Polito, Martín Dias, Esteban Lorenzano, and others? They are also part of our community and a constant reminder of the excellent technical quality we have in our country. Another Argentinian is the architect of the fastest Smalltalk VM in existence, that of VisualWorks (http://www.cincomsmalltalk.com/main/products/visualworks/), and you can ask him how he managed to speed up the GC by about 70% during the last year. You will also have the chance to listen to another Argentinian who has been a Smalltalker for more than 20 years... Can you imagine what your productivity would be if you had been working for 20+ years with the same language? A language that keeps being productive for our profession? These people are part of this great community and will also be at this wonderful meeting... which is important not only because of the people who are coming, but also because of those who are already here!

I hope I’ve been able to convince you. I hope you’ve realized that this is not a conference about a programming language, but about a community of developers who want to share with you all they know, and also learn from you. If you want to help this community to keep growing, if you want this to be not just a conference of developers but also for developers, sign up here: http://www.fast.org.ar/smalltalks2011

It’s free, and I can assure you that you won’t regret it. You can see the list of talks at: http://www.fast.org.ar/smalltalks2011/talks

This year it takes place on November 3-5 at the University of Quilmes. And it won’t be restricted to the world of objects: this year we will also have a talk by Fidel (Pablo E. Martínez López), one of the leading Argentinians in the field of functional programming, a community that shares our conviction that we are all, after all, programmers! See the response it is already getting: http://vimeo.com/30529851

We’ll be waiting for you!

FAST
http://www.fast.org.ar

October 16, 2011

Mariano Martínez Peck, winner of the 2011 edition of the ESUG Innovation Technology Awards.

Mariano Martinez Peck is an Argentinian PhD student at the Ecole des Mines, in association with RMoD (INRIA). Fuel was the winner of the 2011 edition of the ESUG Innovation Technology Awards, and Mariano is one of its developers. Here is a short interview about Fuel and his current work.

CS: What was the motivation to create a new serializer?
MMP: Well, this is an excellent question, since most people's reaction when we announced Fuel was: "Yet another object serializer?". The truth is that I am doing a PhD with Stéphane Ducasse and others and, from the very beginning of my PhD, it was clear that for my solution I needed a well-designed, reliable, flexible, uniform and very fast serializer. I needed a serializer that I could understand, change, and adapt to my needs and, mostly, I needed (because of my PhD domain) a serializer able to serialize all types of objects, including classes, compiled methods, closures, contexts, traits, etc. At the same time, it was extremely important to make it fast. The main goal of the serializer had to be performance, not, for example, portability, as happens with other serializers. I checked all the serializers available for Pharo (since my PhD prototype is based on Pharo) and none of them met my expectations.

Stef also wanted a fast binary serializer to provide a future infrastructure for Monticello. I didn't have time to do my PhD and build the serializer at the same time, so he decided to help me by asking Tristan Bourgois to build Fuel from scratch. Just a couple of weeks later, Martin Dias, from the Universidad de Buenos Aires in Argentina, came to Lille for a four-month internship. The team decided that Martin could also work on Fuel and use it for his thesis. A few months later, when I was starting to need the serializer for my PhD, I jumped directly into the team and I have been helping them ever since. Tristan is no longer working on Fuel, so Martin and I are the current developers.

Once Martin finished his internship and went back to Argentina, ESUG decided to sponsor him through the ESUG SummerTalk project. He is the student in that project and I am currently taking the role of "mentor". So we should thank ESUG for the sponsorship.

CS: Fuel is clean, platform agnostic and incredibly fast in some scenarios. What were the most difficult issues to resolve in the framework?
MMP: The key characteristic of Fuel is its use of a specific type of pickling algorithm. The only Smalltalk serializer we are aware of that uses this technique is VisualWorks' Parcels. However, Parcels is better described as a serializer for managing code than as a general-purpose object graph serializer. Fuel is not focused on code loading and is highly customizable to cope with different objects. Fuel is the infrastructure on top of which you can then build other tools.

So the pickling algorithm itself was not complicated, since it is well known and there are papers and references about it. The main challenge was how to build a truly object-oriented implementation of that algorithm: how to find the correct abstractions and hierarchies, where to put each responsibility, and all the related questions that make for a better design. It is also difficult to maintain a good design without losing performance. I think in Fuel we have a very clean, object-oriented solution with good performance as well (it is really important to have a large set of benchmarks, as we do).

Another complex topic was being able to serialize all types of objects, because you have to know which objects are "special" and how they are represented internally. How to encode and decode the objects in a stream was difficult for us as well. Neither Martin nor I is an expert in streams or in optimizing code, so we have learned a lot about both in the process.

CS: What is the pickle algorithm?
MMP: I think that, sometimes, there is some confusion regarding this term. "Pickling" and "unpickling" are synonyms for "serializing" and "deserializing". In Fuel, we use the terms "serialize" and "materialize" (deserialize). In addition, we use "pickle" for the algorithm, or format, with which we encode and decode the objects in the stream.

It is a little complicated to explain Fuel's pickle format in a couple of lines, but I will do my best.

Traditional pickling formats take the object graph to serialize and, while traversing it, serialize each object plus an identifier of its type into a sequence of bytes (note that the type is usually its class, but not necessarily). Unpickling then starts reading objects from the stream. For each object it reads, it has to read its type as well, and determine and interpret how to materialize that encoded object. The materializer needs to determine the type, look up what to do with it, and perform the materialization. So, in the common case of a regular object, it will read the type, get the corresponding class from the system, and send #basicNew to get a new instance. Then, of course, it will fill in the instance variables. This unpickling is terribly slow because it means a lot of work for every single object. In other words, the materialization is done recursively.
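A minimal sketch of this traditional recursive scheme (in Python for illustration, since Fuel itself is Smalltalk; the tags and functions here are invented, not Fuel's actual format):

```python
# Illustrative sketch of a traditional recursive pickler: each object is
# written as a type tag followed by its payload, and materialization must
# interpret each tag before it can build the corresponding object.

def serialize(obj, out):
    if isinstance(obj, int):
        out.append(("int", obj))          # type tag + payload
    elif isinstance(obj, list):
        out.append(("list", len(obj)))    # tag, then children follow recursively
        for child in obj:
            serialize(child, out)

def materialize(stream):
    tag, payload = stream.pop(0)
    if tag == "int":
        return payload
    if tag == "list":
        # Recursive: every child needs its own tag lookup before instantiation.
        return [materialize(stream) for _ in range(payload)]

out = []
serialize([1, [2, 3]], out)
print(materialize(out))   # [1, [2, 3]]
```

The per-object tag dispatch during materialization is exactly the overhead that Fuel's iterative format, described next, avoids.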

Fuel's pickle format is completely different. There is a first traversal of the graph (we call this phase "analysis") in which each object is associated with a specific type, called a "cluster" in Fuel. As a result of the analysis phase, we have a list of clusters, and each cluster contains the list of objects that belong to it. After that, we proceed to serialize. However, there is another key aspect: the serialization is split into the serialization of instances first and then of references. This means that first we serialize only the instances (the nodes of the object graph), and then all the references. This is different from a regular serializer, which encodes both things together. Notice that if an object consists only of references (an object that is not variable), then nothing will be written in the "instances part" and everything will be in the "references part". In the stream, we encode how many clusters there are and how many instances each cluster has.

During materialization, we first materialize the instances. Since all the objects in a cluster have the same type, we write/read that information in the stream only once. The materialization can be done in bulk, which means we can just iterate and instantiate the objects. Once we have finished with the "instances part", we continue with the "references part". Here, we iterate and set the references for each of the materialized objects. In other words, the materialization is done iteratively.

So... the conclusion is that Fuel's materialization is so fast because it can be done iteratively. To do that, we need to serialize the instances separately from their references. This also means that we are a little slower during serialization, because we need to map objects to clusters. Nonetheless, all benchmarks show that Fuel is the fastest serializer in materialization and still one of the fastest in serialization.

CS: How do you resolve the references in a serialization?
MMP: In Fuel, we encode a reference as an integer that denotes the position of the referenced object inside the stream. Then, during materialization, we can read that integer and know exactly at which position the object we are looking for is located.
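The two-phase scheme described above - analysis into clusters, instances before references, references as integer positions - can be sketched as follows (again in Python for illustration; Fuel's real format is binary, and `Pair`, `analyze`, etc. are invented names):

```python
# Illustrative two-phase pickle: instances are written first, then references
# as integer positions, so materialization can instantiate in bulk and wire
# up references iteratively instead of recursively.

class Pair:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def analyze(root):
    """First traversal: assign each object a position in the stream."""
    index, order = {}, []
    def visit(o):
        if id(o) in index:
            return
        index[id(o)] = len(order)
        order.append(o)
        if isinstance(o, Pair):
            visit(o.left)
            visit(o.right)
    visit(root)
    return index, order

def serialize(root):
    index, order = analyze(root)
    # Instances part: a real format would write each cluster's type once;
    # here we store a (type, payload) tuple per object for readability.
    instances = [("int", o) if isinstance(o, int) else ("Pair", None)
                 for o in order]
    # References part: for each Pair, the stream positions of its fields.
    references = [(index[id(o.left)], index[id(o.right)])
                  for o in order if isinstance(o, Pair)]
    return instances, references

def materialize(instances, references):
    # Bulk instantiation: no nested interpretation, just iterate.
    objs = [payload if tag == "int" else Pair() for tag, payload in instances]
    pairs = [o for o in objs if isinstance(o, Pair)]
    for pair, (l, r) in zip(pairs, references):
        pair.left, pair.right = objs[l], objs[r]
    return objs[0]

graph = Pair(1, Pair(2, 3))
copy = materialize(*serialize(graph))
print(copy.left, copy.right.left, copy.right.right)   # 1 2 3
```

Note how `materialize` never recurses: one loop allocates, a second loop resolves the integer references into the already-allocated objects.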

CS: What happens with the identity of an object? In other words, when an object is materialized, is it the same object or a clone of the original?
MMP: It depends on the object you need to serialize. For regular objects, yes, the identity changes and the materialized object will be like a clone of the original. In fact, some consider serialization a very deep copy.
Now, Fuel supports what we call "global objects". Imagine that you serialize a graph that contains a reference to Transcript. You don't want to serialize the Transcript instance and then, during materialization, get yet another Transcript instance in your system. You want to use the same one.

Global objects are not written into the stream. Instead, the serializer stores the minimal information needed to get the reference back at materialization time. In this example, we just store the global's name. The same happens with Smalltalk class pools and with classes. This means that, at materialization time, all the classes and globals have to be present in the image.
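The name-only treatment of globals can be sketched like this (Python again, purely illustrative; `GLOBALS` stands in for the Smalltalk system dictionary, and `sys.stdout` plays the role of Transcript):

```python
# Sketch of the "global objects" idea: instead of pickling a well-known
# object's state, store only its name and look it up again on load.

import sys

GLOBALS = {"Transcript": sys.stdout}   # stand-in for the image's globals

def serialize_global(obj):
    for name, value in GLOBALS.items():
        if value is obj:
            return ("global", name)    # name only, no state
    raise ValueError("not a registered global")

def materialize_global(record):
    tag, name = record
    # The global must exist in the running image, or materialization fails.
    return GLOBALS[name]

record = serialize_global(sys.stdout)
assert materialize_global(record) is sys.stdout   # same object, not a clone
```

Because only the name crosses the stream, identity is preserved: the materialized graph points at the very same global that already lives in the image.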

That is normally the expected scenario. However, Fuel does support real serialization of classes. This means that Fuel can take a class and correctly serialize it together with its method dictionary, compiled methods, superclass, subclasses, etc. Of course, this is not the default behavior (the default is to treat classes as globals), but the API lets you do it. In fact, this is needed for a small proof of concept we developed to manage Monticello packages with Fuel.

If I said ... Would you answer
Sports?
Soccer
Food?
Asado
Computer brand?
self isPayByEmployee
       ifTrue: [ Mac ]
       ifFalse: [ computers anyOne ]
Operating system?
self amIInMac
       ifTrue: [ MacOS ]
       ifFalse: [ Ubuntu ]
Mobile Phone?
Android
City?
Miami
Book?
Lord of the Rings
Film?
Back to the Future
TV Series ?
The Big Bang Theory
Magazine?
None these days... only papers and blogs.
Car?
My mother-in-law’s Ford Focus. I have a special relationship with that car!
Open Source?
Sure! As much as I can