Organization-based Programming

Proposal: *

Organization-based programming = organizing agents, objects, and the messages exchanged between them to work toward a goal or a process (i.e. a “desired outcome”).

Inspiration:

Lemmings, DNA, construction sites, Impossible Creatures.

Definitions:

  • Agents are actors that have less predictable behavior.
  • Objects are actors that have predictable behavior.
  • Both agents and objects are actors.
  • Actors can send and receive messages.
  • Messages contain content and language. Message content can be actors themselves (meaning code, not just data).
  • Note: there is no notion of “interpreters”. An interpreter is just a strictly behaved object that, given a message containing source code, executes parts of that message on behalf of the message author, or of itself (depending on privileges etc.).

Actor attributes:

  • Smartness: Smart (less predictable behavior) vs. Dumb (predictable behavior). Objects are strictly dumb actors.
  • Activeness: Active (continuously running) vs. Passive (runs when called by a thread scheduler).
  • Initiativeness: Initiative (does operations even without prior command) vs. Reactive (requires command).
  • Collaborativeness: Social (works with others) vs. Individual (does things internally).
  • Memory: Mindful (stores experience) vs. Forgetful (always starts with a blank state).

Example

Agent A is a view mediator. Given message content that has data and a view destination, the agent needs to modify the view so it displays the data. For now, agent A only knows how to handle the DataGrid destination.

Then comes a message that specifies a ListBox destination. Agent A is “confused”, so it needs to learn how to handle a ListBox destination, or “throw an Exception” (which means sending a message back to the caller, if known, or simply “yelling” or “complaining”), which may or may not force it to learn ListBox handling anyway.

Agent A starts its lifetime with the job of handling messages of one type (an array of a specific type), passing them to a specific view component. Over time, it evolves into a smart agent, a “mediator” in a higher sense that can handle more varying kinds of data and more varying kinds of view destinations. Basically, though, agent A is still a mediator (that is its “DNA”).
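A minimal Python sketch of agent A, assuming hypothetical names (Message, MediatorAgent, the destination strings) that are not part of the proposal itself:

```python
class Message:
    def __init__(self, content, language="python-dict"):
        self.content = content      # content may itself be actors (code)
        self.language = language

class MediatorAgent:
    """Agent A: a view mediator that starts off only knowing DataGrid."""

    def __init__(self):
        # "memory": a collection of experiences, keyed by destination kind
        self.experiences = {"DataGrid": self.handle_datagrid}

    def handle_datagrid(self, data):
        return f"DataGrid shows {len(data)} rows"

    def receive(self, message):
        destination = message.content["destination"]
        handler = self.experiences.get(destination)
        if handler is None:
            # "confused": complain back to the caller (a crude "exception")
            return f"I don't know how to handle {destination}"
        return handler(message.content["data"])

a = MediatorAgent()
print(a.receive(Message({"destination": "DataGrid", "data": [1, 2, 3]})))
print(a.receive(Message({"destination": "ListBox", "data": [1, 2]})))
```

A smarter agent A would, instead of only complaining, add a new handler to self.experiences; that would be the “learning” step.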

Memory vs. State

In this scheme of organization, the traditional object-oriented concept of “state” is not recognized.

What’s recognized is memory, which is an abstract concept of a collection of experiences. An experience is defined as a way to handle a particular message, and its outcome.

* This proposal may be considered hypothetical, fictional, ridiculous, or simply imaginary.

PS:

Possible additional attributes from MyPersonality.info Personality Tests:
There are 16 distinct personality types, each belonging to one of four temperaments as organized by David Keirsey.

Protectors (SJ)

Creators (SP)

Intellectuals (NT)

Visionaries (NF)

Like this article? Please share this article with others:

Semantic Thoughts on Model, View, Controller (MVC)

Semantic Web is a technology family (or, a disparate set of sometimes non-related technologies) that allows computing applications to understand each other. It has the potential to transform the way we do computing in the very near future… and the power to make software developers’ lives easier.

I’ve been having lots of random thoughts about how to connect independent apps to work, talk, and walk together. Several components need to be identified. I call them Actors, Messages, and Translators.

Actors are objects that can perform actions; in the traditional OO sense, they have state and behavior. In Java EE or SOA they might be called stateless or stateful session beans. We can consider a web browser and a web server to both be actors. The model, view, and controller in an MVC app are all actors.

To communicate, an actor sends a Message to another actor. Traditional messages don’t have any behavior, and I’d rather keep it that way (at least for the moment). Messages are basically data, but to be useful as “information”, a message has to have content and a language. Examples of “languages” in this sense are Atom and RDF.

A message can also have a header, an envelope, metadata, or a schema. However, I don’t consider those “language”, because languages are identifiable and can be inferred from message content (with some effort), while metadata usually cannot. For example, given a document, a script can be written to determine whether it’s in Atom format. However, given a message body, it’d be tough to determine its title or creation date if it’s not inscribed in the message itself (somewhere).

Metadata, headers, and schemas also have languages. Therefore, actors need to understand these languages (as well) in order to process a document/message. In addition, languages themselves need to be known, not necessarily understood. A bit like saying, “I know you’re talking in Turkish, but I can’t understand what you’re saying. But at least I know you use Turkish.” Getting an actor to know what language a message is made of is a useful thing, as we’ll see later. A kind of universal language identifier is MIME types. If the language can be inscribed somewhere or inferred from the message, the actor will at least know the next step in interpreting the message; if not, then it’d be much tougher.
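As a sketch of inferring a message’s “language” from its content (using Atom as the example format; the function name and return values are made up for illustration):

```python
# Inferring a message's "language" (MIME-type-like identifier) from
# its content. Metadata such as a title or creation date cannot be
# inferred this way if it isn't inscribed in the message.
import xml.etree.ElementTree as ET

def identify_language(body):
    """Return a MIME-type-like language identifier, or None if unknown."""
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return None
    if root.tag == "{http://www.w3.org/2005/Atom}feed":
        return "application/atom+xml"
    if root.tag.endswith("RDF"):
        return "application/rdf+xml"
    return "application/xml"

atom = '<feed xmlns="http://www.w3.org/2005/Atom"><title>t</title></feed>'
print(identify_language(atom))        # the language is inferable
print(identify_language("not xml"))   # unknown: no next step available
```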

Someone that can make two actors communicate in different languages is needed: a Translator. Translators take a message from one actor and do whatever it takes to get the other party to understand it. Examples of translators in the computing world are Mule, Apache Synapse, and OpenESB.

Actors need to agree upon the use of translators: at least one of the parties, and preferably both. Security is paramount when using translators. Messages may or may not be passed to translators in full. Partial translation should be possible, i.e. translating the parts of a message that the actor doesn’t understand, but not the entire message.
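A partial-translation translator might look like this in Python; all names and the dictionary-based design are hypothetical illustrations, not a real translator API:

```python
# A Translator that two actors agree on, doing partial translation:
# only the fields the receiving actor does not understand are
# translated; everything else passes through untouched.

class Translator:
    def __init__(self, dictionary):
        self.dictionary = dictionary   # e.g. foreign -> known terms

    def translate_partial(self, message, understood):
        """Translate only the fields the receiver can't read."""
        out = {}
        for key, value in message.items():
            if key in understood:
                out[key] = value                      # pass through as-is
            else:
                out[key] = self.dictionary.get(value, value)
        return out

t = Translator({"bonjour": "hello"})
msg = {"greeting": "bonjour", "amount": 42}
# The receiver already understands "amount"; only "greeting" is translated.
print(t.translate_partial(msg, understood={"amount"}))
```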

A translator may also gain non-repudiation privileges, and may perform a limited set of actions on behalf of the requesting actor. Think of it as your lawyer.

Overall, some kind of cryptosystem is needed between the actors and translators. Otherwise, this kind of distributed knowledge architecture has lots of security holes of varying severities.

So, where do we start?


How to Print or Make PDF Files in Ubuntu/Linux

No paper needed if you have PDF!

Ever wonder how to create PDF files from your documents or the web pages you visit?

It’s very easy if you use Ubuntu!

Considering the success of my previous post about wireless-ing your ADSL internet connection, I guess it’s good to show you how to make PDF files easily with Ubuntu.

In short, go to your Ubuntu terminal then execute:

sudo aptitude install cups-pdf
sudo aa-complain cupsd

Then you can go to the System menu, Administration, Printing, add a New Printer, and pick the PDF driver. And you’re set!

Linux Printing PDF

You can print a test page or anything using your newly installed PDF printer. Your PDF files will be saved in the PDF folder inside your home folder (so it will be named with something like /home/salsabeela/PDF/somefile.pdf).

The sudo aa-complain cupsd command is very important: it works around AppArmor restrictions on cupsd / cupsys (the printer server application in Linux). I recently stumbled across this problem myself, because my Ubuntu Gutsy laptop could create PDF files, but my Ubuntu desktop computer couldn’t. Example error messages that you may get (see your /var/log/cups/error_log and/or /var/log/cups/cups-pdf_log):

E [28/Jan/2008:22:17:32 +0700] cupsdAuthorize: Local authentication certificate not found!
Mon Jan 28 20:56:17 2008 [ERROR] failed to create directory (/home/ceefour/PDF)
Mon Jan 28 20:56:17 2008 [ERROR] failed to create user output directory (/home/ceefour/PDF)
Mon Jan 28 20:58:21 2008 [ERROR] failed to set file mode for PDF file (non fatal) (/home/ceefour/PDF/PPR_Test_Page.pdf)

Weird, but the quick solution is what I’ve described above.

PS: …which makes me wanna create a socially networked HowTo site. 😉


Intuitively Probabilistic Programmer [wannabe]

thinking and drooling

You know what, I get the feeling that I’m somehow “destined” to be a “probabilistic guy”.

(it has a spiritual touch)

A few minutes ago I was thinking that “IT” is simply about reducing ambiguity, which is basically increasing specificity. Problem is, the world is inherently uncertain.
And IT usually doesn’t cope well with things that change a lot (hence the need for BPMs and such). IT is very good for things that are strictly in order, certain, and known upfront.

The more we get to details, the more the uncertainty (and hence the ‘probability factor’) increases.

This is how the real world seems to be modeled, according to some scientists. On a macro level, the universe is calm and orderly (i.e. planets and such), and we have Newton’s laws. On a very micro level, though, beyond atoms, things get very erratic and we have quantum theory. Which is simply the technical term for ‘probabilistic uncertainty of matter or whatever’.

Then I was thinking, why am I so attached to this ‘probability’ thingy?

The IT world currently doesn’t really have a “probabilistic” programming model. OK, we have imperative, and it’d be a full chore to write probabilistic functions using if-thens and loops and such. We have functional programming, which is rarer and a bit more in line with ‘probabilistic’, but still, functional programming expects a well-defined function to transform input to output.

What I think is that there should be a probabilistic programming model, which should perform well in some areas of real-world problems. I’m calling it… “Intuitive programming”. (“Intentional programming” is a term already snatched, and it’s different anyway.)

What tools do we have for probabilistic programming? I’m not sure; I’ve never been familiar with them.

My final bachelor paper was about Bayes’ theorem. See the match? I certainly didn’t “specify” Bayes as my goal when I was in college. It was just something that happened to be one of my only options when I needed to choose a final paper topic. (As background, I never thought of a probabilistic/statistical theme either; I was just interested in folksonomy and del.icio.us at that time… I wonder what led me to Bayes.) The result was very good for me, but I know, behind the scenes, that my paper wasn’t all that good. It’s lousy…

It’s probabilistic! “There’s a 60% chance that my paper was ‘good enough’ to deserve an A… and it did!” 😛

Too many choices. Too many probabilities. Has got to be intuitive.

It doesn’t mean the paper is 60% good. It just means (assumptively) that even if the paper was 90% bad, the probabilistic chance was good enough. Thanks, Bayes! 😉
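That joke is just Bayes’ theorem in action. A minimal Python sketch; the 10%/95%/5% numbers below are made up to mirror the post, not from the actual paper:

```python
# Bayes' theorem: P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))

def bayes(prior, likelihood, likelihood_given_not):
    """Posterior probability of hypothesis H given evidence E."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Even if only 10% of papers are "good" (i.e. the paper is "90% bad"),
# strong enough evidence can still push the posterior past 60%.
posterior = bayes(prior=0.10, likelihood=0.95, likelihood_given_not=0.05)
print(round(posterior, 2))  # 0.68
```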

Weird…


“Selling” for 0% Profit!


Today I just found one way (though not such a “good idea”) to get a 45-day loan with 0% interest!

I got approximately 800,000 rupiahs today, in cash, in less than an hour, which I can return around the end of January. It was very easy.

I was shopping with my friends (actually my Entrepreneur University Kediri classmates) at the Gramedia book store. I wasn’t really sure if I wanted to buy a book, but seeing my friends pick up a few books, I decided to buy one and ask them, “Hey, why don’t you help me? What if you pay me, and I’ll pay for all of our books using my BCA credit card?” In short, he said alright. And I started “marketing” my scheme to the other friends. So they joined.

I asked Gramedia if they had a discount. The clerk said I’d get a 10% discount if my order was 1.5 million rupiahs. So I told my friends. Unfortunately, some of the friends had already left (they said, “Hendy, why didn’t you tell us earlier!!! 😛”), and we didn’t get the 10% discount. 🙁

Anyway, the total purchase was 861,300 rupiahs. I only purchased one book worth 35,000 rupiahs (Robert T. Kiyosaki Advisors’ Building A Business Team That Wins) and my friends each bought an average of 3 books (yeah, I’m sooo a cheater). 😛

Now I’ve got 800,000 rupiahs in cash that I can use for whatever, and that I’ll pay on my next credit card billing statement. It’s much better than taking a cash advance on my credit card (with a 3%–7% surcharge). Without any monthly interest at all 🙂

Soo happy 🙂

P.S.: Thanks to all the Entrepreneur University friends who joined my scheme: Ali, Udin, Susi, Lestari, Harli, Eko, and Ulfa!


OCaml: The Fastest Powerful Programming Language Ever?

OCaml seems to be a (yet another) very interesting programming tool.

Objective Caml (OCaml) is the main implementation of Caml (Categorical Abstract Machine Language), which is based on ML. The Meta-Language (ML) was originally developed at Edinburgh University in the 1970’s as a language designed to efficiently represent other languages. The language was pioneered by Robin Milner for the Logic of Computable Functions (LCF) theorem prover. The original ML, and its derivatives, were designed to stretch theoretical computer science to the limit, yielding remarkably robust and concise programming languages which can also be very efficient.

There is an interpreter which runs OCaml code in a virtual machine (VM) and two compilers, one which compiles OCaml to a machine independent byte-code which can then be executed by a byte-code interpreter and another which compiles OCaml directly to native code. The native-code compiler is already capable of producing code for Alpha, Sparc, x86, MIPS, HPPA, PowerPC, ARM, ia64 and x86-64 CPUs and the associated run-time environment has been ported to the Linux, Windows, MacOS X, BSD, Solaris, HPUX, IRIX and Tru64 operating systems.

Check out its massive features!

OCaml with Ubuntu Gutsy and Compiz Fusion
OCaml interpreter session on my computer.

Safety
OCaml programs are thoroughly checked at compile-time such that they are proven to be entirely safe to run, e.g. a compiled OCaml program cannot segfault.
Functional
Functions may be nested, passed as arguments to other functions and stored in data structures as values.
Strongly typed
The types of all values are checked during compilation to ensure that they are well defined and validly used.
Statically typed
Any typing errors in a program are picked up at compile-time by the compiler, instead of at run-time as in many other languages.
Type inference
The types of values are automatically inferred during compilation from the context in which they occur. Therefore, the types of variables and functions in OCaml code do not need to be specified explicitly, dramatically reducing source code size.
Polymorphism
In cases where any of several different types may be valid, any such type can be used. This greatly simplifies the writing of generic, reusable code.
Pattern matching
Values, particularly the contents of data structures, can be matched against arbitrarily-complicated patterns in order to determine the appropriate action.
Modules
Programs can be structured by grouping their data structures and related functions into modules.
Objects
Data structures and related functions can also be grouped into objects (object-oriented programming).
Separate compilation
Source files can be compiled separately into object files which are then linked together to form an executable. When linking, object files are automatically type checked and optimized before the final executable is created.

Time for some sample code, to calculate f(x) = x³ − x − 1:

# let f x = x *. x *. x -. x -. 1.;;
val f : float -> float = <fun>

According to Kevin Murphy, “… benchmarks … suggests the Ocaml compiler generates the second fastest code of any of the currently available compilers (gcc and the Intel C compilers being first). Given that Ocaml is also a beautiful language to program in, this is pretty compelling.”

Sources:


The Oz Multiparadigm Concurrent Programming Language

I’m not sure about you, but to me Oz looks like a cool programming language to learn… and use:

Oz is a multiparadigm programming language, developed in the Programming Systems Lab at Saarland University.

Oz contains most of the concepts of the major programming paradigms, including logic, functional (both lazy and eager), imperative, object-oriented, constraint, distributed, and concurrent programming. Oz has both a simple formal semantics (see chapter 13 of the book mentioned below) and an efficient implementation. Oz is a concurrency-oriented language, as the term was introduced by Joe Armstrong, the main designer of the Erlang language. A concurrency-oriented language makes concurrency both easy to use and efficient.

In addition to multi-paradigm programming, the major strengths of Oz are in constraint programming and distributed programming. Due to its factored design, Oz is able to successfully implement a network-transparent distributed programming model. This model makes it easy to program open, fault-tolerant applications within the language. For constraint programming, Oz introduces the idea of “computation spaces”; these allow user-defined search and distribution strategies orthogonal to the constraint domain.

See it in action on my computer:

Oz Mozart in action

Far from bad, eh?

The language is pretty nice and clean, yet has advanced built-in features like concurrency… whoa…

thread
   Z = X+Y     % will wait until both X and Y are bound to a value.
   {Browse Z}  % shows the value of Z.
end
thread X = 40 end
thread Y = 2 end

The primary tool for developing Oz applications is the Mozart Programming System.

So, now, anything interesting? 😉


Semantic Interface Driven Architecture and Continuous Change Driven Development

The time has come for yet another wishful thinking. With the rise of Service Oriented Architecture (SOA) and Event Driven Architecture (EDA), and Test Driven Development (TDD) extended with Behavior Driven Development (BDD), and a bunch of other buzzwords… let me introduce something else for the enterprise world:

Semantic Interface Driven Architecture (SIDA)

In short, it’s a Model Driven Architecture (MDA), sprinkled with interfaces to reduce coupling of inter-model transformations, and semantic inferences in the spirit of topic maps and RDF+OWL, implemented on top of SOA and EDA.

MDA allows different services to communicate with each other by transforming models. The interfaces provide agreed-upon specifications with common semantics. The semantics themselves are inferable and navigable. Thus, it is possible to interrelate models even when they are in entirely different layers and/or (heterogeneous/external) systems.

Continuous Change Driven Development (CCDD)

In short, it’s a development approach where the requirements are constantly changing. Constantly, that is, as in “real-time”, in order of milliseconds. One millisecond you need to have this table, the next you have to add a column, the next you have to drop a whole table, and in the next you want a whole form, relationships…

Requirements are not specified upfront, but simply as a “starting point”. Much like the way (probably) the universe started during the Big Bang. Everything else is evolutionary, and can be changed in real time by the individual users of the application. It might also be named Real-time Evolution Driven Development (REDD), which is probably more buzzy.

Some of the general traits of this approach are:

  • extensive use of ultra meta-programming
  • taken-for-granted interoperability with other SIDA systems
  • fuzzy specifications/requirements (i.e. “want” instead of “what/how”)
  • generatively programmable systems
  • decentralized source code management (i.e. version control) is taken for granted

What?!?!

Let me know of your comments. If you are interested in doing research together, by all means please do. I’m serious.

Resources


Erlang: The Concurrent Programming Language

Thank you Orbitz for posting [Erlang vs.] Java and Threads (Jetty):

The basic idea is, instead of using 1 thread per connection, since connections can last awhile, they use 1 thread per request that a connection has. The hope being, a connection will idle most of the time and only send requests once in awhile. The problem that they ran into is, a piece of software is using a request timeout to poll for data. So requests are now sticking around for a long time, so they have all these active threads that they don’t want. So to deal with this, they use a concept of continuations so the thread can die but the request still hang around, and then once it’s ready to be processed a thread is created again and the request is handled. So having all these requests hanging around that aren’t doing anything is no longer a problem.

Well, this begs the question: why are you using a dynamic number of threads in the first place if you are going to have to limit how many you can even make? If the problem, in the first place, is that they have too many threads running, then their solution works only for idle threads, doesn’t it? Being forced to push some of the requests to a continuation means they have applied some artificial limit to the number of threads which can be run. What happens, then, when the number of valid active requests exceeds this limit? What then? Push active requests to a continuation and get to them when you have time? Simply don’t let the new requests get handled? If they want to use threads to solve their problem, then putting a limit on them seems to make the choice of threads not a good one. To poorly paraphrase Joe Armstrong, are they also going to put a limit on the number of objects they can use? If threads are integral to solving your problem, then it seems as though you are limiting how well you can solve the problem.

This also got me thinking about other issues involving threading in non-concurrency-orientated languages. Using a COL (Concurrent Orientated Language) all the time would be nice (and I hope that is what the future holds for us). But today, I don’t think it is always practical. We can’t use Erlang or Mozart or Concurrent ML for every problem due to various limiting factors. But by the same token, using threads in a non-COL sometimes makes the solution to a problem a bit easier to work with. At the very least, making use of multiple processors sounds like a decent argument. But writing code in, say, Java, as if it was Erlang does not work out. I think the best one can hope to do is a static number of threads. Spawning and destroying threads dynamically in a non-COL can be fairly expensive in the long run, and you have to avoid situations where you start up too many threads. I think having a static number of threads in a pool, or with each doing a specific task, is somewhat the “best of both worlds”. You get your concurrency and you, hopefully, avoid situations like the one Jetty is running into. As far as communication between the threads is concerned, I think message passing is the best one can hope for. The main reason I think one should use message passing in these non-COLs is that it forces all of the synchronization to happen in one localized place. You can, hopefully, avoid deadlocks this way. And if there is an error in your synchronization, you can fix it in one spot and it is fixed everywhere. As opposed to having things synchronized all over the code, god knows where you may have made an error.
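The “static pool + message passing” idea from the quote can be sketched in a non-COL like Python; the queue-based design and names here are just one illustrative way to do it, not what Jetty does:

```python
# A fixed number of worker threads; all synchronization is confined
# to two queues, so there is one localized place to get it right.
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        msg = tasks.get()          # the only synchronization point
        if msg is None:            # poison pill: time to shut down
            break
        results.put(msg * msg)     # "handle" the message

# Static number of threads, decided up front, not per-connection.
pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()

for n in range(10):                # send work as messages
    tasks.put(n)
for _ in pool:                     # one poison pill per worker
    tasks.put(None)
for t in pool:
    t.join()

print(sorted(results.queue))       # squares of 0..9, in some order
```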

…although it seems not all his readers agree with what he meant by “concurrent oriented languages”.

I strongly concur that languages *such as* Erlang (I’m saying such as, because Erlang got the concept right, and other languages /platforms/technologies may follow) will lead or at least make the transition into the future easier.

What the hell is Erlang anyway? Well:

Joe Armstrong had fault tolerance in mind when he designed and implemented the Erlang programming language in 1986, and he was subsequently the chief software architect of the project which produced Erlang/OTP, a development environment for building distributed real-time high-availability systems. More recently Joe wrote Programming Erlang: Software for a Concurrent World. He currently works for Ericsson AB where he uses Erlang to build highly fault-tolerant switching systems.

Erlang is a concurrent functional programming language. Basically there are two models of concurrency:

  • Shared state concurrency
  • Message passing concurrency

Virtually all languages use shared state concurrency. This is very difficult and leads to terrible problems when you handle failure and scale up the system.

Erlang uses pure message passing concurrency. Very few languages do this. Making things scalable and fault-tolerant is relatively easy.

Erlang is built on the ideas of:

  • Share nothing. (Processes cannot share data in any way. Actually, this is not 100% true; there are some small exceptions.)
  • Pure message passing. (Copy all data you need in the messages, no dangling pointers.)
  • Crash detection and recovery. (Things will crash, so the best thing to do is let them crash and recover afterwards.)

Erlang processes are very lightweight (lighter than threads) and the Erlang system supports hundreds of thousands of processes.

It was designed to build highly fault-tolerant systems. Ericsson has managed to achieve nine 9’s reliability [99.9999999%] using Erlang in a product called the AXD301. [Editor’s Note: According to Philip Wadler, the AXD301 has 1.7 million lines of Erlang, making it the largest functional program ever written.]

While people are talking about 16-, 32-, 64-bit… and limiting their “stuff” (whatever it is: threads, objects, RAM, …) accordingly, in Erlang there is no such hard limit.

An Erlang system can grow as big as it wants, provided you give it *enough resources*. Which means the *same* Erlang program can run on 1 node on a single workstation, or on 1,000 servers spread across different buildings (or continents). The programmer doesn’t care anyway.

How much RAM is available? How many sockets can be open? Those don’t depend on the programmer, and hopefully the programmer won’t need to care about them. Who will care is whoever deploys and runs the Erlang program.

Most people still think of programming (and worse, of Erlang) in terms of procedural languages, and then build things on top of that, including threading… a threading framework.

Erlang, on the other hand, is a sort of kernel (hence why it’s called a VM: not simply an interpreter, but a real VM that manages processes the way an OS manages OS processes). Every function runs in a different process. A process may run in its own Erlang VM node, in a different VM node on the same server, or on another server. The program doesn’t really care that much (it can care, but it doesn’t have to use a “distributed framework” the way other languages do).

More information about this exciting language:

Update: Some frameworks, in particular message queueing systems (e.g. Microsoft’s and Sun Java’s), I think got it right… but on a more complicated, heavyweight level. Erlang/OTP is, under the hood, a message queueing system, but much lighter on the CPU… and much lighter on programmer brain overhead. 😉

Update 2: As of now I still don’t know what OTP stands for 😉


Eating Slowly + Chewing Is *Very* Important!

Do you often have, or are you currently having, health problems? Obesity, diabetes, gastritis, diarrhea, and so on?

Try this very simple tip: chew your food slowly when you eat; don’t rush.

The reason? This article by Irvan Tambunan is very good:

Losing weight. According to research, when we eat slowly, we consume fewer calories. In fact, enough to lose 10 kilograms in a year without doing anything else. The brain needs 20 minutes to realize that we are full. Eat slowly, notice that we are full, and we can stop eating at the right time. So, to lose weight, we should eat slowly.

Enjoying the food. This is a very sensible reason. I remember something that happened to a friend of mine. Because he ate too fast, he once choked while eating. That was certainly torture, and the meal was no longer enjoyable. By eating slowly, we can savor the taste of the food we eat.

Good digestion. If we eat slowly, we can chew our food thoroughly. I remember a lesson from elementary school that we should chew food 32 times before swallowing. This keeps our digestion running smoothly. Since the food has already been thoroughly chewed in the mouth, the small intestine doesn’t have to work hard to break it down in order to absorb the nutrients. This makes digestion better.

Reducing stress. When eating slowly, our concentration is automatically focused on the food. We won’t think about the problems we are facing, so the problems that stress us out disappear for a while. Some people believe that one way to reduce stress is to eat. In my opinion, there is some truth to this.

Fighting fast food and the fast life. The instant lifestyle makes us eat fast food often; think of famous fast-food restaurants like McDonald’s. This makes our lives unhealthy, full of stress, and unpleasant. I suggest you not visit those restaurants too often; avoid them as much as possible. Eat at healthy restaurants, or better yet, cook your own food. Besides being healthier, cooking can be fun.

So?

If you still don’t believe it, check out the Slow Food Manifesto.
