Evolution of a POE Server

by Rocco Caputo and Socko the Puppet.

Copyright 2002. All rights reserved. This tutorial is free text. It may be distributed (but not modified) under the same terms as POE.

For the Impatient

This tutorial presents the same TCP server at four different levels of abstraction. If you're good with Perl and already know the basics of POE, you should have no trouble figuring out the rest of the tutorial from its sample programs. Complete, runnable listings are named at each step along the way.

Act 1

Before we dive into the evolution of POE::Component::Server::TCP and the steps along its way, it's important to have a grasp of certain concepts that POE embodies. In the first act, we'll cover those concepts and look at how POE implements them.

Events and Event Handlers

POE is an event driven framework for networking and multitasking. To understand POE, it's important first to understand events and event driven programming.

In the abstract sense, events are real-world things that happen. The morning alarm has gone off. The toaster has popped. Tea is done. In user interfaces the most common real-world events are mouse movements, button presses, and keystrokes.

Software events are tokens that represent these abstract occurrences. They not only convey external activity into a program, but they also indicate when internal activity happens. A timer has gone off. A socket has established its connection. A download is done.

In event driven programs, a central dispatcher hands out events to functions that react to them. The reactive functions are called event handlers because it's their job to handle events.

POE's event handlers are cooperative. No two handlers may run at once. Even POE's dispatcher is suspended while an event handler is running. Handlers therefore have an opportunity to monopolize a program. They cooperate by returning as quickly as possible, allowing other parts of the program to run with only minimal delays.

Parts of a POE program

The simplest POE program consists of two modules and some custom code: POE::Kernel, POE::Session, and the event handlers that use them.

About POE::Kernel.

POE::Kernel provides event based representations of OS kernel services. These include I/O events, alarms and other timed events, signal events, and several others we won't mention. The event services are configured through different POE::Kernel methods, such as select_read(), delay(), and sig().
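
For a concrete taste, here is a rough sketch of a _start handler that allocates one resource of each kind. It uses constructs explained later in the tutorial, and the handle ($some_handle) and event names are placeholders of our own invention.

sub demo_start {
  # Watch a previously created handle; dispatch "event_read"
  # whenever it becomes readable.
  $_[KERNEL]->select_read($some_handle, "event_read");

  # Request a single "tick" event in ten seconds.
  $_[KERNEL]->delay(tick => 10);

  # Dispatch "event_usr1" whenever the process receives SIGUSR1.
  $_[KERNEL]->sig(USR1 => "event_usr1");
}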

POE::Kernel tracks the associations between resources that generate events and the tasks that own them. It can do this because it keeps track of which task is active whenever an event is dispatched. It therefore knows which task has called its methods to allocate each resource. This all happens automatically.

POE::Kernel also knows when tasks should be destroyed. It detects when tasks have no more events to handle, and it knows when all their event generating resources have been released. Such tasks have nothing left to trigger event handlers, and POE::Kernel automatically reaps them.

POE::Kernel stops after the last session stops, since otherwise it would be sitting around doing nothing.

About POE::Session.

POE::Session instances are the tasks that POE::Kernel manages. They are loosely modeled after UNIX processes.

Each session has its own private storage space, called a "heap". Anything stored in one session's heap is not easily accessible by another session.

Each session owns its own resources and handles its own events. Resources only generate events for the sessions that own them, and events are only dispatched to sessions for which they are intended.

For example, multiple sessions can set identical alarms, and each will receive the timed event it requested. All other sessions will remain blissfully unaware of what has happened outside themselves.
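
Here is a small sketch of that isolation, using the POE::Session constructor covered in Act 2. The "ring" event and the one-second delay are our own inventions. Both sessions request a timer with the same event name, and each handler sees only its own.

for my $name (qw(alice bob)) {
  POE::Session->create(
    inline_states => {
      _start => sub {
        $_[HEAP]->{name} = $name;        # private to this session
        $_[KERNEL]->delay(ring => 1);    # identical alarms
      },
      ring => sub {
        my $who = $_[HEAP]->{name};
        print "$who got its own ring event.\n";
      },
    }
  );
}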

About event handlers.

Event handlers are plain Perl functions. What makes them special is the parameters POE::Kernel passes to them when they're called.

POE::Kernel passes parameters the usual way, through @_. The first seven members of this array define the session context in which the event is being delivered. They include a handy reference to the POE::Kernel instance running things, the name of the event itself (in case one function is handling multiple events), a reference to the session's private heap, and a reference to the session that the event was sent from.

The remaining members of @_ are arguments of the event itself. What they contain depends upon the type of event being dispatched. I/O events, for example, include two arguments: the file handle that has pending I/O, and a flag telling whether the activity is input, output, or an exception.

POE does not require programmers to assign all these parameters for every event handler. That would be a lot of silly work, seeing as most of them often go unused. Rather, the POE::Session class exports constants for the offsets into @_ where each parameter lives. This makes it very easy to pluck useful values out of the parameter list while ignoring unnecessary ones. They also allow POE::Session to change the order or number of parameters without breaking programs.

For example, KERNEL, HEAP, and ARG0 are references to the POE::Kernel singleton, the current session's private heap, and the first custom argument for the event. They may be assigned directly out of @_.

my $kernel = $_[KERNEL];
my $heap   = $_[HEAP];
my $thingy = $_[ARG0];

They may be assigned all in one go using an array slice.

my ($kernel, $heap, $thingy) = @_[KERNEL, HEAP, ARG0];

And, of course, $_[KERNEL], $_[HEAP], and $_[ARG0] may be used directly in the event handler. We usually avoid this for custom arguments since "ARG0" means very little by itself.

In all three cases we have pretended that five or more unneeded parameters simply don't exist.

Act 2

Now that you know the concepts behind POE programming, it's time to dig into a working example and see how it's done. This will lay the practical groundwork for the third act.

How to write a POE program

Simple POE programs contain three main parts: A preamble where modules are loaded and things are configured, a main part that instantiates and runs one or more POE::Session objects, and finally the functions that define the event handlers themselves.

Here then is one of the simplest POE programs that does anything. Its full, uninterrupted listing is in listing.evolution.single.

The preamble.

A program's preamble is fairly straightforward. We write a shebang line and load some modules.

#!/usr/bin/perl
use warnings;
use strict;
use POE;

The POE module hides some magic. It does nothing but load other POE modules, including POE::Kernel and POE::Session whether or not you actually ask for them. It gets away with this because those two modules are required by nearly every POE program.

POE::Kernel includes a little magic of its own. When it's first loaded, it creates the singleton POE::Kernel instance that will be used throughout the program.

POE::Session also performs a little magic when it's used. It exports the constants for event handler parameter offsets: KERNEL, HEAP, ARG0, and so on.

So a simple "use POE;" has done quite a lot of initial program setup.

The Session is instantiated and run.

We can start creating sessions once everything is set up. At least one session must be started before POE::Kernel is run, otherwise run() will have nothing to do.

In this example, we start a single task that will handle three events: _start, _stop, and count. The POE::Session constructor associates each event with the function that will handle it.

POE::Session->create(
  inline_states => {
    _start => \&session_start,
    _stop  => \&session_stop,
    count  => \&session_count,
  }
);

The first two events are provided by POE::Kernel. They notify a program when a session has just been created or is just about to be destroyed. The last event is a custom one, implemented entirely within our program. We'll discuss it shortly.

You'll notice that the program doesn't save a reference to the new POE::Session object. That's because Session instances automatically register themselves with POE::Kernel. The Kernel singleton will manage them, so programs rarely need to.

In fact, keeping extra references to POE::Session objects can be harmful. Perl will not destroy and reclaim memory for a session if there are outstanding references to it.

Next we start POE::Kernel. This begins the main loop, which detects and dispatches events. Since we're writing a demo, we also announce when POE::Kernel begins and ceases.

print "Starting POE::Kernel.\n";
POE::Kernel->run();
print "POE::Kernel's run() method returned.\n";
exit;

The Kernel's run() method will not return until every session has stopped. We exit shortly after that since the program has effectively ended.

We have kept the redundant exit as a visual reminder that the program won't run beyond run(). It isn't absolutely necessary since the program will automatically stop when its execution falls off the end of the file.

Event handlers are implemented.

Appropriately enough, we'll begin covering event handlers with _start. The _start handler is called right after the session is instantiated. Sessions use it to bootstrap themselves within their new contexts. They often initialize values in their heaps and allocate some sort of resources to keep themselves alive.

In this case, we set up an accumulator for counting and queue an event to trigger the next handler.

sub session_start {
  print "Session ", $_[SESSION]->ID, " has started.\n";
  $_[HEAP]->{count} = 0;
  $_[KERNEL]->yield("count");
}

Readers familiar with threads may find the yield() method confusing. Rather than suspending a session's execution, it places a new event near the end of the dispatch queue. That event will trigger another event handler after all the events ahead of it have taken their turns. This should become clearer when we cover multitasking later on.

We can simulate the classic behavior of yield() by returning immediately after calling it, which we have done here.

Next comes the _stop handler. POE::Kernel dispatches it after a session has gone idle and just before it's destroyed for good. It's impossible to stop a session from being destroyed once _stop is reached.

sub session_stop {
  print "Session ", $_[SESSION]->ID, " has stopped.\n";
}

It's not useful to post events from a _stop handler. Part of a session's destruction involves cleaning up any resources still associated with it. That includes events, so any created in a _stop handler will be destroyed before they can be dispatched. This catches a lot of people off-guard.
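
As a quick illustration (the event name is ours), the yield() below queues an event that will never arrive. The session's destruction sweeps it out of the queue first.

sub session_stop {
  print "Session ", $_[SESSION]->ID, " has stopped.\n";

  # Queued, but never dispatched: the session is already being
  # destroyed, and its pending events are destroyed with it.
  $_[KERNEL]->yield("too_late");
}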

Finally we have the count event's handler. Each time it runs, it increments the session's accumulator and prints the new value. We could have implemented the counting as a while loop, but we've avoided that for reasons that should become apparent shortly.

sub session_count {
  my ($kernel, $heap) = @_[KERNEL, HEAP];
  my $session_id = $_[SESSION]->ID;
  my $count      = ++$heap->{count};
  print "Session $session_id has counted to $count.\n";
  $kernel->yield("count") if $count < 10;
}

The last line of session_count() posts another count event for as long as the accumulator is less than ten. This perpetuates the session, since each new event triggers session_count() again.

The session stops when yield() is no longer called. POE::Kernel detects that no more event handlers will be triggered, and it destroys the idle session.

Here then is the single counter program's output.

  Session 2 has started.
  Starting POE::Kernel.
  Session 2 has counted to 1.
  Session 2 has counted to 2.
  Session 2 has counted to 3.
  Session 2 has counted to 4.
  Session 2 has counted to 5.
  Session 2 has counted to 6.
  Session 2 has counted to 7.
  Session 2 has counted to 8.
  Session 2 has counted to 9.
  Session 2 has counted to 10.
  Session 2 has stopped.
  POE::Kernel's run() method returned.

And here are some notes about it.

Session IDs begin at 2. POE::Kernel is its own session in many ways, and it is session 1 because it's created first.

Handlers for _start events are called before POE::Kernel->run(). This is a side effect of _start being handled within POE::Session->create().

The first count event, posted by _start's handler, is not handled right away. Rather, it's queued and will be dispatched within POE::Kernel->run().

Sessions stop when they've run out of things to trigger more event handlers. Sessions also stop when they have been destroyed by terminal signals, but we won't see that happen here.
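
Signal watching is beyond the scope of this tutorial, but for the curious, here is a rough sketch of how a session might claim SIGINT so the signal is no longer terminal. It assumes a POE recent enough to have sig_handled(), and that event_sigint has been wired to sigint_handler in the session's constructor.

sub sigint_watcher_start {
  # In _start: ask for "event_sigint" when SIGINT arrives.
  $_[KERNEL]->sig(INT => "event_sigint");
}

sub sigint_handler {
  print "Caught SIGINT; not stopping.\n";
  # Claim the signal so it does not destroy the session.
  $_[KERNEL]->sig_handled();
}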

POE::Kernel's run() method returns after the last session has stopped.

Multitasking

The counter we wrote can multitask. Each instance holds its own accumulator in its own HEAP. Each session's events pass through POE::Kernel's queue, and they are dispatched in first-in/first-out order. This forces each session to take turns.

To illustrate this happening, we'll change the previous program to run two sessions at once. The rest of the program will remain the same so we won't show it here. You can read and run it in its entirety from listing.evolution.multiple.

for (1 .. 2) {
  POE::Session->create(
    inline_states => {
      _start => \&session_start,
      _stop  => \&session_stop,
      count  => \&session_count,
    }
  );
}

And here's the modified program's output.

  Session 2 has started.
  Session 3 has started.
  Starting POE::Kernel.
  Session 2 has counted to 1.
  Session 3 has counted to 1.
  Session 2 has counted to 2.
  Session 3 has counted to 2.
  Session 2 has counted to 3.
  Session 3 has counted to 3.
  Session 2 has counted to 4.
  Session 3 has counted to 4.
  Session 2 has counted to 5.
  Session 3 has counted to 5.
  Session 2 has counted to 6.
  Session 3 has counted to 6.
  Session 2 has counted to 7.
  Session 3 has counted to 7.
  Session 2 has counted to 8.
  Session 3 has counted to 8.
  Session 2 has counted to 9.
  Session 3 has counted to 9.
  Session 2 has counted to 10.
  Session 2 has stopped.
  Session 3 has counted to 10.
  Session 3 has stopped.
  POE::Kernel's run() method returned.

So, then, what's going on here?

Each session is maintaining its own count in its own $_[HEAP]. This happens no matter how many instances are created.

POE runs each event handler in turn. Only one handler may run at a time, so most locking and synchronization issues are implicitly taken care of. POE::Kernel itself is suspended while event handlers run, and not even signals may be dispatched until an event handler returns.

Events for every session are passed through a master queue. Events are dispatched from the head of this queue, and new events are placed at its tail. This ensures that sessions take turns.

POE::Kernel's run() method returns after the last session has stopped.

Act 3

Finally we will implement a non-forking echo server with IO::Select and then port it to POE with increasing levels of abstraction. The IO::Select based server is a pretty close fit, actually, since POE itself is a non-forking framework.

A simple select() server.

First we adapt the non-forking server from Perl Cookbook recipe 17.13. The changes that have been made here are for compactness and to ease the translation into POE. We also give the server some minor purpose so that the samples are a little interesting to run.

A runnable version of the server is in listing.evolution.select.

As usual, we start by loading necessary modules and initializing global data structures.

#!/usr/bin/perl
use warnings;
use strict;
use POSIX;        # for POSIX::BUFSIZ and POSIX::EWOULDBLOCK
use IO::Socket;
use IO::Select;
use Tie::RefHash;
my %inbuffer  = ();
my %outbuffer = ();
my %ready     = ();
tie %ready, "Tie::RefHash";

Next we create the server socket. It's set non-blocking so its operations won't stop this single-process server.

my $server = IO::Socket::INET->new(
  LocalPort => 12345,
  Listen    => 10,
) or die "can't make server socket: $@\n";
$server->blocking(0);

Then comes the main loop. We create an IO::Select object to watch our sockets, and then we use it to detect activity on them. Whenever something interesting happens to a socket, we call a function to process it.

my $select = IO::Select->new($server);
while (1) {

  # Process sockets that are ready for reading.
  foreach my $client ($select->can_read(1)) {
    handle_read($client);
  }

  # Process any complete requests.  Echo the data back to the client,
  # by putting the ready lines into the client's output buffer.
  foreach my $client (keys %ready) {
    foreach my $request (@{$ready{$client}}) {
      print "Got request: $request";
      $outbuffer{$client} .= $request;
    }
    delete $ready{$client};
  }

  # Process sockets that are ready for writing.
  foreach my $client ($select->can_write(1)) {
    handle_write($client);
  }
}
exit;

That concludes the main loop. Next we have functions that process different forms of socket activity.

The first function handles sockets that are ready to be read from. If the ready socket is the main server's, we accept a new connection and register it with the IO::Select object. If it's a client socket with some input for us, we read it, parse it, and enter complete new lines into the %ready structure. The main loop will catch data from %ready and echo it back to the client.

sub handle_read {
  my $client = shift;
  if ($client == $server) {
    my $new_client = $server->accept();
    $new_client->blocking(0);
    $select->add($new_client);
    return;
  }
  my $data = "";
  my $rv = $client->recv($data, POSIX::BUFSIZ, 0);
  unless (defined($rv) and length($data)) {
    handle_error($client);
    return;
  }
  $inbuffer{$client} .= $data;
  while ($inbuffer{$client} =~ s/(.*\n)//) {
    push @{$ready{$client}}, $1;
  }
}

Next we have a function that handles writable sockets. Data waiting to be sent to a client is written to its socket and removed from its output buffer.

sub handle_write {
  my $client = shift;
  return unless exists $outbuffer{$client};
  my $rv = $client->send($outbuffer{$client}, 0);
  unless (defined $rv) {
    warn "I was told I could write, but I can't.\n";
    return;
  }
  if ( $rv == length($outbuffer{$client})
    or $! == POSIX::EWOULDBLOCK) {
    substr($outbuffer{$client}, 0, $rv) = "";
    delete $outbuffer{$client} unless length $outbuffer{$client};
    return;
  }
  handle_error($client);
}

Finally we have a function to handle read or write errors on the client sockets. It cleans up after dead sockets and makes sure they have been closed.

sub handle_error {
  my $client = shift;
  delete $inbuffer{$client};
  delete $outbuffer{$client};
  delete $ready{$client};
  $select->remove($client);
  close $client;
}
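
To try the server, here is a sketch of a throwaway client (blocking and line-based; the message text is arbitrary). Run the server, then run this in another terminal.

#!/usr/bin/perl
use warnings;
use strict;
use IO::Socket::INET;

# Connect to the echo server started above.
my $socket = IO::Socket::INET->new(
  PeerAddr => "localhost",
  PeerPort => 12345,
) or die "can't connect: $!\n";

# Send one complete line and read back the echo.
print $socket "hello, echo server\n";
my $echoed = <$socket>;
print "The server echoed: $echoed";
close $socket;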

And after about 130 lines of program, we have an echo server. Not bad, really, but we can do better.

Mapping our server to POE.

Now we'll translate the IO::Select server into one using POE. We'll use some of POE's lowest-level features. We won't save much effort this way, but the new program will retain a lot of the structure of the last.

Believe it or not, the IO::Select server is already driven by events. It contains a main loop that detects events and dispatches them, and it has a series of functions that handle those events.

To begin with, we'll throw together an empty skeleton of a POE program. Many of the IO::Select program's pieces will be draped on it shortly.

#!/usr/bin/perl
use warnings;
use strict;
use POSIX;
use IO::Socket;
use POE;
POE::Session->create(inline_states => {});
POE::Kernel->run();
exit;

Before we can continue, we need to decide what the significant events are in the program. This will flesh out the program's overall structure.

Once we know what we'll be doing, we can finish off the POE::Session constructor. We create names for the events and define the functions that will handle them. Here's the filled in Session constructor.

POE::Session->create(
  inline_states => {
    _start       => \&server_start,
    event_accept => \&server_accept,
    event_read   => \&client_read,
    event_write  => \&client_write,
    event_error  => \&client_error,
  }
);

Now it's time to start porting the IO::Select code over. We still need to track the input and output buffers for client connections, but we won't use the %ready hash here. The structures can remain global because they're keyed on socket handles, and those never collide.

my %inbuffer  = ();
my %outbuffer = ();

Next we bring over large chunks of the IO::Select server. Each chunk is triggered by one of the events we've specified, so each will migrate into its corresponding handler.

First the remaining initialization code goes into _start. The _start handler creates the server socket and allocates its event generator with select_read(). POE::Kernel's select_read() method takes two parameters: a socket handle to watch and an event to dispatch when the handle is ready for reading.

sub server_start {
  my $server = IO::Socket::INET->new(
    LocalPort => 12345,
    Listen    => 10,
    Reuse     => "yes",
  ) or die "can't make server socket: $@\n";
  $_[KERNEL]->select_read($server, "event_accept");
}

Notice that we don't save the server socket. POE::Kernel keeps track of it for us and will pass it back as an argument to event_accept. We only need a copy of the socket if we want to do something special.

Looking back to the POE::Session constructor, the event_accept event is handled by server_accept(). This handler will accept the new client socket and allocate a watcher for it.

sub server_accept {
  my ($kernel, $server) = @_[KERNEL, ARG0];
  my $new_client = $server->accept();
  $kernel->select_read($new_client, "event_read");
}

Next we handle input from the client in client_read(). It is called when an input event from select_read() is dispatched to the session. The event's first argument, $_[ARG0], is the handle that has become ready, so we can read from the socket without keeping our own copy of it.

The new client_read() is mostly the same as handle_read() from the IO::Select server. The accept() code has moved to another handler, and we don't bother with %ready anymore.

Errors are passed to event_error's handler via POE::Kernel's yield() method. Unlike select_read(), yield() doesn't know which socket we're working with, so client_read() must pass the client socket along itself. The socket then arrives in event_error's handler as its first argument, $_[ARG0].

Finally, if any output is buffered at the end of this handler, we make sure the client socket is watched for writability. The event_write handler will be called when the client socket can be written to.

sub client_read {
  my ($kernel, $client) = @_[KERNEL, ARG0];
  my $data = "";
  my $rv = $client->recv($data, POSIX::BUFSIZ, 0);
  unless (defined($rv) and length($data)) {
    $kernel->yield(event_error => $client);
    return;
  }
  $inbuffer{$client} .= $data;
  while ($inbuffer{$client} =~ s/(.*\n)//) {
    $outbuffer{$client} .= $1;
  }
  if (exists $outbuffer{$client}) {
    $kernel->select_write($client, "event_write");
  }
}

Next we define what happens when client sockets can be written to. Again, the first argument for this event is the socket that can be worked with.

If the client's output buffer is empty, we stop watching it for writability and return immediately. Otherwise we try to write the entire buffer to the socket. Whatever isn't written remains in the buffer for the next time. If it was all written, though, we destroy the buffer entirely.

The client_write() function handles errors the same way client_read() does.

sub client_write {
  my ($kernel, $client) = @_[KERNEL, ARG0];
  unless (exists $outbuffer{$client}) {
    $kernel->select_write($client);
    return;
  }
  my $rv = $client->send($outbuffer{$client}, 0);
  unless (defined $rv) {
    warn "I was told I could write, but I can't.\n";
    return;
  }
  if ( $rv == length($outbuffer{$client})
    or $! == POSIX::EWOULDBLOCK) {
    substr($outbuffer{$client}, 0, $rv) = "";
    delete $outbuffer{$client} unless length $outbuffer{$client};
    return;
  }
  $kernel->yield(event_error => $client);
}

Finally we handle any errors that occurred along the way. We remove the client socket's input and output buffers, turn off all select-like events, and make sure the socket is closed. This effectively destroys the client's connection.

sub client_error {
  my ($kernel, $client) = @_[KERNEL, ARG0];
  delete $inbuffer{$client};
  delete $outbuffer{$client};
  $kernel->select($client);
  close $client;
}

And it's done.

Wheels (part 1)

The POE based server we just completed is still a bunch of work. What's worse, most of the work never changes from one server to the next. Listening to a server socket and accepting connections from it is a well-established science. Likewise, performing buffered operations on non-blocking sockets is largely the same everywhere. Reinventing these wheels for every server gets old very fast.

We created a group of classes under the POE::Wheel namespace to encapsulate these standard algorithms. Each Wheel contains some initialization code to set up event generators, and each implements the handlers for those events.

Wheels' creation and destruction are very important parts of their operation. Upon creation, they plug their handlers into the session that instantiated them. During destruction, those handlers are unplugged and the event generators are shut down. This close binding prevents one session from giving a wheel to another.

POE::Kernel does not manage wheels for you, so it's important that they be kept somewhere safe. They are most commonly stored in their sessions' heaps.

The events generated by wheels are usually at a higher level than the ones they handle internally. For example the "input" events emitted by POE::Wheel::ReadWrite include parsed things, not raw chunks of bytes. This is because ReadWrite parses data as well as just reading or writing it.

POE::Wheel::ListenAccept encapsulates the concept of listening on a server socket and accepting connections from it. It takes three parameters: A server socket to listen on, the name of an event to generate when a connection has been accepted, and the name of an event to generate when an error has occurred.

In this sample, a ListenAccept wheel is created to listen on a previously created server socket. When connections arrive, it emits "event_accepted" with the accepted client socket in ARG0. If any errors occur, it emits "event_error" with some information about the problem. We assume that handlers for these events have been defined and implemented elsewhere.

$_[HEAP]->{server} = POE::Wheel::ListenAccept->new(
  Handle      => $server_socket,
  AcceptEvent => "event_accepted",
  ErrorEvent  => "event_error",
);

POE::Wheel::ReadWrite implements common algorithms necessary to perform buffered I/O on non-blocking sockets. It is a baroque beast, and we'll only discuss a few of the many parameters it accepts.

In this sample, a ReadWrite wheel will work with a previously accepted client socket. It parses input into lines by default, so every "client_input" event represents one line of input. The "client_error" events it emits represent the occasional error.

$_[HEAP]->{client} = POE::Wheel::ReadWrite->new(
  Handle     => $client_socket,
  InputEvent => "client_input",
  ErrorEvent => "client_error",
);

That example is a little misleading. Subsequent ReadWrite wheels would clobber earlier ones, resulting in destroyed connections. The upcoming program will do it right.

Speaking of the example, here it is. Its full listing is in the file listing.evolution.listenaccept.

First we load the modules we'll need. The last line contains some nonstandard magic. Rather than importing symbols into the current package, the parameters to POE.pm are additional modules to load. The POE:: prefix will be prepended to each, saving a little typing.

#!/usr/bin/perl
use warnings;
use strict;
use POSIX;
use IO::Socket;
use POE qw(Wheel::ListenAccept Wheel::ReadWrite);

If that "use POE" line is too weird, it's perfectly acceptable to replace it with the following four lines.

use POE::Kernel;
use POE::Session;
use POE::Wheel::ListenAccept;
use POE::Wheel::ReadWrite;

Next we have the program's main part. Again, we create the server session, run everything, and exit when things are done. It's nearly identical to the previous example, though some event and handler names have changed.

POE::Session->create(
  inline_states => {
    _start          => \&server_start,
    server_accepted => \&server_accepted,
    server_error    => \&server_error,
    client_input    => \&client_input,
    client_error    => \&client_error,
  }
);
POE::Kernel->run();
exit;

Now we handle the server's _start event by creating the server socket and starting a ListenAccept wheel to manage it. As before, we don't keep a copy of the server socket, but we do need to hold onto the ListenAccept wheel. Otherwise the wheel would be destroyed as soon as it fell out of scope at the end of the function, and our server would be very short-lived.

sub server_start {
  my $server = IO::Socket::INET->new(
    LocalPort => 12345,
    Listen    => 10,
    Reuse     => "yes",
  ) or die "can't make server socket: $@\n";
  $_[HEAP]->{server} = POE::Wheel::ListenAccept->new(
    Handle      => $server,
    AcceptEvent => "server_accepted",
    ErrorEvent  => "server_error",
  );
}

ListenAccept will emit a "server_accepted" event for every connection it accepts. Each of these events contains a newly accepted client socket in ARG0. The next function, server_accepted(), wraps each socket in a POE::Wheel::ReadWrite instance.

sub server_accepted {
  my $client_socket = $_[ARG0];
  my $wheel         = POE::Wheel::ReadWrite->new(
    Handle     => $client_socket,
    InputEvent => "client_input",
    ErrorEvent => "client_error",
  );
  $_[HEAP]->{client}->{$wheel->ID()} = $wheel;
}

As we alluded to before, server_accepted() takes advantage of every wheel's unique ID to keep them from clobbering each other. Otherwise each new connection would destroy the wheel belonging to the previous one.

Next we handle ReadWrite's input events with client_input(). By default, ReadWrite parses input into lines and emits an input event for each one. Those events include two arguments apiece: the line parsed from the input, and the ID of the wheel that parsed it.

The client_input handler uses the wheel ID to match the input back to its wheel. Once the proper wheel has been established, its put() method is called to buffer the input for writing back to the client. The ReadWrite wheel handles all the buffering and flushing for us.

sub client_input {
  my ($heap, $input, $wheel_id) = @_[HEAP, ARG0, ARG1];
  $heap->{client}->{$wheel_id}->put($input);
}
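
Incidentally, the line parsing isn't hardwired. ReadWrite delegates input framing to a filter object, POE::Filter::Line by default, and a different POE::Filter class can be supplied when the wheel is created. Here's a sketch that makes the default explicit; it's a variant of the constructor in server_accepted() above, and POE::Filter::Line ships with POE.

my $wheel = POE::Wheel::ReadWrite->new(
  Handle     => $client_socket,
  Filter     => POE::Filter::Line->new(),   # the default framing
  InputEvent => "client_input",
  ErrorEvent => "client_error",
);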

Finally we handle client and server errors with client_error() and server_error(), respectively. We simply delete the corresponding wheel. This destroys any buffers associated with the wheel, then shuts down the appropriate socket.

sub client_error {
  my ($heap, $wheel_id) = @_[HEAP, ARG3];
  delete $heap->{client}->{$wheel_id};
}

sub server_error {
  delete $_[HEAP]->{server};
}

There are a couple of important points to note, though.

If we had kept a copy of any of these sockets, they would not have closed when their wheels were let go. The extra references we held would have kept them active, and we would have been responsible for destroying them ourselves.

If the server_error event ever fires, possibly because we've run out of file handles to create sockets with, the server socket will shut down but existing client connections will continue. In applications where the clients should also shut down, we would delete $_[HEAP]->{client} as well, as the sketch below shows.
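
Here is a sketch of that variant, replacing the server_error() above:

sub server_error {
  delete $_[HEAP]->{server};

  # Also release every client wheel, closing those sockets too.
  delete $_[HEAP]->{client};
}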

Wheels (part 2)

By using wheels, we've reduced the amount of code needed for a new server by about 45 lines. We can reduce it just a little more by replacing the ListenAccept wheel with POE::Wheel::SocketFactory. The SocketFactory combines the server socket's creation with the act of accepting new connections from it. It also does a lot more, but we won't touch upon that here.

Rather than rehash the entire program, we'll just replace the _start event's handler. The rest of the program is identical, and its full listing is in the file listing.evolution.socketfactory.

sub server_start {
  $_[HEAP]->{server} = POE::Wheel::SocketFactory->new(
    BindPort     => 12345,
    SuccessEvent => "server_accepted",
    FailureEvent => "server_error",
  );
}

That shaves another six lines off the server. We can do much better than that, though.

Components

During the evolution of this simple echo server, we've managed to reduce the server from about 130 lines to about 75. In the process, we've whittled away the main loop and a lot of the code for dealing with sockets. In its place, we've added code that manages POE::Wheel objects instead.

It turns out that managing POE::Wheel objects is only a little less tedious than writing servers longhand. Our servers still must set up SocketFactory instances to listen on sockets, they still must create ReadWrite wheels to interact with clients, and they still must handle errors. Even with wheels, these things happen pretty much the same way for every server, and they're just the coding overhead necessary before sitting down to write the fun stuff.

As with wheels, we've abstracted the repetitive work into something larger. In this case, Ann Barcomb designed a server component to manage the wheels and other things for us. Nearly all of the tedious overhead is gone.

The runnable version of this example is in listing.evolution.component.

As usual, we set up the Perl program by loading the modules we'll need.

#!/usr/bin/perl
use warnings;
use strict;
use POE qw(Component::Server::TCP);

Next we create and run the TCP server component. It will listen on port 12345, and it will handle all the boring tasks of accepting connections, managing wheels, and so forth.

POE::Component::Server::TCP is customizable through callbacks. In its simplest usage, we only need to supply the function to handle input.

POE::Component::Server::TCP->new(
  Port        => 12345,
  ClientInput => \&client_input,
);
POE::Kernel->run();
exit;

Finally we define the input handler.

Every client connection has been given its own POE::Session instance, so each has its own heap to store things in. This simplifies our code: each connection's heap tracks that connection's data, so we don't have to. The component has already placed each connection's ReadWrite wheel into $heap->{client} for us.

sub client_input {
  my ($heap, $input) = @_[HEAP, ARG0];
  $heap->{client}->put($input);
}

And, uh, that's all. Our simple echo server is now under 20 lines, most of which deal with the aspects that make it unique.
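
For reference, here is the whole program again, assembled from the pieces above; it should match listing.evolution.component apart from incidentals.

#!/usr/bin/perl
use warnings;
use strict;
use POE qw(Component::Server::TCP);

# The component accepts connections, manages wheels, and
# dispatches each client's input to our handler.
POE::Component::Server::TCP->new(
  Port        => 12345,
  ClientInput => \&client_input,
);

POE::Kernel->run();
exit;

# Echo each line back to the client that sent it.
sub client_input {
  my ($heap, $input) = @_[HEAP, ARG0];
  $heap->{client}->put($input);
}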

Epilogue

We've evolved a 128-line server into 20 lines of mostly unique code. At each step along the way we've been able to focus more on the task at hand instead of on infrastructure necessary to write servers.

Each step mimicked the various stages of POE's development. All the tedious parts still exist, and all the higher-level conveniences are built using them. As a result, if a program needs more control than a high-level class provides, it's straightforward to write something on a lower level that does precisely what is needed.

Code on the low and high levels alike continues to multitask and network, because all the levels boil down to the same common denominator.

The Authors

Rocco Caputo is the original developer and lead programmer for the POE project. He has been designing and writing software since 1978.

Socko is Rocco's mentor and master. He is the founder and leader of a vast sock puppet conspiracy that plans to use POE for something none of them wish to confirm or deny at this time.