Dual Authentication in Pylons with repoze.who

Previously I talked about using custom authentication for a web application, supporting two methods of authentication simultaneously. Browser-based users would be challenged with a typical styled login form, while applications integrating with the service would be challenged with HTTP Basic Authentication. The choice of which authentication to use would be based on how clients are classified. Taking advantage of content negotiation, clients preferring an HTML response would be classified as a "browser" and would be challenged with a login form; any other clients would be classified as an "app" and would be challenged with HTTP Basic Auth.

Implementing this custom authentication and classification scheme for Pylons was quite simple using repoze.who. Here I describe how I implemented it.

I'll start from a skeleton Pylons app.
$ paster create -t pylons CustomAuth

The defaults of 'mako' for templates and no SQLAlchemy are fine for this example.

I based my repoze.who configuration on this recipe from the Pylons cookbook, but I'll quickly repeat the necessary steps here so that the example is complete.

If you haven't done so already, install the repoze.who package with:
$ easy_install repoze.who

The next step is to add repoze.who to the WSGI middleware of your Pylons app. Edit config/middleware.py and add an import:
from repoze.who.config import make_middleware_with_config as make_who_with_config

Then after the comment "CUSTOM MIDDLEWARE HERE" add the following line:
app = make_who_with_config(app, global_conf, app_conf['who.config_file'], app_conf['who.log_file'], app_conf['who.log_level'])

Now edit development.ini and add to the [app:main] section:
who.config_file = %(here)s/who.ini
who.log_level = debug
who.log_file = stdout

Now create a who.ini file in the same location as development.ini containing:
[plugin:form]
use = repoze.who.plugins.form:make_redirecting_plugin
login_form_url = /account/login
login_handler_path = /account/dologin
logout_handler_path = /account/logout
rememberer_name = auth_tkt

[plugin:auth_tkt]
use = repoze.who.plugins.auth_tkt:make_plugin
secret = yoursecret

[plugin:basicauth]
# identification and challenge
use = repoze.who.plugins.basicauth:make_plugin
realm = CustomAuth

[general]
request_classifier = customauth.lib.auth:custom_request_classifier
challenge_decider = repoze.who.classifiers:default_challenge_decider

[identifiers]
plugins =
    form;browser
    auth_tkt;browser
    basicauth

[authenticators]
plugins =
    customauth.lib.auth:UserModelPlugin

[challengers]
plugins =
    form;browser
    basicauth

[mdproviders]
plugins =
    customauth.lib.auth:UserModelPlugin

You would replace "customauth" with the package name of your Pylons app.

Take note of the request_classifier in [general]. It specifies a custom classifier function "custom_request_classifier" located in the lib.auth module of your application. This function is called for each request and returns a classification that, for this application, will be either "browser" or "app" (some other classifications are possible, like "dav", but we're not worrying about them in this application; they'll be treated like "app").

You can see that in the [identifiers] and [challengers] sections there are multiple plugins listed. The choice of plugin in each case is based on the value returned by the classifier. If the classification is "browser" then the "form" challenger is used; otherwise the "basicauth" challenger is chosen. This is the key to the custom authentication, and as you can see it is all handled by repoze.who and is extremely simple to configure.

Create an auth.py file in the lib directory of the Pylons app containing:
from webob import Request

import zope.interface
from repoze.who.classifiers import default_request_classifier
from repoze.who.interfaces import IRequestClassifier

class UserModelPlugin(object):
    
    def authenticate(self, environ, identity):
        """Return username or None.
        """
        try:
            username = identity['login']
            password = identity['password']
        except KeyError:
            return None
        
        if (username, password) == ('foo', 'bar'):
            return username
        else:
            return None
    
    def add_metadata(self, environ, identity):
        username = identity.get('repoze.who.userid')
        if username is not None:
            identity['user'] = dict(
                username = username,
                name = 'Mr Foo',
            )
    

def custom_request_classifier(environ):
    """ Returns one of the classifiers 'app', 'browser' or any
    standard classifiers returned by
    repoze.who.classifiers:default_request_classifier
    """
    classifier = default_request_classifier(environ)
    if classifier == 'browser':
        # Decide if the client is a (user-driven) browser or an application
        request = Request(environ)
        if not request.accept.best_match(['application/xhtml+xml', 'text/html']):
            # In our view, any client who doesn't support HTML/XHTML is an "app",
            #   not a (user-driven) "browser".
            classifier = 'app'
    
    return classifier
zope.interface.directlyProvides(custom_request_classifier, IRequestClassifier)

This is where the custom_request_classifier function is defined. It first calls the default_request_classifier provided by repoze.who, which attempts to classify the request as one of a few basic types: 'dav', 'xmlpost', or 'browser'. If the default classification results in 'browser' then we try to classify it further based on content negotiation. If the client prefers an HTML or XHTML response then we leave the classification as 'browser'; otherwise we classify it as 'app'.

The other part of the auth module is the UserModelPlugin class. This class provides "authenticator" and "mdprovider" plugins. The job of the authenticate method is to authenticate the request, typically by verifying the username and password provided, but of course that depends on the type of authentication used. In this example, we simply provide a stub authenticator that compares authentication details against a hard-coded username/password pair. In a real app you would authenticate against data in a database or LDAP service, or whatever you decided to use.

The add_metadata method of UserModelPlugin is called to supply metadata about the authenticated user. In this example we simply supply a hard-coded name, but in a real app you would fetch details from a database or LDAP or whatever.
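For illustration only, here is a rough sketch of what a database-backed UserModelPlugin might look like. It assumes a hypothetical SQLAlchemy Session and a User model with username, name and check_password attributes, none of which exist in this skeleton app:

class UserModelPlugin(object):

    def authenticate(self, environ, identity):
        """Return the username if the credentials check out, else None."""
        try:
            username = identity['login']
            password = identity['password']
        except KeyError:
            return None
        # Hypothetical lookup; swap in your own database/LDAP query.
        user = Session.query(User).filter_by(username=username).first()
        if user is not None and user.check_password(password):
            return username
        return None

    def add_metadata(self, environ, identity):
        username = identity.get('repoze.who.userid')
        if username is None:
            return
        user = Session.query(User).filter_by(username=username).first()
        if user is not None:
            identity['user'] = dict(username=user.username, name=user.name)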

The final bit of code needed is the login form. Create an account controller:
$ paster controller account

Then edit controllers/account.py and add a login method to AccountController:
    def login(self):
        identity = request.environ.get('repoze.who.identity')
        if identity is not None:
            came_from = request.params.get('came_from', None)
            if came_from:
                redirect_to(str(came_from))
        
        return render('/login.mako')

Also add a test method to the same controller so that we can verify authentication works:
    def test(self):
        identity = request.environ.get('repoze.who.identity')
        if identity is None:
            # Force skip the StatusCodeRedirect middleware; it was stripping
            #   the WWW-Authenticate header from the 401 response
            request.environ['pylons.status_code_redirect'] = True
            # Return a 401 (Unauthorized) response and signal the repoze.who
            #   basicauth plugin to set the WWW-Authenticate header.
            abort(401, 'You are not authenticated')
        
        return """
<body>
Hello %(name)s, you are logged in as %(username)s.
<a href="/account/logout">logout</a>
</body>
</html>
""" %identity['user']


The test action checks whether a user has been authenticated for the current request. If not, it forces a 401 response which will have a different effect depending on which classification was chosen. If the request was classified as "browser" then, due to the repoze.who config specifying "form" as the challenger plugin for this classification, the repoze.who middleware will intercept the 401 response and replace it with a 302 redirect to the login form page. For any other classification, the "basicauth" challenger will be chosen which will return the 401 response with an appropriate "WWW-Authenticate" header.

Note that we needed to suppress the StatusCodeRedirect middleware for the 401 response to prevent Pylons from returning a custom error document and messing with our 401 error.

In a real application you may want to move the identity check into the __before__ method of the controller (or BaseController class) or into a custom decorator. Or you could use repoze.what.
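As a minimal sketch (not part of the example app), a __before__ method on the BaseController in lib/base.py could protect every action like this, assuming request and abort are imported there as in a standard Pylons project:

class BaseController(WSGIController):

    def __before__(self):
        identity = request.environ.get('repoze.who.identity')
        if identity is None:
            # Let the 401 (and its WWW-Authenticate header) reach the
            #   repoze.who challengers instead of the error documents.
            request.environ['pylons.status_code_redirect'] = True
            abort(401, 'You are not authenticated')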

In the templates directory create login.mako containing a simple form such as:
<html>
<body>
  <p>
    <form action="/account/dologin" method="POST">
      Username: <input type="text" name="login" value="" />
      <br />
      Password: <input type="password" name="password" value ="" />
      <br />
      <input type="submit" value="Login" />
    </form>
  </p>
</body>
</html>


Now you should be ready to run the application and test authentication.
$ paster serve --reload development.ini


Using your favourite web browser, go to http://127.0.0.1:5000/account/test

You should immediately be redirected to /account/login (with a came_from parameter) with your login form displayed. Enter bogus details and you shouldn't make it past the form. Now enter the hard-coded login details ("foo", "bar") and you should be authenticated and see the text from /account/test.

Now we can test whether basic auth works. Using curl, try to fetch /account/test
$ curl -i http://127.0.0.1:5000/account/test
HTTP/1.0 302 Found
Server: PasteWSGIServer/0.5 Python/2.5.1
Date: Tue, 03 Mar 2009 08:57:59 GMT
Location: /account/login?came_from=http%3A%2F%2F127.0.0.1%3A5000%2Faccount%2Ftest
content-type: text/html
Connection: close

<html>
  <head><title>Found</title></head>
  <body>
    <h1>Found</h1>
    <p>The resource was found at <a href="/account/login?came_from=http%3A%2F%2F127.0.0.1%3A5000%2Faccount%2Ftest">/account/login?came_from=http%3A%2F%2F127.0.0.1%3A5000%2Faccount%2Ftest</a>;
you should be redirected automatically.
/account/login?came_from=http%3A%2F%2F127.0.0.1%3A5000%2Faccount%2Ftest
<!--  --></p>
    <hr noshade>
    <div align="right">WSGI Server</div>
  </body>
</html>

You can see that, by default, the request is classified as 'browser' and so a 302 redirect to the login form was returned. Note that if no Accept header field is present, then it is assumed that the client accepts all media types, which is why the request was classified as "browser".

Now let's specify a preference for 'application/json' (using the Accept header) and see what we get.
$ curl -i -H "Accept:application/json" http://127.0.0.1:5000/account/test
HTTP/1.0 401 Unauthorized
Server: PasteWSGIServer/0.5 Python/2.5.1
Date: Tue, 03 Mar 2009 09:21:09 GMT
WWW-Authenticate: Basic realm="CustomAuth"
content-type: text/plain; charset=utf8
Connection: close

401 Unauthorized
This server could not verify that you are authorized to
access the document you requested.  Either you supplied the
wrong credentials (e.g., bad password), or your browser
does not understand how to supply the credentials required.

Perfect. We get a 401 response with a WWW-Authenticate header specifying "Basic" authentication is required. (Note that ideally we should return a JSON response body as that is what the client requested.)

Now we can repeat the request, including our authentication details.
$ curl -i -H "Accept:application/json" -u foo:bar http://127.0.0.1:5000/account/test
HTTP/1.0 200 OK
Server: PasteWSGIServer/0.5 Python/2.5.1
Date: Tue, 03 Mar 2009 11:39:43 GMT
Content-Type: text/html; charset=utf-8
Pragma: no-cache
Cache-Control: no-cache
Content-Length: 107

<html>
<body>
Hello Mr Foo, you are logged in as foo.
<a href="/account/logout">logout</a>
</body>
</html>


And there we have it. Dual authentication on the same controller.

RESTful HTTP with Dual Authentication

For a recent web service project I wanted to make it as RESTful as possible. It needed to provide both a user interface (for interactive users) and an API for programmatic integration. So I implemented both, but the two are not separate: every applicable resource is exposed under a single URI, usable both by interactive users (with web browsers) and by applications.

The "magic" of HTTP content negotiation is what makes this work. Clients that prefer HTML will get a rich HTML UI to interact with the application and data. Clients that prefer JSON will get back a JSON representation of the resource and, similarly, those that prefer XML will get back an XML representation. So most URIs provide 3 representations of themselves: HTML, JSON and XML.

When web browsers make an HTTP request they send an "Accept" header indicating their preference for HTML, so interactive users get the rich HTML UI, all styled and pretty looking. However, they are still viewing exactly the same resource as those fetching the JSON or XML representation; it is just pleasing to the eye and surrounded by navigation and other UI niceties.
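To make that concrete, here is a rough sketch (not code from the actual project) of choosing a representation with webob's Accept handling; the render_* callables are placeholders for whatever produces each format:

from webob import Request

def choose_representation(environ, resource, render_html, render_json, render_xml):
    """Pick a renderer for the resource based on the client's Accept header."""
    request = Request(environ)
    offers = ['text/html', 'application/xhtml+xml',
              'application/json', 'application/xml']
    best = request.accept.best_match(offers)
    if best in (None, 'text/html', 'application/xhtml+xml'):
        return render_html(resource)   # browsers, or no Accept header at all
    elif best == 'application/json':
        return render_json(resource)
    else:
        return render_xml(resource)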

All this should be pretty familiar to those who already play with RESTful HTTP. The part of the implementation that may not be familiar is how I handled authentication.

To keep with the typical "web experience" for interactive users, I wanted to provide the conventional login form/cookies method of authentication. This method is all but useless for applications, so I wanted to provide HTTP Basic Auth for them.

Now, given that a resource lives at a single URI, how do we support both types of authentication at once? Or perhaps the question is: should we? I decided the answer was "yes", as I didn't want to force interactive users to use HTTP Auth (browser auth prompts are intrusive and unstyled [1]; most users aren't used to them; and, perhaps worst of all, you can't log out in most browsers without plugins or hackery [2]).

So how did I support two forms of authentication simultaneously without separating web UI URIs from "API" URIs? I relied on our old friend, content negotiation. I decided that any client who negotiates to receive an HTML representation is classified as a "browser" and will be challenged for authentication with a login form (redirected to the login page) and remembered with cookies. Any other client will be classified as an "app" and will be challenged with HTTP Basic Auth (with a 401 response).

I tossed this idea around for a while, deciding whether it was too much hackery, but decided to implement it and see how it fared in practice. My conclusion is that it does the job well, allowing a resource not only to provide multiple representations of itself, but also to have the authentication method chosen that best fits the client.

I share this because I am interested in comments from the RESTful community as to how others tackle this kind of problem. Is this a suitable use of content negotiation or am I pushing the whole RESTful ideology too far?

Is it better practice to separate the "UI" from the "API", in effect exposing a resource in two places (doesn't sound very RESTful to me)? Is it better practice to enforce only one type of authentication, making users accept the awkward way that browsers handle HTTP Auth?

On a final (implementation-related) note, I built the application in question using Pylons and for the custom authentication I used repoze.who which ended up being the perfect tool for the job. repoze.who is very pluggable and so with minimal code I was able to configure it to handle authentication in exactly the way I wanted. If I get a chance later I'll write about how I configured repoze.who with Pylons to handle dual authentication.

[1] When will the W3C improve HTTP authentication so that it can be optionally styled, doing away with the need for custom form/cookie auth for most web sites?

[2] When will browser makers add a simple logout option for HTTP Auth?

Mirrored swap with zfs on OpenSolaris

I recently installed OpenSolaris 2008.11 on my development server (highly recommended, btw). Out of the box it installs with zfs root filesystems (a relatively new feature in the Solaris/OpenSolaris world), which makes many administrative tasks much easier, such as taking filesystem snapshots, performing safe upgrades (upgrades are performed on a snapshot/clone of the live root, which can then be booted from; falling back to the previous root is the easy backout method), and mirroring the root filesystem onto a second disk.

After installing a second disk, mirroring the root filesystem was as easy as a zpool attach command (after partitioning & labelling the disk for Solaris use).

The install didn't, however, configure a swap partition on top of zfs. Just a plain old standard swap slice. Very boring!

In pre-Solaris 10 days I would configure mirrored swap (and root) using Disksuite. In these modern times I wanted to see how difficult it would be to set up mirrored swap on top of zfs. Not too difficult at all, it turns out. This is how to do it.

Choose a slice that exists on both disks with the same size. In my case, the OpenSolaris install had configured a 2GB slice to use for swap. I disabled swap on that slice with:
$ pfexec swap -d /dev/dsk/c3d1s1

Then create a new mirrored zfs pool across the two disks (if you only have one disk, just create a standard zpool on the one slice):
$ pfexec zpool create -m legacy -f swap mirror c3d0s1 c3d1s1

Specify "-m legacy" to prevent zpool from creating and mounting a zfs filesystem at /swap automatically. We don't want to use this zfs pool for normal filesystems, and "legacy" tells zfs to leave it alone.

Next, create a zfs volume that can be accessed as a block device (like "/dev/{dsk,rdsk}/path"). This type of zfs volume is called a "zvol" and comes with block devices at "/dev/zvol/{dsk,rdsk}/path". It seems that zvols must be created with a fixed size (probably reasonable, given the confusion that growing and shrinking such devices could cause) so we use "-V" to specify the size of the volume. The only gotcha is that the size must be a multiple of the volume block size, so I chose the largest multiple of 512KB below the size of the slice (1.95GB in my case):
$ pfexec zfs create -V 1945600k swap/swap0
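Just to spell out the rounding (a throwaway sketch; the slice size below is a made-up example, not my actual slice), the volume size is simply the slice size rounded down to a multiple of the block size:

>>> block_kb = 512
>>> slice_kb = 2040000                  # hypothetical slice size in KB
>>> (slice_kb // block_kb) * block_kb   # largest usable multiple
2039808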

We can verify that worked by checking for a block device:
$ ls -l /dev/zvol/dsk/swap/swap0 
lrwxrwxrwx   1 root     root          35 Feb 13 18:48 /dev/zvol/dsk/swap/swap0 -> ../../../../devices/pseudo/zfs@0:2c

Finally, tell Solaris to start using it for swap and we are done:
$ pfexec swap -a /dev/zvol/dsk/swap/swap0
$ swap -l
swapfile                  dev    swaplo   blocks     free
/dev/zvol/dsk/swap/swap0 182,2         8  3891192  3891192

Lastly, check the status of the zfs pool and make sure it is healthy (usually worth doing this sooner!):
$ zpool status swap
  pool: swap
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        swap        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0s1  ONLINE       0     0     0
            c3d1s1  ONLINE       0     0     0

errors: No known data errors

Update: One last step (that I forgot in the original write-up) is to make the swap setting persistent. This is done with an entry in /etc/vfstab:
/dev/zvol/dsk/swap/swap0        -               -               swap    -       no      -

Make sure to test it with a reboot.

Quickly create SMF service manifests for Solaris using Manifold

I find SMF (in Solaris and OpenSolaris) to be the best thing to happen to service management since someone decided that runlevels and symlinks were a handy way to control services at startup & shutdown. No more init.d scripts ... win.

An arguable drawback with SMF is that you have to define your service configuration with an XML file, called a service manifest. I think it would be fair to say that most people do what I used to do: copy an existing manifest and change the relevant bits. A simple but practical method, admittedly, but I decided it could be improved upon.

For that reason I recently put together a little tool called Manifold. It is a simple command-line tool, written in Python, that creates the SMF manifest for you after asking you some questions about the service.

The best way to explain what it does is with a demonstration. Here I will use Manifold to create an SMF manifest for memcached, showing how to validate the result and create the service with it.

Using Manifold to create an SMF manifest for memcached is easy. Give it an output filename and it will prompt for all the answers it needs to create the manifest.
$ manifold memcached.xml

The service category (example: 'site' or '/application/database') [site] 

The name of the service, which follows the service category
   (example: 'myapp') [] memcached

The version of the service manifest (example: '1') [1] 

The human readable name of the service
   (example: 'My service.') [] Memcached

Can this service run multiple instances (yes/no) [no] ? yes

Enter value for instance_name (example: default) [default] 

Full path to a config file; leave blank if no config file
  required (example: '/etc/myservice.conf') [] 

The full command to start the service; may contain
  '%{config_file}' to substitute the configuration file
   (example: '/usr/bin/myservice %{config_file}') [] /opt/memcached/bin/memcached -d

The full command to stop the service; may specify ':kill' to let
  SMF kill the service processes automatically
   (example: '/usr/bin/myservice_ctl stop' or ':kill' to let SMF kill
  the service processes automatically) [:kill] 

Choose a process management model:
  'wait'      : long-running process that runs in the foreground (default)
  'contract'  : long-running process that daemonizes or forks itself
                (i.e. start command returns immediately)
  'transient' : short-lived process, performs an action and ends quickly
   [wait] contract

Does this service depend on the network being ready (yes/no) [yes] ? 

Should the service be enabled by default (yes/no) [no] ? 

The user to change to when executing the
  start/stop/refresh methods (example: 'webservd') [] webservd

The group to change to when executing the
  start/stop/refresh methods (example: 'webservd') [] webservd

Manifest written to memcached.xml
You can validate the XML file with "svccfg validate memcached.xml"
And create the SMF service with "svccfg import memcached.xml"


View the resulting manifest:
$ cat memcached.xml 
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
        Created by Manifold
--><service_bundle type="manifest" name="memcached">

    <service name="site/memcached" type="service" version="1">

        
        
        

        <dependency name="network" grouping="require_all" restart_on="error" type="service">
            <service_fmri value="svc:/milestone/network:default"/>
        </dependency>


        <instance name="default" enabled="false">
            

            <method_context>
                <method_credential user="webservd" group="webservd"/>
            </method_context>

            <exec_method type="method" name="start" exec="/opt/memcached/bin/memcached -d" timeout_seconds="60"/>

            <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60"/>

            <property_group name="startd" type="framework">
                
                
                <propval name="duration" type="astring" value="contract"/>
                <propval name="ignore_error" type="astring" value="core,signal"/>
            </property_group>

            <property_group name="application" type="application">
                
            </property_group>

        </instance>
        
        
        
        <stability value="Evolving"/>

        <template>
            <common_name>
                <loctext xml:lang="C">
                    Memcached
                </loctext>
            </common_name>
        </template>

    </service>

</service_bundle>


Now validate the manifest and use it to create the SMF service:
$ svccfg validate memcached.xml
$ sudo svccfg import memcached.xml 
$ svcs memcached
STATE          STIME    FMRI
disabled        9:52:18 svc:/site/memcached:default


The service can be started and controlled using svcadm:
$ sudo svcadm enable memcached
$ svcs memcached
STATE          STIME    FMRI
online          9:52:53 svc:/site/memcached:default
$ ps auxw | grep memcached
webservd 16098  0.0  0.1 2528 1248 ?        S 09:52:53  0:00 /opt/memcached/bin/memcached -d


Find more information at the Manifold project page or download Manifold from pypi.

Examining & killing request threads in Pylons

I've been working with Pylons quite a lot lately and have been very impressed. Today I discovered a handy tool for debugging Pylons (and any WSGI/Paster served apps) that provides a web interface to enumerate, poke at, and even kill currently active request threads.

It is called "egg:Paste#watch_threads" (if you can call that a name) and obviously it is a feature of Paster (so if you serve your Pylons app via mod_wsgi, for example, you wouldn't be able to use it; not that it is recommended to enable it in a production environment given the information/power it exposes).

Enabling it for a Pylons app is simply a matter of modifying the config file (development.ini). It took me a bit of scanning of the Paster docs to work out how to get the config correct, so I'll share the simple magic here.

You need to replace this part of the Pylons config (e.g. development.ini):
[app:main]
use = egg:Myapp
full_stack = true

with this:
[composite:main]
use = egg:Paste#urlmap
/ = myapp
/.tracker = watch_threads

[app:watch_threads]
use = egg:Paste#watch_threads
allow_kill = true

[app:myapp]
use = egg:Myapp
full_stack = true
#... rest of app config ...


What we are doing is replacing the main app with a composite app. The composite app uses "egg:Paste#urlmap" to mount the Pylons app at "/" while also mounting the "watch_threads" app at "/.tracker" (use whatever path you like; I borrowed from the examples I found).

So now if you fire up the Pylons application it should behave like normal, but you should also be able to browse to "/.tracker" (e.g. http://127.0.0.1:5000/.tracker) to see the active request thread debugger.

Below is a screenshot demonstrating watch_threads examining a Pylons app I was working on. Two threads are active; the request/WSGI environment is being shown for one of them.


Python 3.0 on Mac OS X with readline

Python 3.0 is out now and even though an OS X package isn't available yet, it is easy to build from source on a Mac. However, without some tweaking, you usually end up with a Python interpreter that lacks line-editing capabilities. You know, using cursor keys to edit the command-line and access history. The problem is that Apple doesn't provide a readline library (due to licensing issues they offer a functionally similar but different library called editline) so by default Python builds without readline support and hence no editing/history support. This always frustrates me.

Luckily, this is easily fixed so keep reading.

You can tell when readline isn't going to be included by examining the end of the make output. You will see something like this:
Failed to find the necessary bits to build these modules:
_gdbm              ossaudiodev        readline        
spwd                                                  
To find the necessary bits, look in setup.py in detect_modules() for the module's name.

The steps below detail my method for adding readline (and gdbm, which you can skip if you don't want it) support to Python 3.0 (this probably works with other Python versions too).

Firstly, install the readline and gdbm libraries. One of the easiest ways to do that is to use MacPorts (aka DarwinPorts). If you don't have it already you can download the MacPorts installer to set things up. Once that is done then open Terminal/iTerm and enter:
$ sudo port install readline
$ sudo port install gdbm

If that works, then you are ready to build Python. Get the Python 3.0 source code and unpack it. You need to tell setup.py where to find the libraries you installed. MacPorts (usually) installs all of the software it manages in /opt/local/ so in setup.py find the two lines:
add_dir_to_list(self.compiler.library_dirs, '/usr/local/lib')
add_dir_to_list(self.compiler.include_dirs, '/usr/local/include')

and add two similar lines before them that point to /opt/local/lib and /opt/local/include, like:
add_dir_to_list(self.compiler.library_dirs, '/opt/local/lib')
add_dir_to_list(self.compiler.include_dirs, '/opt/local/include')
add_dir_to_list(self.compiler.library_dirs, '/usr/local/lib')
add_dir_to_list(self.compiler.include_dirs, '/usr/local/include')

Now you can configure and build Python.
$ ./configure --enable-framework MACOSX_DEPLOYMENT_TARGET=10.5 --with-universal-archs=all
$ make
$ make test
$ sudo make frameworkinstall

Note that if you've got any other non-Apple distributed versions of Python installed and want to keep the default version as it was, use (for example, to revert default back to 2.5):
$ cd /Library/Frameworks/Python.framework/Versions/
$ sudo rm Current && sudo ln -s 2.5 Current

Finally, so that the command "python3.0" works from the command-line, you need to either add /Library/Frameworks/Python.framework/Versions/3.0/bin/ to your PATH, or symlink /Library/Frameworks/Python.framework/Versions/3.0/bin/python3.0 to a standard directory in your PATH, like /usr/bin or /usr/local/bin. On my box, I install custom stuff into /usr/local/ and so I added these symlinks:
$ sudo ln -s /Library/Frameworks/Python.framework/Versions/3.0/bin/python3.0 /usr/local/bin/
$ sudo ln -s /Library/Frameworks/Python.framework/Versions/3.0/bin/2to3 /usr/local/bin/

Building ffmpeg on Solaris 10

Building some software projects on Solaris can often be challenging, usually when the project has mainly Linux-centric developers. I've had plenty of experience coercing such software to build on Solaris and today I'll provide a recipe for building ffmpeg on Solaris 10.

This recipe describes building ffmpeg from SVN trunk, which was at revision 15797 at the time of writing. I mention this because ffmpeg is a surprisingly agile moving target. There are no actual releases; everyone must work from SVN, and the developers are certainly not shy about making major incompatible changes between SVN revisions. Sometimes the changes affect the build process (configure options, etc.) and sometimes they affect the actual ffmpeg arguments. So what I describe here may not work next week, but it should at least provide a good starting point.

Solaris supports a number of POSIX standards (see standards(5)), so it is important to make sure that PATH is set correctly so that the correct commands are used. This does affect the build process. The PATH below is recommended, and includes /usr/ucb in the right place. Solaris is fun eh.

The recommended PATH is:
  $ export PATH=/usr/xpg6/bin:/usr/xpg4/bin:/usr/ccs/bin:/usr/ucb:/usr/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/sfw/bin:/opt/sfw/bin


GNU make is required. Solaris ships with GNU make and calls it gmake, which usually works fine. However, this version of ffmpeg creates a Makefile that causes gmake (3.80) to crash with an error "gmake: *** virtual memory exhausted. Stop.". I had to install GNU make 3.81 and use that instead. I installed it in /opt/make-3.81/bin/ and added it to the front of the PATH:
  $ export PATH=/opt/make-3.81/bin:$PATH


Check out a copy of the latest ffmpeg source from SVN (I used r15797 for this):
 $ svn co svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg-svn-trunk
 $ cd ffmpeg-svn-trunk


Write the following diff to a file "solaris_10_patch.diff":
Index: libavcodec/eval.c
===================================================================
--- libavcodec/eval.c   (revision 15797)
+++ libavcodec/eval.c   (working copy)
@@ -36,7 +36,8 @@
 #include <string.h>
 #include <math.h>
 
-#ifndef NAN
+#if !defined(NAN) || defined(__sun__)
+  #undef NAN
   #define NAN 0.0/0.0
 #endif
 
Index: libavcodec/avcodec.h
===================================================================
--- libavcodec/avcodec.h        (revision 15797)
+++ libavcodec/avcodec.h        (working copy)
@@ -3015,4 +3015,9 @@
 #define AVERROR_NOENT       AVERROR(ENOENT)  /**< No such file or directory. */
 #define AVERROR_PATCHWELCOME    -MKTAG('P','A','W','E') /**< Not yet implemented in FFmpeg. Patches welcome. */
 
+#ifdef __sun__
+#undef isnan
+#define isnan(x)        __extension__( { __typeof(x) __x_n = (x); __builtin_isunordered(__x_n, __x_n); })
+#endif
+
 #endif /* AVCODEC_AVCODEC_H */


Patch the ffmpeg source with it:
 $ patch -p0 < solaris_10_patch.diff


Run configure with the options I've specified below (use whatever prefix you like). You'll notice I have had to disable some features/protocols which were causing build difficulties (and I didn't need them). Also notice you have to explicitly specify "bash".
  $ bash ./configure --prefix=/opt/ffmpeg-SVN-r15797 --extra-cflags="-fPIC" --disable-mmx --disable-protocol=udp --disable-encoder=nellymoser


Then you should be ready to build and install (as root most likely).
  $ make
  # make install


Hope that helps.

Eddie 0.37.2 released

Eddie 0.37.2 has been released. The big change is that Eddie is now a properly installable Python package. This allows it to be distributed in package format and installed very easily using "easy_install EDDIE-Tool". Other bugfixes and minor improvements are also included; see the CHANGELOG.

Download Eddie

If you haven't heard of it before, Eddie is a multi-platform monitoring tool developed in Python.

(no subject)

When designing the FLVio RESTful HTTP API I ended up choosing XHTML as the data representation format. My natural instinct was to use XML and invent my own schema, but RESTful Web Services convinced me otherwise.

While explaining to a customer today how simply using a web browser can help debug the API, I said,

"It is no coincidence that we use XHTML to represent data as it is not only a well-understood XML format but also makes life much easier when debugging."

Which has proven true so far. Any browser becomes a debugging tool for the API. However, until browsers support all the HTTP verbs (or XHTML5 / Web Forms 2.0), you'll need an addon like Poster for Firefox to test commands like PUT and DELETE.
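If you'd rather script those requests than install a browser addon, a rough sketch with plain old httplib (the host, path, and body here are made up, not real FLVio endpoints) does the job:

import httplib

# Hypothetical resource and payload, purely for illustration.
conn = httplib.HTTPConnection('api.example.com')
headers = {'Content-Type': 'application/xhtml+xml'}

conn.request('PUT', '/videos/123', '<p>example</p>', headers)
resp = conn.getresponse()
print resp.status
resp.read()   # finish reading so the connection can be reused

conn.request('DELETE', '/videos/123')
print conn.getresponse().status
conn.close()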

Zoner - DNS management UI

A couple of years ago, while learning TurboGears, I wrote a web application to simplify management of DNS zone files. Fast forward to today and I finally found a few minutes to clean it up a bit and make a release.

It is called Zoner and differs from many DNS management interfaces in that it works directly with live zone files. The zone files remain the master copy of domain details and can still be edited manually without affecting Zoner, as opposed to storing the domain structure in a database and generating zone files when needed (or reconfiguring bind to read directly from SQL). It also stores an audit trail for all changes made through Zoner, and zones can be rolled back to any previous version.

Zoner might also be a useful reference app for anyone learning TurboGears 1.0. It is relatively simple and uses SQLAlchemy and Kid, with Paginate and Form widgets.