Sean McLemon - blog.mclemon.org

Missing Version Numbers (2020-11-01)

Back in June it was announced that the next Perl release would be called Perl 7. A quick look back at Perl's history reveals that this is a little unusual. Perl 5 was originally released on October 17, 1994 and served its users well for a number of years. In 2000 Perl 6 was announced as a ground-up redesign of the language, and it spent a long time in development. It eventually surfaced in 2015, but in 2019 it was renamed "Raku" to distinguish it from traditional Perl. This leaves Perl 6 as a sort of "orphaned" version number.

In the meantime, while Raku née Perl 6 was being developed, the team behind Perl 5 had delivered numerous point releases and retained an active community of loyal users. Many of these users had no desire to migrate to Raku and instead wanted a number of quality-of-life improvements in their language, ecosystem and libraries. The sort of changes being proposed were bigger than you'd expect from a point release, so a new major version was required. Hence Perl 7 - the real next version of Perl.

So we now have quite an interesting situation. Let's take a look at the release history:

Version Date Comment
Perl 4.000 21st March 1991
Perl 5.000 17th October 1994
Perl 6 - Actually "Raku", little resemblance to Perl 5
Perl 7 Planned 2021+ Refined and improved Perl 5

Over the years many commercial and open source projects have skipped version numbers, leaving their own little version holes with their own explanations. Today we'll take a look at a number of situations like this, and try to understand what happened in each case and why.

A fairly typical Perl program which calculates π

PHP 6

Perl was not the first language to skip a version - in fact, it wasn't even the first scripting language beginning with "P" to skip version 6. A quick glance at PHP's release history reveals the following:

Version Date Comment
PHP 4.0.0 22 May 2000
PHP 5.0.0 13 July 2004
PHP 6.0.0 - Never released
PHP 7.0.0 3 Dec 2015

So what's the story here? Well, in common with most software written in the early '00s, PHP did not have particularly good Unicode support. The developers behind PHP recognised this and planned a rewrite - PHP 6 - that would use Unicode to represent strings internally, specifically UTF-16. Work began in 2005 and continued for a few years. While some of the planned PHP 6 features were completed on schedule, the Unicode rewrite stalled and became a major blocker to the release. In the meantime Unicode did indeed begin to take off ...

... however UTF-16 didn't. UTF-8, a different encoding, rapidly emerged as the preferred way to represent Unicode on the web. This was particularly problematic since it became apparent that PHP 6 had some serious performance issues converting strings to and from UTF-8, and since web applications were PHP's bread and butter this was a big deal. In 2010, five years into PHP 6's development, the project was abandoned. Many of the non-Unicode features of PHP 6 were ported to PHP 5 and released in version 5.4.
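To get a feel for why the internal encoding choice mattered, here's a quick Python sketch (not PHP 6 code, just an illustration) comparing the two encodings on the ASCII-heavy text that dominated the web:

```python
text = "Hello, world!"  # typical ASCII-heavy web content

# UTF-8 stores ASCII in one byte per character; UTF-16 always uses at
# least two - and a UTF-16-internal PHP would also pay a conversion
# cost at every UTF-8 request/response boundary.
utf8_bytes = text.encode("utf-8")
utf16_bytes = text.encode("utf-16-le")  # "-le" avoids emitting a BOM

print(len(utf8_bytes))   # 13
print(len(utf16_bytes))  # 26
```

Double the memory for plain ASCII, plus transcoding on every I/O boundary - not a great trade for a web language.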

In 2014, as a new major release of PHP was being planned, the topic of what it should be called was raised, and it was decided that for the sake of removing any confusion or ambiguity the next version of PHP would be PHP 7. There are plenty of justifications given, but the first one in the RFC I linked really seals it:

First and foremost, PHP 6 already existed and it was something completely different. The decimal system (or more accurately the infinite supply of numbers we have) makes it easy for us to skip a version, with plenty more left for future versions to come.

This makes plenty of sense - PHP 6 was already a thing, and it's not like integers are a finite resource. If this was Ubuntu and they were considering discarding a good adjective/animal pair then perhaps it'd be slightly different.

ECMAScript 4

Brendan Eich developed Javascript over the course of 10 short days at Netscape, and it soon became the de facto scripting language for the web. This status was confirmed by Ecma, who issued a number of standards and referred to it as "ECMAScript". Nowadays they release these standards on a yearly basis (the latest being ECMAScript 2020, published June 2020) but at the end of 2009 we had five ... kinda:


Version Date Comment
ECMAScript 1 June 1997
ECMAScript 2 June 1998
ECMAScript 3 December 1999
ECMAScript 4 - Abandoned
ECMAScript 5 December 2009

The first two largely captured and formalised the existing Javascript behaviour, but the committee began to get a little more adventurous in the third version, introducing features like regular expressions and try/catch. I can't find a very good summary other than the Wikipedia one (which seems to be copy-pasted everywhere) and I'm not going to read the entire spec.

Anyway, the folks at Ecma recognised the potential of rich client-side web applications, and also recognised that Javascript aka ECMAScript in its current state was not particularly well suited to this task:

Though flexible and formally powerful, the abstraction facilities of ES3 are often inadequate in practice for the development of large software systems. ECMAScript programs are becoming larger and more complex with the adoption of Ajax programming on the web and the extensive use of ECMAScript as an extension and scripting language in applications. The development of large programs can benefit substantially from facilities like static type checking, name hiding, early binding and other optimization hooks, and direct support for object-oriented programming, all of which are absent from ES3.
So they intended to address this and more by creating ECMAScript 4, which was supposed to completely overhaul Javascript. Object-oriented programming was to be embraced (with classes and interfaces being introduced) and a rich type system was to back it up (with type annotations available to assist with correctness and provide the runtime with hints to help with compile- and run-time optimisations). Numeric values would no longer be limited to floating point - developers would gain access to byte, int, uint, double, and even decimal numeric types. They'd even be able to define algebraic data types, which at the time were only really available in niche languages like Haskell, SML and OCaml, and which have only recently started to break into the mainstream. The ability to structure applications using namespaces and packages would make building larger applications composed of many separate modules more manageable. So while previous specifications either captured the status quo or made small incremental enhancements, ES4 was to be a complete redesign of the language, with sweeping changes on a scale the committee hadn't yet encountered.

This concerned a number of people - most notably it drew the ire of Microsoft's IE team, who were concerned that many of the changes were not backwards-compatible. Chris Wilson, platform architect of the IE team at the time, said this:
For ECMAScript, we here on the IE team certainly believe that thoughtful evolution is the right way to go; as I've frequently spoken about publicly, compatibility with the current web ecosystem - not "breaking the Web" - is something we take very seriously. In our opinion, a revolution in ECMAScript would be best done with an entirely new language, so we could continue supporting existing users as well as freeing the new language from constraints (including the constraint of performantly supporting scripts written in the old language)
Brendan Eich did not agree, and wrote a blog post titled "Open letter to Chris Wilson" which accused Wilson of being disingenuous, and Microsoft of suddenly proposing ECMAScript 3.1 without regard for the ES4 effort:
You seem to be repeating falsehoods in blogs since the Proposed ECMAScript 4th Edition Language Overview was published, claiming dissenters including Microsoft were ignored by me, or “shouted down” by the majority, in the ECMAScript standardization group
...
These ES3.1 proposals came suddenly in March after years of passive participation by Microsoft, while the rest of the group worked on ES4. They have not been touched since April.
It was clear that there would be no agreement. Microsoft effectively held all the cards - IE was at that time the dominant browser by some distance. If Ecma somehow pushed forward with ES4, it's entirely possible that Microsoft could've just ignored it altogether. ES4 was abandoned, and years later ES5 was proposed with slightly more modest, incremental changes. This is a strategy that has continued to this day - the language evolves but remains backwards compatible. Some ES4 features (such as classes) ended up in ECMAScript anyway, and in a curious twist Microsoft created Typescript, which takes on even more ES4 features like interfaces, union types and type annotations.

IPv5

IPv4 and IPv6 are pretty well known - but you never really hear of any other versions. The IP specification took shape in the late 1970s. As engineers struggled to figure out the exact problem they were solving and the way it should be solved, they quickly issued a number of iterations and ultimately produced what we now know as IPv4. In the 1990s it became apparent that the number of devices accessing the Internet would ultimately exceed the number of available addresses IPv4 could provide (a bit less than 4.3 billion when you factor in reserved blocks) - so a new version of the protocol was drafted, called IPv6. There were of course other changes and motivations in IPv6, but literally the first bullet point in the IPv6 spec deals with increasing this address space - bumping it up to 128 bits (good for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses).
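Those address-space figures are easy to sanity-check with a couple of lines of Python:

```python
# IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits wide,
# so the total address space is just 2 raised to the width.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(ipv4_total)  # 4294967296 - a bit less than 4.3 billion usable
print(ipv6_total)  # 340282366920938463463374607431768211456
```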

Trying to produce a coherent history is a little difficult due to how both TCP and IP were reshaped and redefined during the early period. Version headers in packets are resized and rebased from one version to the next. Thankfully we have RFC 755 which puts its foot down and authoritatively states the following timeline:
    Version Date Comment
    IPv0 March 1977
    IPv1 February 1978
    IPv2 February 1978
    IPv3 February 1978
    IPv4 June 1978
    IPv5 - Abandoned?
    IPv6 December 1995

    Note: for IPv6 we could link to RFC 1883 (where it was first specified) or RFC 8200 (which made a few additional changes to the protocol)

    So the only real missing version is IPv5 - which actually spent a couple of decades being defined as the "Internet Stream Protocol", never really went anywhere, and is effectively abandoned. Honestly though, if you're interested you should check out this article by Matthäus Wander. I spent ages bouncing around different RFCs and IENs and couldn't really piece everything together until Matthäus laid it all out, so go read his article.

    IPv6 adoption is slow and steady, currently sitting at 33% worldwide as of 16th October 2020

    Node.js 1.0.0 to 3.0.0

    Node.js came into existence in 2009, promising developers that they could implement performant backend code in a simple language like Javascript, thanks in no small part to Google's V8 engine. I personally played with it a bit every now and then and used things like Ghost, but no more than every couple of months. Then one day I tried to run some example code only to find it didn't work and my install was 5 major versions out of date. I was pretty confused and tried to figure out if there was some reason I was so far behind. A quick look through Node.js' release history reveals the reason for my confusion:

    Version Date Comment
    Node.js 0.10.0 11 March 2013
    Node.js 0.11.0 28 March 2013
    Node.js 0.12.0 6 February 2015
    Node.js 1.0.0 - Never existed
    Node.js 2.0.0 - Never existed
    Node.js 3.0.0 - Never existed
    Node.js 4.0.0 8 September 2015
    Node.js 5.0.0 29 October 2015

    So between February and September in 2015 Node.js jumped from version 0.12 to 4.0, and then a month later to 5.0. So what happened here? Well apparently a number of members of the Node.js community were dissatisfied with Joyent's stewardship of the project, forked it under the name "io.js" and released a handful of major versions - 1.0, 2.0 and 3.0.

    Ultimately Joyent, io.js and the community reached some sort of agreement, merged their codebases, and released the new version as Node.js 4.0. So if we amend our timeline to include the io.js releases ...

    Version Date Comment
    Node.js 0.10.0 11 March 2013
    Node.js 0.11.0 28 March 2013
    io.js 1.0.0 14 January 2015
    Node.js 0.12.0 6 February 2015
    io.js 2.0.0 4 May 2015
    io.js 3.0.0 4 August 2015
    Node.js 4.0.0 8 September 2015
    Node.js 5.0.0 29 October 2015

    ... everything looks a little bit clearer.

    Java 2.x to 4.x

    I have only ever been a casual observer of Java - I used it a little at university for my Data Structures & Algorithms modules but rarely touched it otherwise. Whenever I needed to think about Java versions it was because some program I needed used it, and I never quite followed its versioning - I just know that I never really knew what I was using. It was basically "Java" for versions 1.0 and 1.1, then "Java 2 Standard Edition" for a while after ... but it retained the "1.x" major version - so versions 1.2, 1.3 and 1.4. Then at some point we were dealing with Java/J2SE 5.0 aka J2SE 1.5.

    Version Date Comment
    JDK 1.0 January 1996
    JDK 1.1 February 1997
    J2SE 1.2 December 1998
    J2SE 1.3 May 2000
    J2SE 1.4 February 2002
    J2SE 2.0 - Never existed
    J2SE 3.0 - Never existed
    J2SE 4.0 - Never existed
    J2SE 5.0 September 2004
    Java SE 6.0 December 2006

    Sun talks a little bit about the version number here, but really here's all you need to know:
    Both version numbers "1.5.0" and "5.0" are used to identify this release of the Java 2 Platform Standard Edition. Version "5.0" is the product version, while "1.5.0" is the developer version. The number "5.0" is used to better reflect the level of maturity, stability, scalability and security of the J2SE.

    The number "5.0" was arrived at by dropping the leading "1." from "1.5.0". Where you might have expected to see 1.5.0, it is now 5.0 (and where it was 1.5, it is now 5).

    This was some years before the concept of semantic versioning came about, so I suspect this was mostly just a kind of marketing effort - they realised they'd been stuck on 1.x for a while, couldn't really come up with a good path to a 2.x, and had kind of already burned that number by using "J2SE", so they decided to hit the reset button with 5.0.
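    Sun's renumbering rule from the quote above is simple enough to sketch in Python (the function name is mine, purely illustrative):

```python
def product_version(developer_version: str) -> str:
    """Map a Java developer version to the product version by dropping
    the leading "1." - so "1.5.0" becomes "5.0", and "1.5" becomes "5"."""
    prefix = "1."
    if developer_version.startswith(prefix):
        return developer_version[len(prefix):]
    return developer_version

print(product_version("1.5.0"))  # 5.0
print(product_version("1.5"))    # 5
```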

    DirectX 4

    Anyone who played PC games within the last 25 years will have had some interaction with DirectX - the collective name for Microsoft's suite of graphics, sound and networking APIs meant for gaming.

    Version Date Comment
    DirectX 1 30 September 1995
    DirectX 2 5 June 1996
    DirectX 3 15 September 1996
    DirectX 4 - Never released
    DirectX 5 4 August 1997
    DirectX 6 7 August 1998

    After Microsoft released DirectX 3 they launched two successor projects concurrently - one a smaller, incremental release (DirectX 4) and the other a larger release with a swathe of new functionality (DirectX 5). Apparently, after Microsoft consulted with the community they discovered that there was little interest in the features planned for DirectX 4 and a lot of enthusiasm for DirectX 5, so Microsoft simply ceased development on DirectX 4 and focused on delivering DirectX 5. Fun fact: around this time John Carmack called DirectX's D3D "a horribly broken API".

    X-Wing vs TIE Fighter, one of the games to use DirectX 5

    Watcom 5.0 and earlier

    Back in the 1980s open source development tools were not yet widespread, so you could buy compilers from a few different companies. One such company was Watcom, which was formed by University of Waterloo students and had a fair amount of success with a Fortran compiler, to which they later added a C frontend. Their compilers initially targeted IBM's mainframe architectures (Series 1, System/370 etc) but as the 1980s bore on the IBM PC platform started to take off. In 1988 they launched their first C compiler for the PC - Watcom 6.0.


    Version Date Comment
    Watcom 5.0 - Never existed
    Watcom 6.0 1988
    Watcom 7.0 1989
    Watcom 8.0 1990
     
    At the time the dominant compiler products were the Borland and Microsoft C compilers - both of which were on version 5. In an effort to be seen as competitive with (or perhaps better than) their more established rivals, Watcom launched their compiler as version 6.0 - literally one-upping the competition. Watcom's C compiler was for a brief moment hot property - famed as one of the better optimising compilers available for the PC, it was widely used in the gaming industry, notably by the newly formed id Software, who would go on to use it to build Doom and Doom 2 (more on these at filfre.net). There are also references to Watcom in the Quake source code, but I don't know if that means anything.

    I couldn't find a screenshot of anything related to Watcom 6.0 but here's the Watcom 9.0 debugger "WVIDEO"

    Slackware 5 and 6

    For some reason Slackware was the first ever Linux distro I used. I have no idea why I chose it, but looking back it would've been better if I'd picked a more user-friendly one like RedHat or SuSE. Using software on Slackware often involved compiling it yourself, which usually meant hunting down, downloading and compiling dependencies (possibly even encountering more dependencies which require you to hunt down, download and compile sub-dependencies (possibly even encountering more dependencies ... etc)). Between May and October 1999 they jumped from 4.0 to 7.0.


    Version Date Comment
    Slackware 1.0 17 July 1993
    Slackware 2.0 2 July 1994
    Slackware 3.0 30 November 1995
    Slackware 4.0 17 May 1999
    Slackware 5.0 - Never existed
    Slackware 6.0 - Never existed
    Slackware 7.0 25 October 1999


    The reason given for skipping these versions is explained in the FAQ which is still up:
    I think it's clear that some other distributions inflated their version numbers for marketing purposes, and I've had to field (way too many times) the question "why isn't yours 6.x" or worse "when will you upgrade to Linux 6.0" which really drives home the effectiveness of this simple trick. With the move to glibc and nearly everyone else using 6.x now, it made sense to go to at least 6.0, just to make it clear to people who don't know anything about Linux that Slackware's libraries, compilers, and other stuff are not 3 major versions behind. I thought they'd all be using 7.0 by now, but no matter. We're at least "one better", right? :)
    Let's take a look at the latest version numbers of some of the "other distributions" at the time of Slackware 7.0's release - October 1999:
    • Mandrake 6.1 (released September 1999)
    • RedHat 6.1 (released September 1999)
    • SuSE 6.2 (released August 1999)
    • Debian 2.1 (released March 1999)
    I'm sure you'll see a pretty clear difference - Mandrake, RedHat and SuSE were all commercial Linux distributions in competition with each other, so it's understandable that they wouldn't want to seem out of alignment or behind. As for Debian, it was always more of a hacker's distro - it was and still is very much Free Software at its core (which you'll know if you've ever downloaded the default ISO and tried to get your wifi working...) so presumably they felt no pressure to take part in this.

    So going back to Slackware - this always puzzled me. In my opinion Slackware was more similar to Debian than to the commercial distros, and I imagine users would've been perfectly content to move on from Slackware 4 to Slackware 5 without worrying about what that version number implied. I don't think it's controversial to suggest that Debian had the last laugh: Mandrake became Mandriva then largely disappeared, RedHat the company has pivoted to the point where its distro is not its main focus, and SuSE still exists but is relatively niche compared to Debian and its derivative distributions.

    Slackware 4 running KDE 1. Sidenote: that World of Spectrum page looks amazing on what looks like a very old Netscape Navigator

    SuSE, Mandrake and Ubuntu oddities


    There are three other major Linux distributions which seemingly have missing versions - SuSE, Mandrake and Ubuntu. They all have initial releases which are not 0.x or 1.x and which have simple explanations so I've lumped them all together here.

    Version Date Comment
    SuSE Linux 4.0 - Never existed
    SuSE Linux 4.1 - Never existed
    SuSE Linux 4.2 May 1996
    SuSE Linux 4.3 September 1996

    The first release of SuSE Linux was version 4.2. This is a reference to The Hitchhiker's Guide to the Galaxy, in which a computer called Deep Thought calculates the answer to "the Ultimate Question of Life, the Universe, and Everything" as 42. Incidentally, the first version of its package manager YaST was 0.42.

    Version Date Comment
    Mandrake 4.0 - Never existed
    Mandrake 5.0 - Never existed
    Mandrake 5.1 July 1998
    Mandrake 5.2 December 1998
    Mandrake 5.3 February 1999

    Mandrake Linux's first ever release in July 1998 was version 5.1. It was based on RedHat Linux 5.1, so it's likely they just borrowed that version number - perhaps they intended to stay aligned with RedHat's releases and wanted to show they were compatible.

    Version Date Comment
    Ubuntu 2.xx - Never existed
    Ubuntu 3.xx - Never existed
    Ubuntu 4.10 October 2004
    Ubuntu 5.04 April 2005
    Ubuntu 5.10 October 2005

    The final Linux distro I'll talk about is dubious to include, as there aren't really any missing or skipped version numbers at all. Ubuntu's first ever release was 4.10 - so it would seem they skipped versions 1, 2 and 3. However, Ubuntu's version numbers are based on the year and month of release, so version 4.10 implies a release date of October 2004, 5.04 is April 2005, and so on.
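    The scheme is simple enough to express in a few lines of Python (the helper function is mine, just to illustrate):

```python
from datetime import date

def ubuntu_version(release: date) -> str:
    # Year without the century, month zero-padded:
    # October 2004 -> "4.10", April 2005 -> "5.04"
    return f"{release.year % 100}.{release.month:02d}"

print(ubuntu_version(date(2004, 10, 20)))  # 4.10
print(ubuntu_version(date(2005, 4, 8)))    # 5.04
```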

    Conclusion

    I've only scratched the surface of this subject, really. If you look around you'll start to see missing versions everywhere. For example, what happened to Windows 9? How about .NET [Core] 4, MySQL 6 and 7, or Akumulátor 2? As soon as you apply a sequential version to some software you make a statement about how that software relates to any previous or future releases - whether or not you adhere to a strict set of rules like semver. This is meant to help users, and it largely works as intended, but it does not really give any strong prescription for how to handle abandoned releases or forked-then-remerged projects. As a consequence we have to be cool with the idea that skipping a version or two is perhaps preferable to introducing confusion, or to discarding any notion of sequence whatsoever - imagine wondering whether it's safe to upgrade from PHP e1daa19 to PHP caf3c64.

    Jupyter Notebook - problems with dotnet-try under IIS (2020-10-10)

    I keep encountering things when setting up Jupyter! After I got it running under IIS I went to bed, then picked it up the next evening and tried to install the C# and F# kernels from dotnet try. Of course I hit two snags:

    Issue 1. C# and F# Kernels aren't available at all

    I installed it as usual by opening up Windows Terminal, source-ing my venv, installing dotnet-try globally and installing the dotnet try kernel. I suspected I was gonna encounter something a little odd - I wasn't sure if dotnet try would respect the venv I'd installed Jupyter into. It turns out the kernel got installed in a weird location (under AppData/Roaming rather than in my venv dir). I did a bit of experimenting and this is how I ended up resolving the issue. First I updated the PATH so that it included both the venv directory and the dotnet tools directory:

        <aspNetCore processPath="python.exe" 
                    arguments="launch.py"
                    stdoutLogEnabled="true" 
                    stdoutLogFile="logs\stdout">
          <environmentVariables>
            <environmentVariable name="PYTHONPATH" value="." />
            <environmentVariable name="PATH" value="%PATH%;C:\Users\sean\.dotnet\tools;.\venv\Scripts" />
          </environmentVariables>    
        </aspNetCore>

    Then I opened the terminal and ran the kernel install steps once more:

    Once that was in place I could create a new notebook and run F# code:

    Issue 2. Notebooks weren't saving

    I was getting a little "Method Not Allowed" notification when I made changes to my notebooks - so things like saving, renaming and deleting notebooks didn't work. The issue was that IIS was blocking HTTP verbs other than GET and POST (so PUT, PATCH and DELETE were rejected). The solution for me was to remove the WebDAV handler and module - which again involved adding a couple of lines to my Web.config. It's maybe easier to just show the final Web.config so you can see everything together:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <handlers>
          <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
          <remove name="WebDAV" />
        </handlers>
    
        <modules>
          <remove name="WebDAVModule" />
        </modules>
        
        <aspNetCore processPath=".\venv\Scripts\python.exe" 
                    arguments="launch.py"
                    stdoutLogEnabled="true" 
                    stdoutLogFile="logs\stdout">
          <environmentVariables>
            <environmentVariable name="PYTHONPATH" value="." />
            <environmentVariable name="PATH" value="%PATH%;C:\Users\sean\.dotnet\tools;.\venv\Scripts" />
          </environmentVariables>    
        </aspNetCore>
      </system.webServer>
    </configuration>
    So yet another hack - I'm starting to wonder if I'm using it in an unintended or unexpected way. I've put it all up in a git repo for anyone who wants to quickly get it up and running without thinking too hard.


    Jupyter Notebook - Running under IIS on Windows (2020-10-03)

    I use Jupyter Notebooks quite a lot for personal development, and to do this I set up an instance on a DigitalOcean droplet. However I also wanted to do something similar for some tasks at work which need access to resources only available on our internal network. Since we use Windows, that means doing something a little different. I could just use Anaconda to install and load it, but I find it a little clunky and I don't like how it tries to manage everything for you; I also wanted a slightly nicer URL than http://localhost:8888 and, more importantly, I wanted it to just be there without having to start anything up. So I set out to get Jupyter Notebook running on IIS using the ASP.NET Core Module V2 at http://notebook.local.

    Make sure Python, IIS and the ASP.NET Core Hosting Bundle are installed. Open up a terminal (I'm using bash; Powershell is probably fine but I don't like it), create a new directory where our notebook application will be served, set up a venv there and run the activation script, upgrade pip so that it doesn't complain at us, and then finally install Jupyter Notebook:

    $ cd ~/source/repos
    $ mkdir notebook
    $ cd notebook
    $ python -m venv venv
    $ source venv/Scripts/activate
    $ pip install --upgrade pip
    $ pip install notebook
    To start with let's just run it on its own to check that it works:

    $ jupyter notebook
    [I 20:39:42.322 NotebookApp] The port 8888 is already in use, trying another port.
    [I 20:39:42.328 NotebookApp] Serving notebooks from local directory: C:\Users\sean\source\external
    [I 20:39:42.329 NotebookApp] Jupyter Notebook 6.1.4 is running at:
    [I 20:39:42.329 NotebookApp] http://localhost:8889/?token=e27aa8d7754e1bb078dfdf48e7c55032ac3551360389fd65
    [I 20:39:42.329 NotebookApp]  or http://127.0.0.1:8889/?token=e27aa8d7754e1bb078dfdf48e7c55032ac3551360389fd65
    [I 20:39:42.329 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
    [C 20:39:42.476 NotebookApp]
    
        To access the notebook, open this file in a browser:
            file:///C:/Users/sean/AppData/Roaming/jupyter/runtime/nbserver-260-open.html
        Or copy and paste one of these URLs:
            http://localhost:8889/?token=e27aa8d7754e1bb078dfdf48e7c55032ac3551360389fd65
         or http://127.0.0.1:8889/?token=e27aa8d7754e1bb078dfdf48e7c55032ac3551360389fd65

    So here you can see why I'm doing all this - http://localhost:8888 isn't quite as nice as http://notebook.local. Next let's create a logs folder where our stdout logging will live and a notebooks folder where we'll keep our notebook files.

    $ mkdir logs notebooks
    

    Next we'll create a Python script called launch.py that will launch Jupyter:

    import os, sys
    from IPython.terminal.ipapp import launch_new_instance
    from IPython.lib import passwd
    import warnings
    
    # set the password you want to use
    notebook_password = "test"
    
    sys.argv.append("notebook")
    sys.argv.append("--IPKernelApp.pylab='inline'")
    sys.argv.append("--NotebookApp.ip='*'")
    sys.argv.append("--NotebookApp.port=" + os.environ["ASPNETCORE_PORT"])
    sys.argv.append("--NotebookApp.open_browser=False")
    sys.argv.append("--NotebookApp.notebook_dir=./notebooks")
    sys.argv.append("--NotebookApp.password=" + passwd(notebook_password))
    
    launch_new_instance()

    This script is kind of interesting, but we'll go into it later once we've got everything up and running. Next we will need to create a Web.config file that'll be used by IIS and ANCMV2 to call this script with the Python executable and libraries from our venv. So let's create the following:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <handlers>
          <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
        </handlers>
        <aspNetCore processPath=".\venv\Scripts\python.exe" 
                    arguments="launch.py"
                    stdoutLogEnabled="true" 
                    stdoutLogFile="logs\stdout">
          <environmentVariables>
            <environmentVariable name="PYTHONPATH" value="." />
          </environmentVariables>    
        </aspNetCore>
      </system.webServer>
    </configuration>

    Next we'll set up the new Site in IIS - so open up IIS Manager (hit Start and start typing "Internet Information Services" - it'll appear in the suggestions) then:

    1. in the left-hand menu right click the top-level node, hit "Add Website..."
    2. in the dialog that appears call the site whatever you like
    3. in the "Physical Path" section click the "..." button, and choose the directory your notebooks live in
    4. in the "Binding" section type "notebook.local" in the "Host name" section

    Now nearly everything's in place; we just need to modify our local DNS to resolve "notebook.local" to 127.0.0.1, so add the following line to C:/Windows/System32/drivers/etc/hosts (you'll need to be an Administrator to do so):

    127.0.0.1    notebook.local

    So now we have a Website bound to "notebook.local" running under IIS, which is configured to use ASP.NET Core Runtime Module V2 to launch a Python script that will run Jupyter Notebook. Let's fire up a browser and navigate to http://notebook.local and enter the password test:

    It loads nicely so let's just create a simple Python 3 notebook to confirm everything works end-to-end:

    Excellent. Ok, so let's rewind a little bit and ask the question: why do we use a special script instead of having our Web.config file launch jupyter notebook directly? That would be ideal, but a bit of a hack is required. Notice in launch.py that we grab the ASPNETCORE_PORT environment variable and use that as our port? That variable contains the port IIS wants to use to communicate with our app - so we need to hook Jupyter up to listen on that port. While it is possible to call jupyter notebook --port 8080 to specify the port we want to use, it's not possible to use the ASPNETCORE_PORT variable in our Web.config directly: environment variables are not expanded in the arguments="..." attribute (see issue #117 in the aspnet/AspNetCoreModule repo on GitHub) - so we need to run a script that grabs it from the environment and passes it along. It's a bit hacky, but it gets us where we need to be.
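    The port-forwarding idea boils down to something like the following sketch (not necessarily the exact launch.py shown earlier in the post - the function name and fallback port are illustrative):

```python
import os

def jupyter_command(default_port="8888"):
    # ANCMV2 passes the port IIS expects the app to listen on via the
    # ASPNETCORE_PORT environment variable; fall back to Jupyter's
    # usual default when it's absent (e.g. when run outside IIS).
    port = os.environ.get("ASPNETCORE_PORT", default_port)
    return ["jupyter", "notebook", "--port", port, "--no-browser"]

# launch.py would then hand this off to something like
# subprocess.run(jupyter_command())
```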

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1570426 2020-07-07T19:29:23Z 2020-07-07T19:29:23Z setxkbmap - prevent layout amnesia I use both the US* and the Czech keyboard layouts on my home desktop, but I use dwm, which is quite minimalist and doesn't come with any way to switch between them. Nor should it - dwm is a window manager for hackers - so I bodged my own solution that displays the currently selected layout in dmenu, and added the following section to my ~/.xprofile that lets me cycle between them using the right-Windows key:
    # enable US and CZ layouts, toggle with right-win, other possible grp:* values are...
    #    grp:alt_caps_toggle         Alt+Caps Lock
    #    grp:alt_space_toggle        Alt+Space
    #    grp:menu_toggle             Menu
    #    grp:lwin_toggle             Left Win
    #    grp:win_space_toggle        Win+Space
    #    grp:rwin_toggle             Right Win
    setxkbmap -layout us,cz                 
    setxkbmap -option                       
    setxkbmap -option "grp:rwin_toggle"
    

    However I noticed that periodically this would stop working: I'd be stuck on the US layout and my right-Windows key would do nothing. Re-running the three setxkbmap commands would fix things up.

    This eventually annoyed me and I resolved to figure it out. Apparently Linux has a power-management feature that causes some USB devices to disconnect, and when they reconnect they appear as a new device without these keyboard settings.

    Unsurprisingly my bodge was responsible for this. The correct solution was to set XKBLAYOUT to "us,cz" and to add "grp:rwin_toggle" to XKBOPTIONS in /etc/default/keyboard:

    $ cat /etc/default/keyboard 
    # KEYBOARD CONFIGURATION FILE
    
    # Consult the keyboard(5) manual page.
    
    XKBMODEL="pc105"
    XKBLAYOUT="us,cz"
    XKBVARIANT=""
    XKBOPTIONS="lv3:ralt_switch,terminate:ctrl_alt_bksp,grp:rwin_toggle"
    
    BACKSPACE="guess"
    

    Now my layout switching works a little better and my keyboard no longer has bouts of amnesia. That BACKSPACE="guess" line sure does look odd though - I'll need to come back to that at some point.

    * = I have no need for the "pound" symbol anymore, and US-layout keyboards are much more widely available

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1545960 2020-05-17T21:01:50Z 2020-05-17T21:16:43Z No, THIS is What Peak Hello World Looks Like

    An article titled "This Is What Peak Hello World Looks Like" did the rounds on Hacker News (discussion here), where someone took the plain old printf("hello, world!") in C and transformed it step-by-step beyond recognition. This was a noble effort, but today I discovered an easier way to make Hello World insane, in a slightly different way:

        $ dotnet new console -o hello -lang F#
        The template "Console Application" was created successfully.
    
        Processing post-creation actions...
        Running 'dotnet restore' on hello/hello.fsproj...
          Restore completed in 209.4 ms for /home/sean/dev/dotnet/hello/hello.fsproj.
    
        Restore succeeded.
        
        $ cd hello
        $ cat Program.fs 
        // Learn more about F# at http://fsharp.org
    
        open System
        
        [<EntryPoint>]
        let main argv =
            printfn "Hello World from F#!"
            0 // return an integer exit code
    
     

    That's it, an absolute monster of a program. But wait - "This is just a plain old Hello World in .NET! What's so insane about that?" I hear you ask. Well, we're not quite done yet. Let's get ready to share the executable with a colleague who wants to use our app: we'll add <PublishSingleFile>true</PublishSingleFile> to hello.fsproj - which will generate a single self-contained executable binary - and publish it:

        $ dotnet publish -r linux-x64 -c Release
        Microsoft (R) Build Engine version 16.5.0+d4cbfca49 for .NET Core
        Copyright (C) Microsoft Corporation. All rights reserved.
    
          Restore completed in 5.27 sec for /home/sean/dev/dotnet/hello/hello.fsproj.
          hello -> /home/sean/dev/dotnet/hello/bin/Release/netcoreapp3.1/linux-x64/hello.dll
          hello -> /home/sean/dev/dotnet/hello/bin/Release/netcoreapp3.1/linux-x64/publish/
        $ ./bin/Release/netcoreapp3.1/linux-x64/publish/hello
        Hello World from F#!
        $ ls -lh ./bin/Release/netcoreapp3.1/linux-x64/publish/hello
    -rwxr-xr-x 1 sean sean 78M May 17 22:47 ./bin/Release/netcoreapp3.1/linux-x64/publish/hello
    

    There she blows! An absolute monster - a 78MB Hello World executable! This is actually a bit unfair, since it includes some stuff we might not need, so let's add <PublishTrimmed>true</PublishTrimmed> to hello.fsproj and rebuild:

        $ dotnet publish -r linux-x64 -c Release
        Microsoft (R) Build Engine version 16.5.0+d4cbfca49 for .NET Core
        Copyright (C) Microsoft Corporation. All rights reserved.
        
          Restore completed in 33.8 ms for /home/sean/dev/dotnet/hello/hello.fsproj.
          hello -> /home/sean/dev/dotnet/hello/bin/Release/netcoreapp3.1/linux-x64/hello.dll
          Optimizing assemblies for size, which may change the behavior of the app. Be sure to test after publishing. See: https://aka.ms/dotnet-illink
          hello -> /home/sean/dev/dotnet/hello/bin/Release/netcoreapp3.1/linux-x64/publish/
        $ ./bin/Release/netcoreapp3.1/linux-x64/publish/hello
        Hello World from F#!
        $ ls -lh ./bin/Release/netcoreapp3.1/linux-x64/publish/hello
    -rwxr-xr-x 1 sean sean 46M May 17 22:51 ./bin/Release/netcoreapp3.1/linux-x64/publish/hello
    

    Ok, slightly better, but that's still a hefty 46MB for Hello World. This is a bit disingenuous, however - what's happening is that I've configured the project to produce a self-contained executable, and to do this the .NET Core runtime is bundled into the hello binary. So it's a little like distributing a small amount of application [byte]code together with a bytecode interpreter, a JIT compiler and the runtime libraries.
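    To recap, the only project changes involved were the two properties added to hello.fsproj; the resulting file would look roughly like this (a sketch - the exact contents depend on the template version that generated it):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <!-- the two properties we added for publishing -->
    <PublishSingleFile>true</PublishSingleFile>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="Program.fs" />
  </ItemGroup>
</Project>
```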

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1507691 2020-02-10T19:47:54Z 2020-09-23T00:52:46Z Jupyter - problems with dotnet-try

    Since .NET Try now officially supports F#, I wanted to switch my Jupyter notebook instance over to use it instead of IFsharp. But I ran into a couple of frustrating issues that I wanted to document in case anyone else hits them and doesn't know how to debug the problem.

    Issue 1. dotnet-try installs but isn't usable

    I followed Scott Hanselman's instructions to install dotnet-try, but when I tried to execute dotnet try jupyter install it seemed as though dotnet-try wasn't installed at all:

    $ dotnet tool install dotnet-try --global
    You can invoke the tool using the following command: dotnet-try
    Tool 'dotnet-try' (version '1.0.19553.4') was successfully installed.

    $ dotnet try jupyter install
    Could not execute because the specified command or file was not found.
    Possible reasons for this include:
    * You misspelled a built-in dotnet command.
    * You intended to execute a .NET Core program, but dotnet-try does not exist.
    * You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.

    After a lot of head scratching I dug into the actual docs and learned that both the .NET Core 3.0 and .NET Core 2.1 SDKs should be installed, while I only had the .NET Core 3.1 SDK. So after a quick sudo apt install dotnet-sdk-3.0 dotnet-sdk-2.1 I was able to install the kernel and list it:

    $ jupyter kernelspec list
    Available kernels:
      .net-csharp    /home/notebook/.local/share/jupyter/kernels/.net-csharp
      .net-fsharp    /home/notebook/.local/share/jupyter/kernels/.net-fsharp
      mit-scheme     /usr/local/share/jupyter/kernels/mit-scheme
      python3        /usr/local/share/jupyter/kernels/python3

    Issue 2. Jupyter can't run dotnet-try

    However even though it was installed, each time I tried to create a new F# notebook Jupyter would give an error saying that it was unable to connect to the kernel. After taking a quick look at my logs I saw the same error as before!

    Feb 09 13:19:11 aviemore jupyter[837]: [I 13:19:11.169 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports
    Feb 09 13:19:11 aviemore jupyter[837]: Could not execute because the specified command or file was not found.
    Feb 09 13:19:11 aviemore jupyter[837]: Possible reasons for this include:
    Feb 09 13:19:11 aviemore jupyter[837]: * You misspelled a built-in dotnet command.
    Feb 09 13:19:11 aviemore jupyter[837]: * You intended to execute a .NET Core program, but dotnet-try does not exist.
    Feb 09 13:19:11 aviemore jupyter[837]: * You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
    Feb 09 13:19:14 aviemore jupyter[837]: [I 13:19:14.180 NotebookApp] KernelRestarter: restarting kernel (2/5), new random ports
    Feb 09 13:19:14 aviemore jupyter[837]: Could not execute because the specified command or file was not found.
    Feb 09 13:19:14 aviemore jupyter[837]: Possible reasons for this include:
    Feb 09 13:19:14 aviemore jupyter[837]: * You misspelled a built-in dotnet command.
    Feb 09 13:19:14 aviemore jupyter[837]: * You intended to execute a .NET Core program, but dotnet-try does not exist.
    Feb 09 13:19:14 aviemore jupyter[837]: * You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH. 
    
    I restarted the service, restarted the server, and checked everything I could think of - it took a while to realise the root cause. Even though dotnet-try was on the PATH when I switched over to my jupyter user, that wasn't the case when the service was started by systemd. It seems that /etc/profile - which runs all the scripts in /etc/profile.d, one of which adds the ~/.dotnet/tools dir to PATH - is not used when systemd starts the service. I don't know the correct way to address this, but I wrote a quick environment file that sets up PATH correctly and added an EnvironmentFile entry to the [Service] section of my notebook.service file to use it:
    EnvironmentFile=/home/jupyter/config/notebook-config
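    The environment file itself isn't shown here; systemd EnvironmentFile entries are plain KEY=VALUE pairs (no "export", no variable expansion), so its contents would be something along these lines (hypothetical paths):

```shell
# /home/jupyter/config/notebook-config - hypothetical contents.
# PATH has to be written out in full, including the dotnet tools dir
# that /etc/profile.d would normally have appended.
PATH=/usr/local/bin:/usr/bin:/bin:/home/jupyter/.dotnet/tools
```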
    After I set this up and restarted notebook.service, it was able to find dotnet-try and spin up the F# kernel correctly.
    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1386425 2019-03-22T11:00:03Z 2019-03-22T11:00:03Z Fun with manpages

    When reading about a Unix command or C library function it's relatively common to see it suffixed with a number in brackets. This makes it clear what exactly you're talking about: if someone is discussing mknod they might write mknod(1) if they're talking about the shell command, or mknod(2) if they mean the syscall. The number refers to the manpage section, so to see the manpage for the shell command:

    $ man 1 mknod

    And to see what the syscall is all about:

    $ man 2 mknod

    According to the man manpage on my system there are eight standard manpage sections in total, plus one non-standard section:

    1. Executable programs or shell commands
    2. System calls (functions provided by the kernel)
    3. Library calls (functions within program libraries)
    4. Special files (usually found in /dev)
    5. File formats and conventions eg /etc/passwd
    6. Games
    7. Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7)
    8. System administration commands (usually only for root)
    9. Kernel routines [Non standard]
    Since something can be present in more than one section, I wondered which symbol had the most manpages, so I wrote a script to look through each of the directories /usr/share/man/man[1-8], list and parse the gzipped filenames (they're usually named symbol.sectionnumber.gz) and then find out which sections each symbol is present in:
    import os
    import re
    from collections import defaultdict
    
    manpage_gz_pattern = r"(.*)\.\w+\.gz"
    manpage_dir_base = "/usr/share/man"
    manpage_sections = range(1, 9)
    manpage_entries = defaultdict(list)
    
    for manpage_section in manpage_sections:
        manpage_section_dir = os.path.join(manpage_dir_base, f"man{str(manpage_section)}")
        manpage_section_contents = os.listdir(manpage_section_dir)
    
        for manpage_entry_filename in manpage_section_contents:
            gz_entry = re.match(manpage_gz_pattern, manpage_entry_filename)
            manpage_entry = gz_entry.groups()[0] 
            manpage_entries[manpage_entry] += [(manpage_section, manpage_entry_filename)]
    
    for section_count in manpage_sections:
        number_of_manpages = len([ m for m in manpage_entries if len(manpage_entries[m]) == section_count])
        print(f"number of manpages in {section_count} sections: {number_of_manpages}")
    

    The results are:
    $ python -i mancount.py
    number of manpages in 1 sections: 10763
    number of manpages in 2 sections: 107
    number of manpages in 3 sections: 7
    number of manpages in 4 sections: 0
    number of manpages in 5 sections: 0
    number of manpages in 6 sections: 0
    number of manpages in 7 sections: 0
    number of manpages in 8 sections: 1
    There's seemingly a clear winner: you can find a single symbol in all eight standard manpage sections. However this is a little misleading, because after a bit of inspection that symbol turns out to be "intro" - it's not a shell command, syscall, stdlib function, game or anything like that, it's a manpage that describes a bit about each section.

    So, ignoring intro, the most common symbols and their manpage entries are:
    • mdoc (mdoc.1.gz, mdoc.5.gz, mdoc.7.gz)
    • locale (locale.1.gz, locale.5.gz, locale.7.gz)
    • hostname (hostname.1.gz, hostname.5.gz, hostname.7.gz)
    • passwd (passwd.1ssl.gz, passwd.1.gz, passwd.5.gz)
    • time (time.2.gz, time.3am.gz, time.7.gz)
    • readdir (readdir.2.gz, readdir.3am.gz, readdir.3.gz)
    • random (random.3.gz, random.4.gz, random.7.gz)
    This reveals something else interesting - the section needn't be a number. These two commands are both valid and access completely separate manpages:

    $ man 1ssl passwd
    $ man 1 passwd

    I used to think that each time I opened a manpage I'd learn something completely new and unexpected - but I never thought I'd find something interesting just by looking at the manpages' gzipped filenames!

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1311625 2018-10-26T10:00:03Z 2018-10-26T10:00:03Z F# - Polymorphic parameter overflow I was mucking around in F# over lunch and started thinking about how it assigns names to type variables (see "Automatic Generalization" of the Type Inference page of the F# docs for more info). By way of introduction, let's create a function foo that takes a single parameter without an explicitly defined type and see what its signature looks like:
    > let foo x = x;;
    val foo : x:'a -> 'a
    
    So the parameter x is assigned the type variable 'a - which makes sense, the first unknown type gets named after the first letter of the alphabet. And of course it follows that if we have a function with two parameters the second type is called ...
    > let foo x y = (x,y);;
    val foo : x:'a -> y:'b -> 'a * 'b
    ... 'b! Ok, now what happens when we've got a function with a ridiculous number of parameters? We'll run out of lower-case letters eventually. When that happens do we start using upper-case letters? Non-ASCII? Something else? I quickly hacked together a Python script to generate functions with arbitrary numbers of parameters, created one with 40, pasted it into the F# REPL and ...
    > let foo x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 x16 x17 x18 x19 x20 x21 x22 x23 x24 x25 x26 x27 x28 x29 x30 x31 x32 x33 x34 x35 x36 x37 x38 x39 = (x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11,x12,x13,x14,x15,x16,x17,x18,x19,x20,x21,x22,x23,x24,x25,x26,x27,x28,x29,x30,x31,x32,x33,x34,x35,x36,x37,x38,x39);;
    val foo :
      x0:'a ->
        x1:'b ->
          x2:'c ->
            x3:'d ->
              x4:'e ->
                x5:'f ->
                  x6:'g ->
                    x7:'h ->
                      x8:'i ->
                        x9:'j ->
                          x10:'k ->
                            x11:'l ->
                              x12:'m ->
                                x13:'n ->
                                  x14:'o ->
                                    x15:'p ->
                                      x16:'q ->
                                        x17:'r ->
                                          x18:'s ->
                                            x19:'t ->
                                              x20:'a1 ->
                                                x21:'a2 ->
                                                  x22:'a3 ->
                                                    x23:'a4 ->
                                                      x24:'a5 ->
                                                        x25:'a6 ->
                                                          x26:'a7 ->
                                                            x27:'a8 ->
                                                              x28:'a9 ->
                                                                x29:'a10 ->
                                                                  x30:'a11 ->
                                                                    x31:'a12 ->
                                                                      x32:'a13 ->
                                                                        x33:'a14 ->
                                                                          x34:'a15 ->
                                                                            x35:'a16 ->
                                                                              x36:'a17 ->
                                                                                x37:'a18 ->
                                                                                  x38:'a19 ->
                                                                                    x39:'a20 ->
                                                                                      'a *
                                                                                      'b *
                                                                                      'c *
                                                                                      'd *
                                                                                      'e *
                                                                                      'f *
                                                                                      'g *
                                                                                      'h *
                                                                                      'i *
                                                                                      'j *
                                                                                      'k *
                                                                                      'l *
                                                                                      'm *
                                                                                      'n *
                                                                                      'o *
                                                                                      'p *
                                                                                      'q *
                                                                                      'r *
                                                                                      's *
                                                                                      't *
                                                                                      'a1 *
                                                                                      'a2 *
                                                                                      'a3 *
                                                                                      'a4 *
                                                                                      'a5 *
                                                                                      'a6 *
                                                                                      'a7 *
                                                                                      'a8 *
                                                                                      'a9 *
                                                                                      'a10 *
                                                                                      'a11 *
                                                                                      'a12 *
                                                                                      'a13 *
                                                                                      'a14 *
                                                                                      'a15 *
                                                                                      'a16 *
                                                                                      'a17 *
                                                                                      'a18 *
                                                                                      'a19 *
                                                                                      'a20
    
    So 'a up to 't ... but then numbered 'as after that. I've no idea why it stops at 't and then just counts up from 'a1. Interestingly, this is the case with F# both under Mono on Linux and under the official MS runtime in VS2017 on Windows. OCaml, which F# is heavily influenced by, does not exhibit this behaviour - it refers to the unknown types as 'a..'z, 'a1..'z1, 'a2..'z2, etc.
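    The generator script mentioned above isn't included in the post, but a minimal sketch of the idea (hypothetical function name) might be:

```python
def fsharp_function(n):
    """Emit an F# function 'foo' taking n parameters and
    returning them all as a tuple."""
    params = [f"x{i}" for i in range(n)]
    return f"let foo {' '.join(params)} = ({','.join(params)});;"

print(fsharp_function(3))  # let foo x0 x1 x2 = (x0,x1,x2);;
```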

    That's reasonbaly interesting and all, but since I had already written a script to generate these F# files I figured I'd keep trying with more and more parameters to see what happens. After adding a hundred parameters at a time I soon hit an interesting warning between 400 and 500 params:

    Warning: line too long, ignoring some characters
    It turned out we hit some line-length restriction in either the parser or the readline equivalent that fsi.exe uses. A bit of experimentation later I found that a function with 429 parameters parsed just fine ... but 430 parameters caused the warning.

    429 parameters: 4093 characters
    430 parameters: 4104 characters

    Minor sidebar - I was going to just say we hit a 4096-byte maximum line length, but obviously 1 character isn't necessarily 1 byte if we're using some Unicode encoding, and a nice way to check is to punch in identifiers that can't be represented in ASCII - I used Georgian and Czech characters:
    > let სამი = 3;;
    val სამი : int = 3
    
    > let čtyři = 4;;
    val čtyři : int = 4
    
    > სამი + čtyři;;
    val it : int = 7
    
    When I changed my program to start each parameter with the Georgian character "ა" (i.e. let bar ა0 ა1 ... etc), which cannot be represented in a single byte in UTF-8, we hit the error after only 162 parameters. So it's not a character limit but a buffer of bytes that we filled - and the size of that buffer is ... uh, 4096 bytes.
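    That byte-vs-character distinction is easy to check directly: "ა" needs three bytes in UTF-8, so an identifier like "ა0" is two characters but four bytes.

```python
# "ა" (U+10D0) encodes to 3 bytes in UTF-8, "0" to 1 byte.
name = "ა0"
print(len(name))                   # 2 characters
print(len(name.encode("utf-8")))   # 4 bytes
```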

    Since I wanted to pull this thread until I hit some logical conclusion, I tweaked my program to split the insanely long lines so they wouldn't hit this error, and continued to add more parameters. The next issue came after trying a function with a couple of thousand parameters, where I got the following error:
      x1328,
      ^^^^^
    
    /home/sean/stdin(6636,1): error FS0039: The value or constructor 'x1328' is not defined. Maybe you want one of the following:
       x132
       x138
       x1238
       x128
       x1321
    
    So it seems we've bumped into another limit - somehow 1327 parameters is ok, but F# loses track of the 1328th one, thinking it doesn't exist. Is this a bug? Probably - there should maybe be a more helpful error message. Is it important? Probably not: if your code contains a function with upwards of a thousand parameters then this is the least of your problems.
    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1260358 2018-03-12T22:16:20Z 2020-09-06T07:06:55Z Debian - Building XMMS 1.2.11 on a modern linux system Like most people I've recently been consuming all my media via paid streaming services like Netflix and iTunes. The other day, however, I needed to play an MP3 on my laptop running Debian and instinctively wanted to reach for xmms. Sadly the original xmms isn't available on Debian nowadays - only an "xmms2" package, which is much newer and was reworked into a client/server model. I didn't really want to figure out how to configure that correctly, to the extent that I was willing to build the original xmms from source ...

    Trying the naive "./configure && make && sudo make install" method doesn't go very well when running Debian stretch:
    sean@seoul ~/d/s/xmms-1.2.11> ./configure
    checking build system type... x86_64-unknown-linux-gnu
    
    MANY LINES LATER
    
    *** The glib-config script installed by GLIB could not be found
    *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
    *** your path, or set the GLIB_CONFIG environment variable to the
    *** full path to glib-config.
    configure: error: *** GLIB >= 1.2.2 not installed - please install first ***
    As it turns out I wasn't able to find pre-built versions of GLIB 1.x or (a subsequent dependency) GTK+ 1.x. I found some sources (GLIB 1.2 and GTK+ 1.2) but these hit an error when running ./configure indicating that the CPU wasn't supported - these libraries pre-date the x86-64 era, so my processor wasn't recognised. The fix was simply to drop in a newer config.sub. There was one more issue with the G_GNUC_PRETTY_FUNCTION macro but I resolved that too - I put the patched sources onto GitHub as glib-1.x and gtk-1.x in case anyone else wants to use them. Installing them is easy:
    $ git clone https://github.com/smcl/glib-1.x
    $ cd glib-1.x
    $ ./configure --prefix=/usr && make
    $ sudo make install
    $ cd ..
    
    $ git clone https://github.com/smcl/gtk-1.x
    $ cd gtk-1.x
    $ ./configure --prefix=/usr && make
    $ sudo make install
    Once these are in place we can grab the "latest" old XMMS sources from xmms.org and build those:
    $ curl -LO http://www.xmms.org/files/1.2.x/xmms-1.2.11.tar.gz
    $ tar -xzf xmms-1.2.11.tar.gz
    $ cd xmms-1.2.11
    $ ./configure && make
    $ sudo make install

    Then if all is well the original (and best!) xmms should be installed into your path, so you can go download some lovely skin of a brushed-aluminium late-90s Sony CD player ... though it might be a little bit tiny if you use a HiDPI screen:


    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/854129 2018-03-02T11:00:00Z 2019-11-19T23:42:47Z Czech - Cases are hard

    Grammatical cases are a common stumbling block for native English speakers learning another language. This might be because cases are mostly invisible in English and aren't taught in school, so it's hard to see why they would even matter.

    However in languages like Czech cases are a really important concept, and if you don't wrap your head around them you'll struggle to make yourself understood. To fully comprehend this importance we need an example - Petr and Pavel, who are not friends.

    Accusative - Peter is punching Pavel

    In Czech the verb "to punch" is "bít", whose third-person conjugation is "bije", so if we want to describe this situation in Czech we might naively start out with something like this ...

    Petr bije Pavel

    However this isn't quite enough, because it's not actually clear who is punching whom. You should be able to rearrange the parts of a Czech sentence in many ways without any ambiguity. But if we can rearrange the sentence as Pavel bije Petr or bije Pavel Petr, then how do we tell our Czech friends who is the puncher and who is the punchee? This is where we need to modify the case of some words to help clear things up a little.

    In Czech we indicate the subject (puncher) using the nominative case. In our sentence this is Petr and the nominative case of Petr is simply Petr.

    The object (the punchee) of the sentence is indicated using accusative case - which for Pavel is Pavla. Which gives us:

    Petr bije Pavla

    So hopefully you can see why exactly cases are so important, if you don't learn them you're going to confuse a lot of Czech people and have a lot of frustrating conversations. Let's take Petr and Pavel and explore some other cases.

    Genitive - Petr reads Helena's book

    If Petr is finished punching Pavel and just wants to chill with his friend Helena's book, we need to use the genitive case to indicate it belongs to her:

    Petr čte knihu Heleny

    Subject = nominative of Petr = Petr

    Verb = third person singular conjugation of číst = čte

    Object = kniha in accusative case = knihu

    Possessor = Helena in genitive case = Heleny

    Dative - Petr reads to Pavel

    If he decides to read the book to Pavel in a curious attempt at reconciliation, we need to use the Dative case:

    Petr čte knihu Pavlovi

    Receiver = Pavel in dative case = Pavlovi

    Instrumental - Petr reads with Pavel

    If this reconciliation is successful and Petr reads the book with Pavel we need to use the Instrumental case:

    Petr čte knihu s Pavlem

    Instrument = Pavel in instrumental case = Pavlem

    Locative - Petr reads about Pavel

    Maybe it's weird to describe Pavel as the "instrument", but just go with it because in the next sentence Petr is reading a book about Pavel and in this situation we use the Locative case:

    Petr čte knihu o Pavlovi

    preposition "o" (meaning "about") requires locative case of Pavel = Pavlovi

    Vocative - Petr says goodbye to Pavel

    Finally, to draw this ridiculous situation to a close, Petr says goodbye to Pavel, where we use the Vocative case:

    Na shledanou Pavle!

    addressing Pavel requires vocative case of Pavel = Pavle!

    Conclusion 

    This is only a quick-n-dirty summary of the seven Czech cases, but it should indicate why each case is important. The Ústav pro jazyk český Akademie věd ČR (Czech Language Institute of the Academy of Sciences) has a really useful page if you want to see what a word looks like in different cases: http://prirucka.ujc.cas.cz. Just enter your word into the "Slovníková část" section and hit "Hledej".

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1243826 2018-02-09T11:00:05Z 2018-02-09T11:00:06Z IIS - Annoying error with .NET Core
    I deployed a .NET Core application recently into an environment which previously had only run .NET Framework apps, but got a tricky HTTP Error 500.19 - Internal Server Error suggesting there was a problem with my Web.config:


    What was also peculiar was that when I tried to enable stdout logging using the IIS UI I got an error "There was an error while performing this operation" with blank "Details" and "Error" but with the "Filename" set to the path to my Web.config. This was true for any of the configuration sections:


    However, after a lot of head scratching and googling that turned up nothing, my colleague realised that this environment was missing the Windows Server Hosting bundle, available from the .NET Core downloads page under "Other Windows Downloads". Once this was installed everything worked as expected.
    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1223698 2018-02-02T11:00:05Z 2018-12-10T18:02:27Z Czech - Telling The Time
    I've been learning Czech on and off for the last few years, but now and again I discover that there are some basic things that I never quite learned properly. The most recent of these was my ability to tell the time. When I wanted to learn this properly I found that there are very few places that teach this at the level I wanted - they were either too simplistic, misleading or written in a way that is much too hard to quickly digest. I ended up writing the below for myself and for anyone else who wants to learn.

    To simplify each explanation I'm assuming you're familiar with the following:
    • what I mean by "Nominative", "Genitive" and "Accusative" case
    • the verb "být"
    • the noun "hodina"
    • numbers from 1 - 60

    Additionally plurals behave kinda funny in Czech. My friends and I all first learned this when counting beer so I'm gonna use "beer plurals" as shorthand for the following behaviour when you have ...

    • one of something you give the noun in singular Nominative (jedno pivo, jedna hodina, jedna minuta)
    • two, three or four of something you give the noun in plural Nominative (dvě piva, dvě hodiny, dvě minuty)
    • five or more of something you give the noun in plural Genitive (pět piv, pět hodin, pět minut)
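Since this "beer plural" rule comes up again below for hours and minutes, here's a minimal Python sketch of it (the function name and digit formatting are my own, purely for illustration):

```python
def beer_plural(n):
    """Return n beers in Czech, using the counting rules above."""
    if n == 1:
        return "1 pivo"        # singular nominative
    if n in (2, 3, 4):
        return f"{n} piva"     # plural nominative
    return f"{n} piv"          # plural genitive

print(beer_plural(1), beer_plural(3), beer_plural(5))
```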

    Quick Reference

    Firstly, in case you just want a quick reference, here's a table that demonstrates most of the cases:

    time Czech time Czech
    1:00 je jedna hodina 4:00 jsou čtyři hodiny
    1:10 je deset minut po jedné 4:10 je deset minut po čtvrté
    1:15 je čtvrt na dvě 4:15 je čtvrt na pět
    1:30 je půl druhé 4:30 je půl páté
    1:45 je tři čtvrtě na dvě 4:45 je tři čtvrtě na pět
    1:50 je za deset minut dvě 4:50 je za deset minut pět
    2:00 jsou dvě hodiny 5:00 je pět hodin
    2:10 je deset minut po druhé 5:10 je deset minut po páté
    2:15 je čtvrt na tři 5:15 je čtvrt na šest
    2:30 je půl třetí 5:30 je půl šesté
    2:45 je tři čtvrtě na tři 5:45 je tři čtvrtě na šest
    2:50 je za deset minut tři 5:50 je za deset minut šest hodin
    3:00 jsou tři hodiny
    3:10 je deset minut po třetí
    3:15 je čtvrt na čtyři
    3:30 je půl čtvrté
    3:45 je tři čtvrtě na čtyři
    3:50 je za deset minut čtyři

    Whole Hours

    If it's exactly H o'clock, we say something like "it is H hours". The word for "hour" is "hodina" and this is the first example of the beer plurals I described above. There are only 12 possibilities here so it's not too hard to just memorise the below:

    Half Hours

    Times that in English are exactly half past an hour are in Czech said to be a half of the next hour, so instead of "half past one" we say something like "a half of two". There's a minor complication - the hour is given as an ordinal in the genitive singular feminine. These ordinals behave like adjectives because ultimately they are adjectives modifying the noun "hodina". So je půl jedné means "it is half of the first (hour)" - i.e. 12:30.

    Again there are only 12 combinations so if my description doesn't make much sense you can just memorise the below without too much trouble:

    Quarter Hours

    Similar to half-hours when we talk about quarter past or quarter to an hour in Czech we talk about quarters of an hour. So 12:15 is "a quarter of one" or čtvrt na jednu (the hour part is in the Accusative case) and 12:45 is "three quarters of one" or tři čtvrtě na jednu (with čtvrtě being the plural of čtvrt)

    Minutes After The Hour

    If we have a time up to half past the hour, we write it really similarly to the English - so 1:01 is "je jedna minuta po jedné". There are a couple of tricky bits:
    • "minuta" behaves like our "beer plurals" - 1 minuta, 2/3/4 minuty, 5/6/7/etc minut
    • the hour is the genitive singular, and since "hodina" is feminine we have jedné, druhé, třetí, etc

    Minutes Before The Hour

    Finally for minutes before an hour, we write something like "in M minutes it is H o'clock" - so 1:50 is "je za deset minut dvě". Again we have the "beer plural" for minut - minuta, minuty, minut.

    24 Hour Time

    When reading from a watch, computer or phone it seems many Czechs will just say the hour and minute components separately. For example, the other day when we were leaving the Maximus spa my girlfriend asked the lady at the counter what time the shuttle bus to the nearest tram stop left; she pulled up a schedule and said “šestnáct dvacet” - 16:20.

    The only quirk is 1-9 minutes past the hour where you say the minutes with a leading nula - so 16:05 would be šestnáct nula pět.
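To tie the post together, the rules above can be sketched as one short Python function. This is a rough sketch covering hours one to five only, with the declined hour words hard-coded from the quick-reference table, and minute counts left as digits rather than spelled out:

```python
# Hour words in the three forms used above, hard-coded from the table
NOMINATIVE = {1: "jedna hodina", 2: "dvě hodiny", 3: "tři hodiny",
              4: "čtyři hodiny", 5: "pět hodin"}
GENITIVE = {1: "jedné", 2: "druhé", 3: "třetí",
            4: "čtvrté", 5: "páté", 6: "šesté"}     # ordinal, feminine
ACCUSATIVE = {2: "dvě", 3: "tři", 4: "čtyři", 5: "pět", 6: "šest"}

def minutes_word(m):
    # the same "beer plural" behaviour, applied to "minuta"
    if m == 1:
        return "minuta"
    if m in (2, 3, 4):
        return "minuty"
    return "minut"

def czech_time(h, m):
    if m == 0:
        verb = "jsou" if h in (2, 3, 4) else "je"
        return f"{verb} {NOMINATIVE[h]}"
    if m == 15:
        return f"je čtvrt na {ACCUSATIVE[h + 1]}"
    if m == 30:
        return f"je půl {GENITIVE[h + 1]}"
    if m == 45:
        return f"je tři čtvrtě na {ACCUSATIVE[h + 1]}"
    if m < 30:
        return f"je {m} {minutes_word(m)} po {GENITIVE[h]}"
    return f"je za {60 - m} {minutes_word(60 - m)} {ACCUSATIVE[h + 1]}"

print(czech_time(1, 30))   # je půl druhé
print(czech_time(4, 45))   # je tři čtvrtě na pět
```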


    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1202205 2017-11-03T11:00:06Z 2017-11-03T11:00:06Z Firefox Developer - Gnome and Debian 9 quickstart

    Using Firefox on Debian 9 is a little frustrating as the packages available from the default APT sources are older “ESR” releases (firefox-esr, 52.0 at the time of writing). Getting the Dev or Nightly builds is pretty straightforward, but if you use Gnome you probably want a launcher, and it might not be very obvious how to do this.

    First grab the sources or prebuilt binaries:

        $ curl -Lo firefox-dev.tar.bz2 "https://download.mozilla.org/?product=firefox-devedition-latest-ssl&os=linux64&lang=en-US"

    Extract them into /opt/firefox-dev:

        $ tar -xjf firefox-dev.tar.bz2 && sudo mv firefox /opt/firefox-dev

    Open up a text editor and create /usr/share/applications/firefox-dev.desktop as follows

        [Desktop Entry]
        Name=Firefox Dev
        Comment=Browse the World Wide Web
        GenericName=Web Browser
        X-GNOME-FullName=Firefox Dev Web Browser
        Exec=/opt/firefox-dev/firefox %u
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        Icon=firefox-dev
        Categories=Network;WebBrowser;
        MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;
        StartupNotify=true
    

    Copy the icon and run gtk-update-icon-cache so that the icon appears as expected.

        $ sudo cp /opt/firefox-dev/browser/icons/mozicon128.png /usr/share/icons/hicolor/128x128/apps/firefox-dev.png
        $ sudo gtk-update-icon-cache -f /usr/share/icons/hicolor

    And that's it! You should have a nice desktop icon for Firefox Developer Edition you can use. I also did the same for Firefox Nightly:

    For updates you can clean out the directory and repeat the process with the latest tar.bz2 file ... or you can change the permissions in the firefox-dev directory so you have write access, and auto-updates will work.







    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1152920 2017-05-19T10:00:01Z 2020-10-15T19:04:55Z Arduino - 5x8 ISO 8859-2 font

    A while ago I took an existing 3x8 font, converted it by hand for use with AdaFruit's graphics library and subsequently modified it to use ISO 8859-2 characters (in a presumably innocent coincidence this popped up in the Adafruit GFX Library a few months later). Anyway, I recently received a request to perform the same modification to the standard 5x8 fonts and only recently got round to looking at this. As a quick reminder, ISO 8859-2 covers ~128 extra characters for alphabets used by Central, Eastern and Southern European languages - these characters look like this:

    After a little bit of experimentation I realised that the extra 2 columns aren't really all that useful for adding the extra diacritics (čárky, háčky, umlauts etc) - we're really constrained by vertical space. Many of the existing letters need to be modified or reworked entirely, since they consume almost the entire 8 rows and we need 2 rows for some diacritics. For example Ä is currently implemented as the following:

        0x7D, 0x12, 0x11, 0x12, 0x7D

    Which looks like this:

    This is visually a little confusing, but more importantly we cannot really re-use it for ISO 8859-2 since some of the diacritics we need to add to the "A" require at least two rows. Instead of having an "A" which jumps around depending on the diacritic, I've created a single A for when a diacritic is used and left the ungarnished original letter alone.
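To eyeball a glyph like this without flashing it to a board, you can render the column bytes in a terminal. This little Python sketch assumes the classic Adafruit GFX font layout - each byte is one column, with the least significant bit as the top row (the bytes are the Ä definition quoted above):

```python
def render_glyph(columns, rows=8):
    """Render a column-major glyph; LSB of each column byte is the top row."""
    return "\n".join(
        "".join("X" if (col >> row) & 1 else "." for col in columns)
        for row in range(rows)
    )

# the existing Ä glyph
print(render_glyph([0x7D, 0x12, 0x11, 0x12, 0x7D]))
```

Running this makes the problem easy to see: the top row renders as X.X.X, with the umlaut dots merged straight into the apex of the A.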

    Just as another example of why this can be tricky, the existing ä looks really weird to me: the umlaut dots are skewed off to the left and look like they're joined to the letter itself.

    I've moved this up into a central position which is the same on all letters involving umlauts. This is purely based on personal taste, but I think it looks better - below is the original style compared to my modified version:

    There are similar considerations in some of the other letters that are left as an exercise for the reader - see if you can devise a neat system to fit all the letters below into a 5x8 grid in a way that is consistent and legible; it's pretty tricky. I've made an initial stab at this (see GitHub gist below), but after revisiting this I've realised how flaky and error-prone this process of creating fonts is.



    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1144074 2017-04-28T10:00:03Z 2020-10-15T19:05:10Z F# - some Project Euler patterns

    To help me get used to F# and relearn the ways of functional programming I've been working through Project Euler in a Jupyter IfSharp notebook and keeping my solutions on GitHub at https://github.com/smcl/ProjectEulerJupyter

    After around 50 problems so far I've spotted a handful of patterns which either had a number of possible solutions or were a pain to type out (or copy/paste) each time I used them. I explored each in a little more detail to find the fastest implementation. The reason I wrote this up is that even though the problems are pretty simple, some of the results were surprising.

    For each of the patterns I've got a simple definition, some solutions and a set of benchmark results in a table. In each results table I've highlighted the fastest solution that fully expands the result set (so the lazily evaluated solutions that "complete" in a millisecond don't count) so that we can have a realistic idea of what the best solution is.

    Combinations of two lists 

    The first item is the simplest of the four problems - if we have two lists foo and bar, produce a list of pairs featuring all combinations of elements of foo with bar. So given something like ...

    let foo = [ 'a'; 'b'; 'c' ]
    let bar = [ 1; 2; 3 ]
    

    We expect to see a list like this ...

    [ 
        ('a', 1); ('a', 2); ('a', 3)
        ('b', 1); ('b', 2); ('b', 3)
        ('c', 1); ('c', 2); ('c', 3)
    ]

    I came up with only a handful of solutions - I don't feel like I have an ideal one, just the least hacky variant of the first solution that popped into my mind.

    pair_list: The first solution, call List.map for every member of one list, then in the function argument call List.map again for every member of the second - flatten the resulting list using List.concat

    let pair_list l1 l2 =
        List.map (fun x -> l2 |> List.map (fun y -> (x,y))) l1
        |> List.concat    
    

    pair_seq: as above, but assume we can have sequences as input, so we can produce the (fairly large) output array lazily:

    let pair_seq s1 s2 =
        Seq.map (fun x -> s2 |> Seq.map (fun y -> (x,y))) s1
        |> Seq.concat
    

    pair_seq_expanded: as above, expand fully to a List for an idea of how long it takes to operate on the whole output:

    let pair_seq_expanded s1 s2 =
        Seq.map (fun x -> s2 |> Seq.map (fun y -> (x,y))) s1
        |> Seq.concat
        |> Seq.toList
    

    pair_seq_for: it's pretty clear that using a sequence is preferable here, especially if you need to work with 1000 x 1000 collections, so the final variant is a slight rewrite of the second, using a for loop and yield-ing the resulting tuple.

    let pair_seq_for s1 s2 = 
        [ for x in s1 do 
            for y in s2 do 
                yield (x,y) ]
    

    To compare the performance of these variants I've defined 100 and 1000 element lists/sequences and measured how long it takes to iterate through each sequence pair performing a simple operation (accumulating the difference between the pairs).

    time to create n-element collection of pairs in milliseconds

    method n=10000 n=100000 n=1000000 n=10000000
    pair_list 1.5096 14.7937 226.0501 2927.2477
    pair_seq 0.8690 0.8690 0.8846 0.9028
    pair_seq_expanded 3.3952 21.5028 184.3353 2264.2805
    pair_seq_for 3.2361 12.5183 180.1352 1997.3700

    So thankfully the cleanest looking pair_seq_for solution is actually the fastest when we get to larger data sets. This isn't quite where the story ends though. There's a nice discussion here on Stack Overflow about a similar but slightly different problem - finding n element combinations of a single list - so for

    let someList = [ 1; 2; 3 ]
    

    ... we wanted a function combs (n:int) (lst:'a list) which would produce something like the below for combs 2 someList

    [
        ( 1, 2 ); ( 1, 3 )
        ( 2, 1 ); ( 2, 3 )
        ( 3, 1 ); ( 3, 2 )
    ]
    

    This is different from the problem I posed, but I've got a GitHub gist here where I've turned them all loose on the same set of data, and performed some simple measurements. 
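As a point of comparison (not a benchmark), both shapes of this problem are one-liners in Python via itertools - product for pairs drawn from two lists, and permutations for the ordered n-element selections that combs produces:

```python
from itertools import product, permutations

foo = ["a", "b", "c"]
bar = [1, 2, 3]

# all combinations of elements of foo with bar - what pair_seq_for builds
pairs = list(product(foo, bar))
print(pairs[:3])  # [('a', 1), ('a', 2), ('a', 3)]

# ordered 2-element selections of a single list - the "combs 2" output
print(list(permutations([1, 2, 3], 2)))
```

Both are lazily evaluated iterators until you expand them, much like the Seq-based F# variants above.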

    Pairing elements of Collections with their indexes

    In a couple of places I found myself wondering if F# collections had an equivalent of python's enumerate - a function which wraps a list and returns an index/element pair for each loop iteration:

    letters = ["a", "b", "c", "d"]
    for i, c in enumerate(letters):
        print("%d: %s" % (i, c))
    
    # output:
    #     0: a
    #     1: b
    #     2: c
    #     3: d
    

    It took a little while before I spotted Array.mapi so I ended up working through and measuring a handful of different ways first - some are obviously pretty poor (particularly those using recursion) but I left them in nonetheless:

    enumerate_by_for_seq - using a Seq to generate the index and return a pair

    let enumerate_by_for_seq (a:string []) =
        seq { for i in 0 .. (a.Length-1) -> (i, a.[i]) }
    

    enumerate_by_for_seq_expanded - as previous, but returning a List to fully expand the sequence

    let enumerate_by_for_seq_expanded (a:string []) =
        seq { for i in 0 .. (a.Length-1) -> (i, a.[i]) }
        |> Seq.toList
    

    enumerate_by_for_list - iterating over each index using a for loop, returning a (int * string) list

    let enumerate_by_for_list (a:string []) =
        [ for i in 0 .. (a.Length-1) -> (i, a.[i]) ]
    

    enumerate_by_for_array - as above but returning an (int * string) [] - note that this seems ridiculously similar, but differs surprisingly in performance (I discovered this by accident and included it in this experiment because of the difference!)

    let enumerate_by_for_array (a:string []) =
        [| for i in 0 .. (a.Length-1) -> (i, a.[i]) |]
    

    enumerate_by_map - generating a list of indexes and then using |> and List.map to create the index/element pair (i.e. the same as the first approach, but using List)

    let enumerate_by_map (a:string []) =
        [0..(a.Length-1)]
        |> List.map (fun i -> (i, a.[i]))
    

    enumerate_by_recursion_array - bonkers approach, abusing Array.append and recursing. Just don't do this...

    let rec enumerate_by_recursion_array' i (a:string[]) =
        match i with
        | 0 -> [||]
        | _ -> Array.append [| (i, a.[i]) |] (enumerate_by_recursion_array' (i-1) (a.[1..]))
    
    let enumerate_by_recursion_array (a:string[]) =
        enumerate_by_recursion_array' (a.Length-1) a
    

    enumerate_by_recursion_list - List variant of the above. Don't do this either

    let rec enumerate_by_recursion_list' i (a:string[]) =
        match i with
        | 0 -> []
        | _ -> [ (i, a.[i]) ] @ (enumerate_by_recursion_list' (i-1) (a.[1..]))
        
    let enumerate_by_recursion_list (a:string[]) =
        enumerate_by_recursion_list' (a.Length-1) a

    enumerate_by_zip - using Array.zip - the shortest solution, and the best until I spotted Array.mapi

    let enumerate_by_zip (a:string[]) =
        Array.zip [| 0 .. (a.Length-1) |] a

    enumerate_by_mapi - probably the most "correct" solution, using Array.mapi

    let enumerate_by_mapi (a:string[]) =   
        Array.mapi (fun i x -> (i,x)) a

    enumerate_by_parallel_mapi - as above but naively switching in Array.Parallel.mapi without any other changes

    let enumerate_by_parallel_mapi (a:string[]) =
        Array.Parallel.mapi (fun i x -> (i,x)) a

    time taken to enumerate n element collection (milliseconds)

    method n=10000 n=100000 n=1000000 n=10000000
    enumerate_by_for_seq 0.3385 0.3496 0.3471 0.3540
    enumerate_by_for_seq_expanded 2.6177 18.8341 205.4403 3610.3913
    enumerate_by_for_list 1.3487 22.1703 248.5039 4200.8530
    enumerate_by_for_array 2.1619 12.8186 192.3148 3178.5893
    enumerate_by_map 2.0391 26.2468 287.2852 4179.3407
    enumerate_by_recursion_array 7760.3141 n/a* n/a* n/a*
    enumerate_by_recursion_list 5368.5472 n/a* n/a* n/a*
    enumerate_by_zip 7.1136 9.4388 170.0941 1917.8617
    enumerate_by_mapi 2.6911 13.0303 116.5348 1268.8625
    enumerate_by_parallel_mapi 8.1293 17.7548 102.2350 1379.0431

    * = this took way too long so I killed it

    Obviously Array.mapi was the fastest overall. However it wasn't as much faster than Array.zip as I would have imagined, and I suspect that I'm doing something wrong with Array.Parallel.mapi. Also interesting is that while the super-fast performance of the enumerate_by_for_seq method dissipates somewhat when fully evaluated, it is still faster than the equivalent enumerate_by_for_list version.

    Pandigitals

    "Pandigital" numbers feature relatively frequently in Project Euler. An n-digit pandigital number contains each digit from 1..n (or 0..n-1 for the 0-based variant) exactly once in some order. For example 41523 is a 1-5 pandigital, and 43210 is 0-4 pandigital. These numbers are mentioned in problems 32, 38, 41, 104, 118 and 170 (and perhaps more) so a relatively efficient way to recognise them is pretty useful to have at hand.
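The check itself is simple to state - in Python, for the 1..n variant (the 0-based variant just shifts the range), it's essentially a sort-and-compare:

```python
def is_pandigital(num):
    """True if num contains each digit 1..len(str(num)) exactly once."""
    s = str(num)
    return sorted(s) == [str(d) for d in range(1, len(s) + 1)]

print(is_pandigital(41523), is_pandigital(41524))
```

The F# variants below are all different ways of doing this same sort/compare (or set-membership) dance over the number's character array.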

    Again there's a few ways we can do this - in each case I can think of, we start by taking the string representation of the input number and splitting it up using ToCharArray(), and with this we can do a number of different things.

    pandigital_strcmp - sort the char array, build a string from it and compare to "123..n"

    let pandigital_strcmp (n:int) = 
        let sorted = new string (string(n).ToCharArray() |> Array.sort)
        sorted = pandigitalString
    

    pandigital_intcmp - sort the char array, build a string, cast to int and compare to 123..n

    let pandigital_intcmp (n:int) = 
        let sorted = new string (string(n).ToCharArray() |> Array.sort)
        int(sorted) = pandigitalInt
    

    pandigital_arrcmp - sort the char array and compare it directly to a precalculated array [| '1'; '2'; .. n |]

    let pandigital_arrcmp (n:int) = 
        pandigitalArray = (string(n).ToCharArray() |> Array.sort)

    pandigital_set_difference - convert to Set and compute difference from precalc'd set, pandigital if empty

    let pandigital_set_difference (n:int) = 
        string(n).ToCharArray()
        |> Set.ofArray
        |> Set.difference pandigitalSet
        |> Set.isEmpty
    

    pandigital_array_contains - for each element in the precalculated pandigital array, check it's present in the input's char array and use Array.fold to ensure all are true

    let pandigital_array_contains (n:int) = 
        let a = string(n).ToCharArray()
        pandigitalArray 
        |> Array.map (fun c -> Array.contains c a) 
        |> Array.fold (fun e acc -> e && acc) true

    So I tested these using the code below to measure how quickly each method could filter the pandigitals out of a range of candidates:

    // where panDigitalInt is the upper limit ("n" in the table)
    let testNumbers = [ 0 .. pandigitalInt ]
    let bench name f =
        let sw = Stopwatch.StartNew()
        let res = testNumbers |> List.filter f |> List.length
        sw.Stop()
        printfn "%s: %f ms" name sw.Elapsed.TotalMilliseconds

    time taken to filter pandigitals in [0..n] in milliseconds

    method n=1234 n=12345 n=123456 n=1234567
    pandigital_strcmp 2.1081 11.2639 113.2086 1356.1985
    pandigital_intcmp 0.9716 9.7646 89.3238 947.0513
    pandigital_arrcmp 2.4441 6.1932 59.7014 618.0665
    pandigital_set_difference 2.5024 17.2115 199.2863 1986.9592
    pandigital_array_contains 0.9790 4.8161 50.447 565.6698

    So it seems Array.contains wins overall. The Set.difference approach was pretty dismal which was disappointing - it came to me when I was out walking my dog and I rushed back to write it and benchmark it. I think Set.ofArray is perhaps a little slow, but I haven't done any investigation into it.

    It's worth noting that you probably shouldn't do something like [0..bigNumber] |> List.filter pandigital_array_contains to start with - maybe it's worth approaching the problem from a different angle in some cases.

    Sorting a 3-element tuple

    OK, this only came up once and was part of a pretty poor solution I had for problem 39 (original, solution) - I generated thousands of 3-integer tuples and then tested whether they could represent the sides of right-angled triangles using Pythagoras' theorem. However since they were in no particular order I thought I needed to identify the hypotenuse. I wrote this out long-form since there's only a handful of cases, and solved the problem relatively quickly.

    Regardless of whether this was a suitable solution for the problem, I was curious as to what approach works best for sorting these tiny collections of 3 elements.

    I had only three approaches:

    sort_nested_if - use nested if statements to reduce the number of comparisons needed while introducing branches
    let sort_nested_if (a,b,c) =
        if a >= b then
            if b >= c then (a,b,c)
            elif a >= c then (a,c,b)
            else (c,a,b)
        else
            if a >= c then (b,a,c)
            elif b >= c then (b,c,a)
            else (c,b,a)
    

    sort_flat_if - have a separate if for each result at the top level

    let sort_flat_if (a,b,c) =
        if a >= b && b >= c then       (a,b,c)
        elif a >= c && c >= b then     (a,c,b)
        elif b >= a && a >= c then     (b,a,c)
        elif b >= c && c >= a then     (b,c,a)
        elif (*c >= a &&*) a >= b then (c,a,b)
        else (*c >= b && b >= a*)      (c,b,a)
    

    sort_array - create an array, use Array.sort and map the results back into a tuple when returning the result

    let sort_array (a,b,c) =
        let sorted = Array.sort [| a;b;c |]
        (sorted.[0], sorted.[1], sorted.[2])
    

    To test these I generated arrays of 4000, 40000 and 400000 3-element tuples and timed how long each method took to sort them.

    time taken to sort n 3-element tuples, in milliseconds

    method n=4000 n=40000 n=400000
    sort_nested_if 1.2626 13.9014 193.3619
    sort_flat_if 1.7864 23.4633 258.2538
    sort_array 1.2424 11.9907 132.4312

    OK, now it's probably obvious why I didn't just bin this little experiment - sort_array is surprisingly the clear winner. I would have guessed that the overhead of building an array and calling Array.sort on a collection way smaller than you'd normally bother sorting would be insane. But apparently it's not!

    Conclusion

    It's surprising how many different ways you can write some relatively simple algorithms. Some of them are obviously pretty awful, like the recursive enumerate function (though I'm sure I can rearrange it to take advantage of tail call elimination) - and some were surprisingly performant, like the sort_array function in the final example. I've noticed that some other Project Euler people maintain a sort of library of functions they can re-use. Eventually I'd like to do something like this, but until it becomes unmanageable I'll just keep a Gist on GitHub:

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1134214 2017-03-03T11:00:05Z 2017-03-08T17:55:14Z Fedora - Jupyter on a Linux remote using systemd

    When you want to do some experimentation or put together a simple code-based presentation Jupyter notebooks are a powerful tool to have at your disposal. But if you use a number of devices over a few locations it can be useful to have a single instance hosted somewhere central (Linode, Digital Ocean, wherever) that you can access from any device wherever you are. There are a handful of ways that you can achieve this:

    1. log in to your remote machine, set Jupyter up and run jupyter notebook (perhaps in a tmux session) then log out - do this whenever your machine reboots
    2. as above but using an existing docker image
    3. spin up an Azure notebook
    4. ... or we could do something like #1 - but have it setup under a separate user and administered via a systemd service

    All four of the above are fine for different reasons and use-cases but here I'll talk about how I put #4 together in a little Linode instance running Fedora 25 - it's relatively simple, you have control over the kernels installed, and it's another excuse to get a bit more in-depth with another Linux subsystem (systemd).

    Requirements

    All you need is a Linux system which uses systemd (Fedora 15.0 or newer, Debian 8.x or newer, Ubuntu 15.04 or newer, for example) which you have sudoer level access on, and Python 3.x. It's probably pretty straight-forward to set this up on systems using the SysV init but I won't cover them here.

    Install and Set Up Jupyter 

    First thing we need to do is install Jupyter and set up the user context which Jupyter will run under - a user called "jupyter":

    $ sudo python3 -m ensurepip
    $ sudo pip install jupyter
    $ sudo useradd jupyter
    $ sudo passwd jupyter

    Next we should switch to the new jupyter user, create the directory our notebooks will live in and generate the Jupyter config we'll mess around with:

    $ su - jupyter
    $ mkdir notebooks
    $ jupyter notebook --generate-config
    

    The last command will create a new file ~/.jupyter/jupyter_notebook_config.py which we'll do a little messing around with shortly, but before this we'll set up a password 

    $ python3
    Python 3.5.2 (default, Sep 14 2016, 11:28:32) 
    [GCC 6.2.1 20160901 (Red Hat 6.2.1-1)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from notebook.auth import passwd
    >>> passwd() # below I enter "password123"
    Enter password: 
    Verify password: 
    'sha1:2eff88aac285:385c87867bd18fe852ee1d56b1010d4beed96969'
    

    This will be used to log in to the application when its running. Open up the ~/.jupyter/jupyter_notebook_config.py file in a text editor and add/modify the following lines (using the SHA1 hash returned by the above):

    c.NotebookApp.port = 8888
    c.NotebookApp.ip = '0.0.0.0'
    c.NotebookApp.password = 'sha1:2eff88aac285:385c87867bd18fe852ee1d56b1010d4beed96969'

    Setting up a Jupyter systemd service

    Now we want to create a new systemd service so we can make sure our Jupyter notebook runs on startup, handles logging nicely and has all the other bells-and-whistles afforded to us by systemd. This is surprisingly simple - we want to create a new file jupyter.service in /usr/lib/systemd/system - this will tie together our newly installed Jupyter software and our newly setup jupyter user - using your favourite text editor create it so it looks like the below:

    $ sudo cat /usr/lib/systemd/system/jupyter.service
    [Unit]
    Description=Jupyter
    
    [Service]
    Type=simple
    PIDFile=/var/run/jupyter.pid
    ExecStart=/usr/bin/jupyter notebook --no-browser
    WorkingDirectory=/home/jupyter/notebooks
    User=jupyter
    Group=jupyter
    
    [Install]
    WantedBy=multi-user.target
    

    Now all that's left to do is cross our fingers, enable the service, kick it off, then browse to our remote box and log in with our password:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable jupyter
    $ sudo systemctl start jupyter
    

    And if you want you can stop here - bookmark your http://www.xxx.yyy.zzz:port address and you're all set!

    Conclusion

    This was initially just an experiment - an excuse to test out my ability to put together a systemd .service file and do something more with a mostly-idle linux server sitting in a facility somewhere in Amsterdam. However I have found that I really like using this setup. When I was first shown Jupyter (née IPython) I was unimpressed and didn't see the point. However over the last few days I've been working through Project Euler problems again while teaching myself F# (using the IfSharp kernel) and I have found that it lends itself very well to my problem solving workflow on Project Euler.

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1131926 2017-02-24T11:00:03Z 2018-02-18T12:32:52Z Fedora - getting Tor and SELinux to play nice

    tl;dr 

    If you have weird SELinux permissions issues using Tor on Fedora, skip to "The Solution" - we're basically gonna add a couple of custom SELinux policies and update the permissions on the /var/lib/tor directory.

    Intro

    I'm trying to get a little bit out of my cosy Debian comfort zone, and since I have a few friends working at Red Hat I figured I'd try out Fedora. While I was teaching myself about systemd, however, I ran into an issue - starting up a Tor hidden service using systemd was fine immediately after it was installed, but after a reboot it'd repeatedly fail. The following is what was displayed when I ran systemctl status tor.service:

        tor.service - Anonymizing overlay network for TCP
           Loaded: loaded (/usr/lib/systemd/system/tor.service; enabled; vendor preset: disabled)
           Active: failed (Result: start-limit-hit) since Mon 2017-02-20 11:16:44 CET; 2min 42s ago
          Process: 1150 ExecStartPre=/usr/bin/tor --runasdaemon 0 --defaults-torrc /usr/share/tor/defaults-torrc -f /etc
        
        Feb 20 11:16:44 localhost.localdomain systemd[1]: tor.service: Service hold-off time over, scheduling restart.
        Feb 20 11:16:44 localhost.localdomain systemd[1]: Stopped Anonymizing overlay network for TCP.
        Feb 20 11:16:44 localhost.localdomain systemd[1]: tor.service: Start request repeated too quickly.
        Feb 20 11:16:44 localhost.localdomain systemd[1]: Failed to start Anonymizing overlay network for TCP.
        Feb 20 11:16:44 localhost.localdomain systemd[1]: tor.service: Unit entered failed state.
        Feb 20 11:16:44 localhost.localdomain systemd[1]: tor.service: Failed with result 'start-limit-hit'.
    

    The Problem

    Looking a little closer at the logs in journalctl it seems that the tor process is not able to access the directory structure under /var/lib/tor - the toranon user's home directory. 

        Feb 20 11:16:43 localhost.localdomain tor[1150]: Feb 20 11:16:43.033 [warn] Directory /var/lib/tor/ssh/ cannot be read: Permission denied
        Feb 20 11:16:43 localhost.localdomain tor[1150]: Feb 20 11:16:43.033 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
        Feb 20 11:16:43 localhost.localdomain tor[1150]: Feb 20 11:16:43.034 [err] Reading config failed--see warnings above.
    

    This appears to be down to SELinux kicking in and telling us the process is trying to do something it's not explicitly permitted to do according to the SELinux policies currently loaded. A quick Google search for this error turns up a handful of results from other Fedora users:

    In each of these a couple of workarounds are proposed, such as disabling SELinux or setting the hidden service directory to be /var/lib/tor itself. Disabling SELinux would probably work, but I'd rather not do that - I'd rather understand what's going on and eventually fix it properly. I also don't want to use the other workaround, since it would prevent me from running two separate hidden services - what if I want to run ssh and operate a cryptocurrency-based online drugs supermarket[1]?
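    For reference, running two hidden services side by side just means giving each one its own HiddenServiceDir in torrc - a hypothetical fragment (the directory names and ports are examples, not from my actual config):

    ```
    # /etc/tor/torrc - each hidden service gets its own directory under /var/lib/tor
    HiddenServiceDir /var/lib/tor/ssh/
    HiddenServicePort 22 127.0.0.1:22

    HiddenServiceDir /var/lib/tor/web/
    HiddenServicePort 80 127.0.0.1:80
    ```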

    After a bit more digging I found a bug report on Red Hat's Bugzilla which described exactly the problem I saw (working, then failing after reboot). However it also confirmed that as of 14th February 2017 this was still an open issue (poor Kyle Marek spent his Valentine's Day debugging Tor) - https://bugzilla.redhat.com/show_bug.cgi?id=1375369 - so there's no "proper" fix yet.

    The Solution

    Until there's an "official" solution there is a semi-smart workaround proposed by the SELinux Alert Browser - generating a local policy module to permit the access that SELinux restricted. The following steps assume that you've set up a hidden service in your /etc/tor/torrc and that it's failing to start with some weird permissions error.

    Firstly let's sort out the permissions for the toranon user's home directory - some people reported that the root user owned some folders in this directory, which isn't really desirable:

        $ sudo find /var/lib/tor ! -user toranon
        $ sudo chown toranon /var/lib/tor/some/folder
        $ sudo find /var/lib/tor ! -group toranon
        $ sudo chown :toranon /var/lib/tor/some/folder
    

    In my case /var/lib/tor itself was owned by root - I chowned it to toranon just in case. Next let's add an SELinux policy to give the Tor service the permissions it wants:

        $ sudo ausearch -c 'tor' --raw | audit2allow -M tor-workaround
        ******************** IMPORTANT ***********************
        To make this policy package active, execute:
        
        semodule -i tor-workaround.pp
        $ sudo semodule -i tor-workaround.pp
    

    Now after a reboot we should see that the service has successfully started up without any errors

        $ sudo systemctl reboot
        (later...)
        $ sudo systemctl status tor.service
         tor.service - Anonymizing overlay network for TCP
           Loaded: loaded (/usr/lib/systemd/system/tor.service; enabled; vendor preset: 
           Active: active (running) since Sun 2017-02-19 15:49:42 CET; 18min ago
          Process: 768 ExecStartPre=/usr/bin/tor --runasdaemon 0 --defaults-torrc /usr/s
         Main PID: 825 (tor)
            Tasks: 1 (limit: 4915)
           CGroup: /system.slice/tor.service
                   └─825 /usr/bin/tor --runasdaemon 0 --defaults-torrc /usr/share/tor/de
        
        Feb 19 15:49:42 localhost.localdomain Tor[825]: Parsing GEOIP IPv6 file /usr/sha
        Feb 19 15:49:42 localhost.localdomain Tor[825]: Bootstrapped 0%: Starting
        Feb 19 15:49:42 localhost.localdomain Tor[825]: Bootstrapped 80%: Connecting to 
        Feb 19 15:49:42 localhost.localdomain systemd[1]: Started Anonymizing overlay ne
        Feb 19 15:49:42 localhost.localdomain Tor[825]: Signaled readiness to systemd
        Feb 19 15:49:43 localhost.localdomain Tor[825]: Opening Control listener on /run
        Feb 19 15:49:43 localhost.localdomain Tor[825]: Bootstrapped 85%: Finishing hand
        Feb 19 15:49:43 localhost.localdomain Tor[825]: Bootstrapped 90%: Establishing a
        Feb 19 15:49:44 localhost.localdomain Tor[825]: Tor has successfully opened a ci
        Feb 19 15:49:44 localhost.localdomain Tor[825]: Bootstrapped 100%: Done
    

    Conclusion

    It was a little bit unfortunate that I bumped into this when I was trying to familiarise myself with systemd, but it was good to have it sorted and I think that the next thing I should explore is SELinux. Perhaps I could understand and contribute the proper fix since this is just a little temporary workaround. 

    [1] - note: I do not want to run a cryptocurrency-based online drugs supermarket
    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1129990 2017-02-10T11:00:03Z 2017-02-20T12:43:53Z XMonad - Issues after dist-upgrade

    Every other time I run apt-get dist-upgrade my XMonad fails to rebuild with the error "Failed to load interface for 'XMonad.Actions.Volume'"

    The annoying thing is that this happens infrequently enough that each time I forget what the resolution is, google a bunch of things, then eventually stumble on the answer and move on. Anyway, the reason it's a little tricky is that most pages or posts suggest that the correct resolution is to issue the following command:

        $ sudo cabal update && sudo cabal install xmonad-extras
    

    However for some reason I have a bunch of these Haskell packages installed in ~/.ghc, and running this command using sudo doesn't update them, so instead I have to run it normally:

        $ cabal update && cabal install xmonad-extras

    And that's it!

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1116436 2017-01-13T11:00:02Z 2017-01-13T11:00:02Z IronPython - Integer performance improvements

    When I was puddling around in IronPython ahead of an upcoming project I spotted something interesting - when we want to deal with integers within the IronPython interpreter we frequently call a function in Microsoft.Scripting.Runtime.ScriptingRuntimeHelpers called Int32ToObject:

        public static object Int32ToObject(Int32 value) {
            // caches improves pystone by ~5-10% on MS .Net 1.1, this is a very integer intense app
            // TODO: investigate if this still helps perf. There's evidence that it's harmful on
            // .NET 3.5 and 4.0
    
            if (value < MAX_CACHE && value >= MIN_CACHE) {
                return cache[value - MIN_CACHE];
            }
            return (object)value;
        }
    
    For integers in the range -100 to 1,000 we maintain a cached array of objects already converted from Int32. This is called a lot, even in relatively innocuous looking programs - to see this we can add a simple Console.Out.WriteLine("Int32ToObject") in there and count the number of times it occurs in a program that simply prints "Hello, world!":
        $ cat hello.py
        print "Hello, world!"
        $ ./bin/Debug/ipy.exe hello.py | grep -c "Int32ToObject"
        1317
    

    The code itself specifically references the pystone benchmark (which I found here) in a comment suggesting that removing the cache could improve performance on versions of .NET newer than 3.5 - which appears to be the minimum version that later versions of IronPython support.
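    Incidentally, CPython plays the same trick with a small-integer cache - it preallocates ints from -5 to 256 - and you can observe it directly. A quick demo (this is a CPython implementation detail, not a language guarantee; the int() calls are there to sidestep constant folding):

    ```python
    # CPython preallocates small ints (-5..256): equal values in that range are
    # the same object, while equal values outside it generally are not
    a, b = int("256"), int("256")
    c, d = int("257"), int("257")
    print(a is b)  # inside the cache - same object
    print(c is d)  # outside the cache - distinct objects
    ```
    
    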

    I built the Release configuration of IronPython both before and after removing this cache functionality, and tested the default pystone benchmark on my work computer (a pretty hefty 8-core Xeon E3-1275 @ 3.6 GHz with 32GB RAM). The results are below - I ran the test 10 times and took the average. The values output by the benchmark are "pystones per second", where one "pystone" is an iteration through the main loop inside the Proc0() function, which performs a number of integer operations and function calls:


    Run      Before   After
    1.       238262   234663
    2.       239115   234595
    3.       245149   245931
    4.       237845   302562
    5.       228906   295027
    6.       248294   275535
    7.       258694   297271
    8.       246650   282791
    9.       235741   296104
    10.      233604   274396
    Average  241226   273887
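    The averages and the improvement figure can be reproduced with a quick script over the raw runs from the table:

    ```python
    # raw pystones-per-second figures from the ten runs above
    before = [238262, 239115, 245149, 237845, 228906, 248294, 258694, 246650, 235741, 233604]
    after = [234663, 234595, 245931, 302562, 295027, 275535, 297271, 282791, 296104, 274396]

    avg_before = sum(before) / len(before)  # 241226.0
    avg_after = sum(after) / len(after)     # 273887.5
    gain = avg_after - avg_before
    print("%.0f extra pystones/sec, %.1f%% improvement" % (gain, 100 * gain / avg_before))
    ```
    
    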

    So with the fix we see 32,661 more of these iterations per second, which is roughly a 13.5% improvement. This makes sense - presumably casting each int to object has been improved to the point where it's nearly free, leaving only the overhead of a simple function call.

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1119049 2016-12-30T16:25:51Z 2020-10-15T19:06:17Z 2016 and 2017

    Reflecting on 2016

    It's easy to lose track of achievements and get bogged down in stress in the short term so at the end of the year I like to look back at what I've done and think about what I liked or what went well.

    Debian/Linux

    First off not a huge one but when I replaced my ailing MacBook Air with a Thinkpad X250 I moved to using Linux and Xmonad as my daily driver (posts one, two and three on this). This relatively minor switch actually kicked off a relatively productive year in terms of professional development - I ended up having to re-learn a lot about Linux that I had forgotten, and bit-by-bit ended up learning and creating more and more. Many times on macOS I'd find some library or utility wouldn't work without a lot of config, and I'd often just give up - once I got past the initial "setup my environment" step Debian ended up smoother than macOS.

    Python

    I created and released a handful of Python packages on PyPI (em73xx, rbcz, xnm, xsms) - none of them have huge widespread appeal but they're fairly well written, and they let me work through the process of releasing on PyPI and producing documentation. I also wrote a fair amount about Python, and worked through Philip Guo's excellent CPython Internals course, which I thoroughly enjoyed.

    Blog

    I wrote a total of 28 blog posts in 2016, which is slightly more than one every fortnight. They've only had a few thousand views in total but I've thoroughly enjoyed learning and creating something new, then writing it up. It's good to practice any sort of technical writing (even if it's an informal blog post) and it's pretty rewarding to look back on what you've done. Here's a handful of the ones I enjoyed writing most

    Homepage

    I also put together a new homepage @ www.mclemon.org. My company use ASP.NET MVC for all new UIs, and since I'm not always involved in the front-end development I felt like I was falling behind. Instead of shying away from this and resolving to look into it "later" I tackled it head on and set about creating a simple home page with MVC and deploy it to Azure. What started off as a simple static "Contact" page grew into a neat little aggregator of my Posthaven blogs, which I can extend pretty easily to pull in other stuff (I've tried out GitHub and Twitter, but my commits push everything out of the way, and I rarely post anything useful on Twitter!). It should degrade reasonably well on mobile, but as you can see from the screenshot there's a couple of whitespace issues I clearly need to tackle!

    2017

    I don't like resolutions, but I do think it's good to think ahead about some things I'd like to work on - even if I ultimately get sidetracked on something else.

    IronPython

    I've already written a couple of articles about IronPython, but I'd like to keep diving deeper. I have a really nice series of posts in the works which could end up being useful to the community at large so I'm pretty stoked about getting them completed. I'd ultimately like to start contributing to the project, hopefully by the time I've finished these I'll be knowledgeable enough to be of use to Alex Earl et al.

    Czech

    My Czech has stagnated - I've got some friends I write to in Czech, but I'd like to read and write far more regularly. I've found that if I sit down I can break down most sentences with a bit of patience, but my vocabulary and sentence construction are still pretty poor.

    Books

    There are so many books I'd like to read - but in terms of professional development I'd like to revisit some fundamentals - so working my way through the classic SICP is top of the list, but I'd also like to work through Grokking Algorithms In Python as it looked like a nice read.

    Alfie

    My little dog is adorable, but he's way too boisterous - I'd like him to:

    1. return on command every time I call him (without resorting to tricking him with biscuits and the frisbee!)
    2. walk to "heel" better
    3. not jump on strangers
    4. chill out in the pub

    Of course #4 is the most important one :)

    Clock

    I mostly cracked the Brno astronomical clock (aka the cock clock) over the summer, and started writing a blog post about it - but I needed to create a couple more timelapses and visualisations to complete the picture and never found time to do it. This clock is kinda famous for being unintelligible, so it'd be nice to share the knowledge with the world!

    General Focus

    In general I'd like to focus more on a couple of disciplines or topics and become more of an expert in them. Looking back at my posts I've covered a fairly broad spectrum of things, but I never really went into much detail on any of them. I'm spreading myself a little thinly and it doesn't help that I've got the attention span of a goldfish!


    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1114377 2016-12-23T11:00:04Z 2020-10-15T19:06:33Z Discover a Linux Utility - xssstate

    To learn more about Debian and Linux in general I'm selecting utilities at random from my PATH using the command below, learning what they do and writing a blog post about it. Previously: Part 1, Part 2, Part 3

        $ (for folder in `echo $PATH | sed "s/:/\\n/g"`; do ls -1 $folder; done; ) | shuf -n 1 | xargs man

    Today's utility is xssstate, which lets you check the status of the X window system's screensaver. It's written by the suckless guys, who've created a number of very good tools, such as surf (a minimalist web browser) and dmenu (an autocompleting program launcher), both of which I use regularly.

    The utility itself is pretty simple - there are only four command line switches including -v, so this will be a pretty short post. First we can check whether the screensaver is currently enabled using the -s switch:

        $ xssstate -s
        off
    

    Obviously the screensaver is off, since I am actively using this computer - however if the screensaver was active it'd print "on" and if it was disabled altogether you'd see "disabled".

    To check the time idle in milliseconds, use the -i switch:

        $ xssstate -i
        2
        $ sleep 5 && xssstate -i
        4947
    

    And to get time in milliseconds until the screensaver activates, invoke it with -t:

        $ xssstate -t
        599998
        $ sleep 10 && xssstate -t
        590052
    

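    Since -i and -t just print a bare number, it's trivial to script around xssstate - a small sketch (the helper names are mine, and it assumes the binary is on your PATH):

    ```python
    import subprocess

    def parse_ms(raw):
        # xssstate -i / -t print a bare millisecond count followed by a newline
        return int(raw.decode("ascii").strip())

    def idle_ms():
        # milliseconds since the last input event
        return parse_ms(subprocess.check_output(["xssstate", "-i"]))

    def ms_until_screensaver():
        # milliseconds until the screensaver kicks in
        return parse_ms(subprocess.check_output(["xssstate", "-t"]))
    ```
    
    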
    The way the utility does this is by using some functionality provided by an X11 library, wrapped in a handful of switch statements (they have their own neat little github-style source browser if you want to check out xssstate.c in its entirety):

        // ...
        info = XScreenSaverAllocInfo();
        XScreenSaverQueryInfo(dpy, DefaultRootWindow(dpy), info);
    
        if (showstate) {
        	switch(info->state) {
    	case ScreenSaverOn:
    		printf("on\n");
    		break;
    	case ScreenSaverOff:
    		printf("off\n");
    		break;
    	case ScreenSaverDisabled:
    		printf("disabled\n");
    		break;
    	}
        } else if (showtill) {
    	switch(info->state) {
    	case ScreenSaverOn:
    		printf("0\n");
    		break;
    	case ScreenSaverOff:
    		printf("%lu\n", info->til_or_since);
    		break;
    	case ScreenSaverDisabled:
    		printf("-1\n");
    		break;
    	}
        } else if (showidle) {
    	printf("%lu\n", info->idle);
        }
        // ...
    

    When I do these articles I like to show some practical real-life usage of the utility - in this case I decided to add a little timer to my xmobar showing how long my computer had been idle. To do this I added a Run Com entry to my xmobarrc:

        -- also stick %xssstate% into the template
        Run Com "xssstate" [ "-t" ] "xssstate" 10,
    

    This ends up showing with something like the below - apologies for shaky-cam!

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1108746 2016-12-09T11:00:06Z 2016-12-09T11:00:06Z IronPython - Efficient string concatenation

    Previously I'd done a bit of fiddling around with Python, revisiting some string concatenation benchmarks from an old-ish article and trying to explain some unexpected results. After playing around a bit with IronPython I was curious whether it'd be faster or slower than CPython on windows.

    I installed the latest versions of both IronPython (2.7.5) and CPython (2.7.12) into my Windows 10 VM and re-ran the same set of tests.

    The most interesting thing I learned previously was that some changes to how CPython allocates memory for the new buffer brought the "naive" method on par with the optimised versions. As it turns out, IronPython doesn't have this optimisation - so running stest.py we get the following:

        $ ipy64 stest.py 1
        method 1
        time 2406.60858154 ms
        output size  86 kb
    
        $ ipy64 stest.py 6
        method 6
        time 46.9284057617 ms
        output size  86 kb
    

    IronPython is a totally different beast to CPython, so my previous method of debugging - taking the code and examining it with the dis module doesn't yield anything useful:

    This is because IronPython compiles it into a different set of instructions to be executed using the .NET CLR (it's important to note that it does not go directly to .NET IL - there's still a level of instructions above this, similar to CPython's opcodes).

    However since we're on Windows with .NET we do have Visual Studio - which is arguably easier than working through python bytecode instructions in a text editor. To begin with it's extremely simple to find out where exactly we spend most of our execution time using dotTrace by JetBrains:


    So the program execution is split roughly 50/50: half is spent in initialisation (ImportSite, again!), which isn't included in our benchmark, and the remaining half is spent in String.Concat() in mscorlib (source here) - which is what we're interested in:

        [System.Security.SecuritySafeCritical]  // auto-generated
        public static String Concat(String str0, String str1) {
            Contract.Ensures(Contract.Result() != null);
            Contract.Ensures(Contract.Result().Length ==
                (str0 == null ? 0 : str0.Length) +
                (str1 == null ? 0 : str1.Length));
            Contract.EndContractBlock();
    
            if (IsNullOrEmpty(str0)) {
                if (IsNullOrEmpty(str1)) {
                    return String.Empty;
                }
                return str1;
            }
    
            if (IsNullOrEmpty(str1)) {
                return str0;
            }
    
            int str0Length = str0.Length;
            
            String result = FastAllocateString(str0Length + str1.Length);
            
            FillStringChecked(result, 0,          str0);
            FillStringChecked(result, str0Length, str1);
            
            return result;
        }
    

    This explains why things are so slow - when concatenating two strings there are no realloc-based tricks like CPython had - we allocate a new memory buffer every time, copy both strings into it, and let the garbage collector handle the old buffers.
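    The asymptotic difference is worth spelling out: if every += allocates a fresh buffer and copies both halves, appending n single characters copies O(n²) bytes in total, whereas a realloc/doubling strategy keeps it O(n). A toy model of the bytes copied:

    ```python
    def copies_fresh_buffer(n):
        # new buffer each time: the i-th append copies all i existing chars (plus 1 new)
        return sum(i + 1 for i in range(n))

    def copies_doubling(n):
        # doubling buffer: bytes are only recopied when the buffer grows
        copied, capacity = n, 1        # every char is written once...
        while capacity < n:
            capacity *= 2
            copied += capacity         # ...plus recopies at each growth step
        return copied

    print(copies_fresh_buffer(10000))  # ~50 million
    print(copies_doubling(10000))      # ~43 thousand
    ```
    
    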

    Sadly it's pretty non-trivial for someone like me to implement a similar optimisation here - since we depend on the underlying string implementation in .NET, we're stuck with how string concatenation is implemented there. I toyed with the idea of writing a hacky reimplementation of FastAllocateString as FastReallocateString specifically for the += operator (it's possible - we'd need to change PythonBinaryOperationBinder.BindDelegate() to handle Add and AddAssign differently), but this would've meant getting stuck into the mscorlib sources in coreclr - and I'd rather stay in Python-land for the time being.

    However since it's possible to access the .NET standard libraries from IronPython we can at least test how System.Text.StringBuilder performs, since it is designed to solve this very problem. So I setup the stest.py code I previously used, and re-ran them on my Windows 10 VM for both CPython 2.7.12 and IronPython 2.7.5. Just to quickly recap, here are the string concatenation methods I tested:

    Method 1: simple concatenation: s1 += s2

    Method 2: concatenation using MutableString (s1 += s2, where s1 is a MutableString)

    Method 3: appending to a long array of char

    Method 4: building a list of strings, then calling "".join()

    Method 5: writing to a cStringIO buffer in memory using write()

    Method 6: same as Method 4 but using a list comprehension inline

    Method 7: using System.Text.StringBuilder (IronPython only)
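    For reference, minimal sketches of a few of these methods (the function names are mine, not from stest.py, and I'm using io.StringIO in place of Python 2's cStringIO for portability):

    ```python
    import io

    def method1_naive(parts):
        # method 1: repeated += on an ordinary string
        s = ""
        for p in parts:
            s += p
        return s

    def method4_join(parts):
        # method 4: build a list of strings, then "".join() it
        chunks = []
        for p in parts:
            chunks.append(p)
        return "".join(chunks)

    def method5_stringio(parts):
        # method 5: write into an in-memory buffer
        buf = io.StringIO()
        for p in parts:
            buf.write(p)
        return buf.getvalue()

    def method6_join_inline(parts):
        # method 6: same as method 4 but built inline
        return "".join(p for p in parts)
    ```
    
    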

    CPython 2.7.12:

             runtime (ms)  concatenations per second
    method 1 16.00 1,250,000
    method 2 108.99 183,503
    method 3 14.99 1,334,222
    method 4 3.00 6,666,666
    method 5 6.00 3,333,333
    method 6 2.00 10,000,000

    IronPython 2.7.5:

             runtime (ms)  concatenations per second
    method 1 2517.44 7,944
    method 2 3968.87 5,039
    method 3 25.39 787,711
    method 4 42.13 474,721
    method 5 35.56 562,429
    method 6 33.22 602,046
    method 7 22.43 891,662

    Conclusion

    So in IronPython the most efficient way to join strings together is to hook into .NET's System.Text library and use StringBuilder, no surprises there. What was surprising was how much slower IronPython was than CPython. I'm curious if this is just a freak result or if IronPython is known to be pretty slow. I'll probably not attempt to speed up the AddAssign op in IronPython, however I would like to investigate why IronPython is so slow, and if there are any plans to improve things. In addition I was surprised that the realloc trick had little-to-no effect on CPython in Windows (i.e. method 1 was slow even on 2.7.12).

    I am a little sick of this benchmark now - I might revisit it one final time to compare it across CPython, IronPython, PyPy and Pyjion to complete the picture, but only if I'm really bored :)

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1106896 2016-11-19T11:00:00Z 2016-11-19T11:00:01Z IronPython - tackling some unexpected Exceptions

    After I listened to an episode of Talk Python To Me featuring Alex Earl from the IronPython project, I learned that not only is IronPython not dead/dying, it's actually seeing a bit of a resurgence. I grabbed the sources from the IronLanguages GitHub, set up my dev environment, opened it up and launched the IronPythonConsole project, hoping to see the familiar Python REPL.

    However instead I saw that it had hit an exception:

    I was frustrated at first, thinking I'd done something wrong, but realised that getting to the bottom of an Exception was a fine way to introduce yourself to a new codebase.

    The exception itself is a ZipImportError with the text "not a Zip file" and is thrown in the constructor for zipimporter.

    Python Confession #1: I had never heard of or used zipimporter before.

    Since I'd never heard of the class before I had no idea why the IronPython runtime would be calling this and especially on something which didn't appear to exist. So it's time to dig through the call stack to see where this comes from:

    It appears that PythonCommandLine.ImportSite kicks this process off so that's where I started looking:

        private void ImportSite() {
            if (Options.SkipImportSite)
                return;
        
            try {
                Importer.ImportModule(PythonContext.SharedContext, null, "site", false, -1);
            } catch (Exception e) {
                Console.Write(Language.FormatException(e), Style.Error);
            }
        }
    

    It turns out that site is a special Python module which is imported by default when the interpreter starts (for all implementations - IronPython, Jython, PyPy and good old vanilla CPython). It's responsible for doing some platform-specific module path setup.

    Python Confession #2: I had never heard of the site module before.

    So how does importing site cause us to execute some code relating to zipimporter? Searching through the call stack at the point of the Exception shows that all this seems to come from FindImporterForPath, which takes every function in path_hooks and attempts to apply it to the path we're importing.

        /// <summary>
        /// Finds a user defined importer for the given path or returns null if no importer
        /// handles this path.
        /// </summary>
        private static object FindImporterForPath(CodeContext/*!*/ context, string dirname) {
            List pathHooks = PythonContext.GetContext(context).GetSystemStateValue("path_hooks") as List;
    
            foreach (object hook in (IEnumerable)pathHooks) {
                try {
                   object handler = PythonCalls.Call(context, hook, dirname);
    
                    if (handler != null) {
                        return handler;
                    }
                } catch (ImportException) {
                    // we can't handle the path
                }
            }

            return null;
        }


    So we call every path_hook with the module we're importing as an argument using PythonCalls.Call(). The path_hooks themselves come from the sys module:

        sys.path_hooks

    A list of callables that take a path argument to try to create a finder for the path. If a finder can be created, it is to be returned by the callable, else raise ImportError.

    Python Confession #3: I had never heard of or used path_hooks before.

    So what is in path_hooks? If I keep hitting continue on the Visual Studio debugger the Exception is caught, I reach the python REPL and can inspect what is in sys.path_hooks:

    And there it is - zipimporter. Now we're approaching an explanation: when the IronPython interpreter is initialised it imports the site module, which takes everything in path_hooks and applies it to all the entries in our path - but since there are no .zip files anywhere in our path, zipimporter (the only path hook) cannot find anything to operate on, so it throws an exception which is normally caught and handled.

    So this is normal behaviour - the exception is even expected, since the path_hooks documentation states that a hook which cannot create a finder for a path should raise ImportError.
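    You can reproduce this "expected failure" behaviour in CPython too: hand zipimporter a path that isn't a zip archive and it signals "not mine" by raising ZipImportError, exactly as the import machinery expects:

    ```python
    import sys
    import tempfile
    import zipimport

    print(sys.path_hooks)  # zipimporter is registered here by default

    # a plain directory is a perfectly valid sys.path entry, but not a zip file,
    # so the zip hook refuses it with ZipImportError
    plain_dir = tempfile.mkdtemp()
    try:
        zipimport.zipimporter(plain_dir)
    except zipimport.ZipImportError as e:
        print("rejected:", e)
    ```
    
    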

    Conclusion

    OK nothing special has happened here since IronPython is behaving exactly as it should, however unexpected it was to me. That said, this is a very nice way to learn some new Python concepts, like:

    1. the zipimport module
    2. the site module
    3. sys.path_hooks 

    I even have a half-working clone of zipimporter for tarballs called tgzimporter, but there's little need for such functionality as I suspect that even zipimporter is seldom used. 

    It would've been easy to just keep hitting the "F5" key until I hit the REPL, but then I would likely have struggled to find a way to approach the source code and perhaps would've put it off indefinitely. Hopefully now I'll find some way to continue to improve my understanding and contribute to the IronPython project.

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1048404 2016-11-11T11:00:04Z 2020-10-15T19:08:06Z Thinkpad X250 - SMS and GPS via Python

    When I was messing around trying to get the 4G modem working on my Thinkpad X250 for this post I ended up doing a little debugging by connecting to the modem over serial using cu:

        $ cu -h -l /dev/ttyACM0
        Connected.
        AT^M
        OK
    

    It turns out that the Sierra EM7345 modem in my Thinkpad can be controlled through AT commands sent over serial. Over at zukota.com there's a guy who's done a seriously good job of experimenting with this modem and documenting what was previously not very well documented.  

    For example if we wanted to use a SIM locked with the PIN "1234", then send the message "Hello, world!" to the Czech number 775356278, we would connect as above and enter the following:

        AT+CPIN="1234"^M
    
        OK
        AT+CMGS="+420775356278"^M
        > Hello, world!^Z^M
        +CMGS: 40
    
        OK
    

    Getting the GPS position involves issuing the XLCSLSR AT command and parsing its output. This is a little more complicated, since there's a slight delay in the response, and the response contains dozens of fields (I've included the request/response in its entirety below so you can see):

        AT+XLCSLSR=1,1,,,,,,,,,,^M
        +XLCSLSR: request id 2
    
        OK
    
        +XLCSLSR: 2, 49.195669 N, 16.606075 E, 119.996932, 48.743179, 39.616302,143, 100.997169,67,2016/05/10,18:38:22,0,1,75.45,2.28,-0.25,2.20,0.64,239919,239919.74,,,4.50,2.50,3.50,118.92,62.80,100.98,,,1,1896,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,4,73.04,70.04,21.00,1,,,,21 ,61.03,283.14,41.00,1,,,,16 ,20.01,173.09,30.00,1,,,,10 ,17.01,100.05,32.00,1,,,,29 
    
        OK
    

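    Pulling the position out of that response is mostly string-splitting - a hypothetical parser for the latitude/longitude fields (the field layout is inferred from the sample response above, not from official documentation):

    ```python
    def parse_position(line):
        # e.g. "+XLCSLSR: 2, 49.195669 N, 16.606075 E, 119.996932, ..."
        fields = [f.strip() for f in line.split(",")]
        lat_value, lat_hemisphere = fields[1].split()
        lon_value, lon_hemisphere = fields[2].split()
        # southern/western hemispheres become negative coordinates
        lat = float(lat_value) * (1 if lat_hemisphere == "N" else -1)
        lon = float(lon_value) * (1 if lon_hemisphere == "E" else -1)
        return lat, lon

    sample = "+XLCSLSR: 2, 49.195669 N, 16.606075 E, 119.996932, 48.743179"
    print(parse_position(sample))
    ```
    
    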
    I was delighted when I found this out, as I had recently discovered the pyserial library, and realised I could interact with the modem and explore/expose some of the modem's functionality via a Python library.

    em73xx

    I rolled up my sleeves and wrote a library called em73xx (PyPI, GitHub) which will issue the necessary AT commands to send and receive SMS messages, and retrieve the current location using the modem's GPS functionality.

    To install the library either retrieve from pypi using pip:

        $ pip install em73xx
    

    ... or you can clone the smcl/em73xx repo and install using setup.py:

        $ git clone https://github.com/smcl/py-em73xx
        $ cd py-em73xx
        $ python setup.py install
    

    Once it's installed you can use em73xx by importing the Modem class and instantiating it with the path to the serial device on your system (probably /dev/ttyACM0, as below) and optionally specify the PIN for the sim inserted:

        from em73xx import Modem
        modem = Modem("/dev/ttyACM0", pin="1234")
    

    To retrieve all the SMS messages on the SIM, and print them to stdout we can use the getSMS method:

        messages = modem.getSMS()
        for m in messages:
        print("from %s:" % (m.sender))
            print("\t%s" % (m.message))
    

    If we want to send a message we can use the sendSMS method:

        # request a 60 minute tram ticket in brno!
        modem.sendSMS("90206", "BRNO")
    

    And finally if we want to retrieve the current GPS position, we can use the getGPS method - note that the modem can fail to get a GPS fix, and will return None if so:

        gps = modem.getGPS()
        if gps:
            print(gps.latitude)
            print(gps.longitude)
    

    xsms

    Ultimately I started this because I wanted to have a simple utility that integrated with xmobar which would inform me whether I had received a new SMS, and would allow me to reply to existing messages or compose an entirely new one. To achieve this I wrote xsms which is a utility that will either print to stdout the number of unread messages (for xmobar) or launch a GUI, depending on the command line switches used.

    Again you can either retrieve xsms from pypi using pip:

        $ pip install xsms
    

    ... or clone the smcl/xsms repo from github and install using setup.py:

        $ git clone https://github.com/smcl/xsms
        $ cd xsms
        $ python setup.py install
    

    Once xsms is installed you can either launch it standalone:

        $ python -m xsms --device=/dev/ttyACM0
    

    And if you want to have a little indicator in your xmobar you can use something like the below (which takes advantage of the ability to specify the font via tags to easily get some icons from Font Awesome):

        -- assumes you have Font Awesome installed and used here:
        -- additionalFonts = ["xft:FontAwesome-10"],
        Run Com "/usr/bin/python" [ "-m", "xsms", "-d", "/dev/ttyACM0", "-p", "1234", "-r", "", "-u", " %d" ] "xsms" 600,

    So when you add %xsms% to your xmobarrc's template and restart you'll see something like this:

    ... and if you want to be able to click the icon to raise the GUI, you can surround it with an <action> which invokes xsms:

      template = "%StdinReader% }{ ... stuff ... <action=`python -m xsms -g -d /dev/ttyACM0 -p 1234`>%xsms%</action> ... "

    Conclusion

    This is an idea that sat half-finished in my ~/dev folder for about 6 months, and I'm really glad that I was able to take it from a single hacky one-shot script to two fully-fledged packages released on PyPI and ready to be used. There is still some additional functionality I'd like to add to em73xx, for example I only grab longitude/latitude from the GPS data and I'd like to be able to reset or power-cycle the modem in case of problems, however it's in pretty good shape overall.

    As for xsms, it's the second Tkinter application I've put together, and while I'm finding it easier each time I write something (with fewer cut-paste lines from effbot.org's Tkinter docs) it's starting to feel a little unwieldy. For example the ttk module allows you to add widgets to your application with a unified theme, but it's missing a multi-line text entry widget - which is something I use for inputting and displaying the SMS messages. This meant I had to either put off styling xsms or add some hacks to customise the Tkinter.Text widget that I ultimately used. In addition, programmatically constructing the UI using the grid() and pack() layout managers feels a little like creating a webpage laid out using nested <table> elements in the late 90's. Ultimately if I find myself writing a Python desktop app in future I'll spend a little more time investigating the frameworks available and weighing them up against Tkinter, now that I'm broadly familiar with it.

    Useful Links

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1084231 2016-09-09T10:00:05Z 2020-10-15T19:08:18Z Discover a Linux Utility - jjs

    To learn more about Debian and Linux in general I'm selecting utilities at random from my PATH using the command below, learning what they do and writing a blog post about it. Previously: Part 1, Part 2

        $ (for folder in `echo $PATH | sed "s/:/\\n/g"`; do ls -1 $folder; done; ) | shuf -n 1 | xargs man
    

    The random command I'm looking at this time is jjs - whose summary is simply "Invokes the Nashorn engine" which is a little vague and as it turns out is underselling things slightly. Nashorn is a Javascript engine written in Java for the Java VM with very neat integration and access to the range of libraries and functionality provided by the JDK.

    While I'm not a Java or Javascript developer by trade I was surprised that I had never seen this pop up on Hacker News or lobste.rs before, and I'm sure many professional Java devs aren't particularly familiar with it either. I was even more surprised by how quick it was to get productive (the man page suggests using the println function, which doesn't exist) since my hazy memories of using Java at university involved fiddling around with the CLASSPATH env variable and launching things in weird ways.

    Entering jjs takes you to a REPL prompt where you can muck around with javascript to your heart's content while you get familiar - here you'll see what I mean about the example in the manpage, println should be print:
        $ jjs
        jjs> println("hello, world!")
        :1 ReferenceError: "println" is not defined
        jjs> print("hello, world!") 
        hello, world!
        jjs> function add(x, y) { return x + y }
        function add(x, y) { return x + y }
        jjs> add(10, 20)
        30
        jjs> function fib(n) { if (n < 1) { return 0 } else if (n <= 2) { return 1 } else { return fib (n - 1) + fib (n - 2)} }
        function fib(n) { if (n < 1) { return 0 } else if (n <= 2) { return 1 } else { return fib (n - 1) + fib (n - 2)} }
        jjs> fib(3)
        2
        jjs> fib(50)
        12586269025
        jjs>
    
    What I really like is that there's a fuss-free interface to the entire JDK through an object conveniently called java:
        jjs> var str = new java.lang.String("Hello, world!")
        jjs> java.lang.System.out.println(str)
        Hello, world!
        jjs> var dict = new java.util.HashMap()
        jjs> dict.put("foo", "bar")
        null
        jjs> dict.put("baf", "baz")
        null
        jjs> dict.forEach(function(k,v) { print(k + v); }) 
        bafbaz
        foobar
    
    That final line pretty much sums up why I think this is cool - having created an instance of a java.util.HashMap we iterate over it using its forEach method, but we can give it a javascript lambda function as an argument to apply to each key/value pair. There's no denying that this is neat :)

    I wanted to do a little performance comparison between Nashorn and the V8 engine used by Chrome and Node.js, but it turns out this has already been done: http://blog.nidi.guru/tech/java/2015/10/05/graphviz-with-java/


    So Nashorn is actually a good deal slower than V8 (and that's with a cold VM, warmed up the difference is more stark) - which isn't a huge problem I suppose unless you're doing some really heavy lifting using Nashorn. I don't think many people are.

    My previous "Discover ..." posts generally had to lay a bit of groundwork to introduce various OS concepts before the utility could be understood. However since Java and Javascript are both pretty commonplace there's no introduction needed, and the best way to show it off and understand what it's capable of is to write some code.

    So I implemented four demo programs which are small enough to comprehend in a minute or so, explore the interesting relationship between Javascript and the JDK, and demonstrate some relatively common use-cases for programming languages:
    1. shell scripting
    2. unix system utilities
    3. visualisation
    4. web services

    Demo 1 - a simple shell script

    I'd recently read a blog post at IBM developerWorks tracing the history of UNIX shells (csh, tcsh, bash, etc) and implementing the same task in each one, so I figured that this was as good a task as any to start with. I implemented this as findexec.js below:


    The code is a little less elegant than the bash version - we rely pretty heavily on java.io.File to accomplish some of the things built into bash, but realistically we're using the wrong tool for the job here.
        $ jjs findexec.js -- /usr/local/bin
        /usr/local/bin/fsv
        /usr/local/bin/n-m
        /usr/local/bin/behave
        /usr/local/bin/tzupdate
        /usr/local/bin/dotnet
    
    

    Demo 2 - a unix-y utility

    The next program I wrote was a unix-style system utility that reverses the order of its input, which is either:
    1. a list of lines read from stdin (piped or typed)
    2. a list of files supplied as arguments, each of which is written out reversed

    This was a little more fun to write - there were a couple of interesting things here. First was the ability to use generics: I could just create a new ArrayList without specifying the type. Second was the polymorphism between the Java stdlib class BufferedReader and a javascript class I wrote, MultipleFileReader, which both happen to implement a readLine() method but don't explicitly implement any common interface or abstract class.
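    That sort of readLine() duck typing translates directly to Python - below is a purely illustrative sketch with hypothetical class names, not the actual code from filerev.js:

```python
# Two unrelated classes that each provide read_line(); the consumer
# doesn't care that they share no interface or base class.
# (All names here are illustrative, not taken from filerev.js.)

class StringReader:
    def __init__(self, text):
        self.lines = text.splitlines()
        self.pos = 0

    def read_line(self):
        if self.pos >= len(self.lines):
            return None
        line = self.lines[self.pos]
        self.pos += 1
        return line

class RepeatReader:
    def __init__(self, line, count):
        self.line, self.count = line, count

    def read_line(self):
        if self.count == 0:
            return None
        self.count -= 1
        return self.line

def reverse_lines(reader):
    # works with anything exposing read_line() -> str or None
    lines = []
    while (line := reader.read_line()) is not None:
        lines.append(line)
    return list(reversed(lines))

print(reverse_lines(StringReader("herp\nderp\nblorp")))  # ['blorp', 'derp', 'herp']
```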

    I implemented this as filerev.js, which is a wee bit long for this blog but can be found at this Gist on GitHub. Below is a little snippet showing its usage:
        $ cat > foo << EOF
        > herp
        > derp
        > blorp
        > EOF
        $ jjs filerev.js -- foo
        blorp
        derp
        herp
        $ cat foo | jjs filerev.js 
        blorp
        derp
        herp

    Demo 3 - a JavaFX utility

    In the manpage I noticed the -fx flag, which "launches the script as a JavaFX application". I hadn't used Java since university so I had no clue what JavaFX was, but it's apparently a set of libraries for writing graphical applications in Java, mostly replacing the Swing/AWT toolkits among other things.

    After I read a bunch of documentation and puddled around a bit I decided that I wanted an application which can quickly produce little line graphs from input piped via stdin (and dynamically resizes according to the max/min values in the dataset).

    This was a little trickier than the previous two examples but after finding the documentation on extending abstract classes (for the AnimationTimer) the rest was surprisingly straightforward. I created a file containing a few repetitions of a sine wave, and piped it in to generate the visualisation below:


    Again, the code is a Gist on GitHub as plotstdin.js along with the input data (and the code that generated it).

    Demo 4 - a web service

    For my final example I wanted to spin up a quick and simple web service, however this is apparently not so straightforward in Java. While the JDK has plenty of libraries available, there's no equivalent to SimpleHTTPServer in Python or http.createServer() in Node.js - it seems you need to use a third-party package like Tomcat or Jetty. This presents a bit of a problem, since I wanted a little self-contained example without having to resolve dependencies using Gradle or Maven, which I'm unfamiliar with and which would be tricky to work with using Nashorn.

    However I found a nice project on GitHub called Nasven which handles most of this - it wraps Maven and lets me easily add a dependency for Spark (a lightweight web framework) and launch it.

    I created a repo called thumbooo which will spin up a simple web service running on port 8080, with a single POST resource that produces thumbnails no larger than 100x100 pixels for submitted images.

    While Nasven handled the toughest part, I still ran into trouble: Nashorn was unable to determine which of the three ImageIO.write() overloads it should call, since the image resizing produced an object of type ToolkitImage where a RenderedImage was expected.

    You can run the following to start the server:

        $ git clone https://github.com/smcl/thumbooo
        Cloning into 'thumbooo'...
        remote: Counting objects: 42, done.
        remote: Compressing objects: 100% (37/37), done.
        remote: Total 42 (delta 18), reused 17 (delta 4), pack-reused 0
        Unpacking objects: 100% (42/42), done.
        Checking connectivity... done.
        $ cd thumbooo
        $ jjs -scripting nasven.js -- src
    

    and in another xterm use cURL to POST an image to the /thumb endpoint:

        $ curl -L -o thumb.png --form "file=@lenna.png" http://localhost:8080/thumb
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  494k    0 32358  100  462k   638k  9353k --:--:-- --:--:-- --:--:-- 9447k
    
    
    This resized the original 512x512 Lenna image ...


    ... to a 100x100 thumbnail:

     

    Conclusion

    I'm impressed with the integration between Javascript and Java that Nashorn provides, as demonstrated by the snippets above it's relatively flexible and easy to use (if a tad slow!). As a technical exercise it's very impressive, however I'm still not 100% sure what it was originally meant to be used for. Perhaps as a non-js/Java developer I'm missing something, but I feel like there are already better options available for individual hackers all the way to Enterprise-scale projects. I certainly had fun using it though!

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1077425 2016-08-12T10:00:00Z 2020-10-15T19:08:26Z Discover a Linux Utility - slabtop

    To learn more about Debian and Linux in general I'm selecting utilities at random from my PATH using the command below, learning what they do and writing a blog post about it. Previously: Part 1

        $ (for folder in `echo $PATH | sed "s/:/\\n/g"`; do ls -1 $folder; done; ) | shuf -n 1 | xargs man
    

    Today's randomly selected utility is called slabtop - according to the man page it should "display kernel slab cache information in real time". I was pretty pleased about this one actually, I had no idea about slab allocators so it was good to have something to explore on a stormy day:

    Put simply, Linux permits us to manage and allocate chunks of memory from a set of fixed-size buffers - called slabs - for different purposes. And slabtop gives us a view into the state of each slab cache: what they're called, how many elements are allocated in total and how full each slab is:

    So for the highlighted row, we know the following information:

    - name ("kmalloc-32")

    - number of slabs allocated (163)

    - number of objects allocated (20007)

    - percentage of slab space allocated (98%)

    - total size of the slab space allocated (652K)

    ... and so on.

    What is slab allocation

    So what does all this mean and why do we need it? During the execution of some program we may have requested and released lots of differently sized objects from the heap. This could result in a pretty fragmented heap.

    This could mean that allocating new buffers is either a little slow or unpredictable, since we need to search for an appropriately sized buffer before we can mark it as allocated and return it to the calling code. If any part of our application depends on a particular object being allocated quickly and predictably this is not ideal. 

    Slab allocation involves setting up a slab which can hold a certain number of objects of the same size and type. Since we're dealing with fixed-size objects, allocating space for a new object is quick - we just need to scan through an array to find an unused slot, mark it as used, then calculate the address (start_addr + size * index) and return it.

    We could roll our own since it's pretty straightforward to implement - and that would be quite an interesting project for an evening or so. However if we're writing Linux kernel or module code there's already an existing set of functions which are well understood and battle-hardened. In addition these functions hide a lot of underlying complexity by abstracting a lot of the memory management (we can actually have numerous slabs, but Linux manages this for us).
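    To show how cheap that fixed-size-slot bookkeeping is, here's a toy allocator in Python - purely a sketch of the idea, with made-up addresses; the kernel's real implementation is far more sophisticated:

```python
# Toy fixed-size-slot allocator - illustrative only, not how the
# kernel actually implements slabs.
class Slab:
    def __init__(self, obj_size, capacity, base_addr=0x1000):
        self.obj_size = obj_size
        self.base_addr = base_addr
        self.used = [False] * capacity   # one "in use" flag per slot

    def alloc(self):
        # scan for a free slot, mark it used, compute its address
        for index, in_use in enumerate(self.used):
            if not in_use:
                self.used[index] = True
                return self.base_addr + self.obj_size * index
        return None  # slab is full

    def free(self, addr):
        # the address directly encodes the slot index
        index = (addr - self.base_addr) // self.obj_size
        self.used[index] = False

slab = Slab(obj_size=32, capacity=4)
a = slab.alloc()   # 0x1000 (slot 0)
b = slab.alloc()   # 0x1020 (slot 1)
slab.free(a)
c = slab.alloc()   # slot 0 is reused: 0x1000 again
```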

    How Linux supports slab allocation

    To start with, if we want to create a new slab cache (a managed collection of slabs) we can use kmem_cache_create:

        struct kmem_cache *kmem_cache_create(
    	    const char *,      // name of cache
    	    size_t,            // size of each object
    	    size_t,            // object alignment
    	    unsigned long,     // any special flags
    	    void (*)(void *)   // slab constructor
        );
    

    To allocate a new object from this cache we can use kmem_cache_alloc:

        void *kmem_cache_alloc( 
        	struct kmem_cache *, // ptr to the cache 
        	gfp_t flags          // flags to use if we need new slab
        );
    

    Then when we need to free any, kmem_cache_free which will mark the space as unused and make it available to the allocator:

        void kmem_cache_free(
        	struct kmem_cache *, // ptr to the cache
        	void *               // ptr to object to free
        );
    

    And finally when we're done and we want to remove the cache entirely there is kmem_cache_destroy which will free and release all the slabs related to a given cache:

    	void kmem_cache_destroy(
    		struct kmem_cache *  // ptr to the cache
    	);
    

    How to use slab allocation

    As an example of how this could be used we can think about it in the context of a simple kernel module. I'll roughly go over the individual parts, and then present a full-blown module to explore. Firstly we'll want to define the type we want to create the slab cache for - imaginatively named slabthing - and use it to declare a variable s:

        typedef struct
        {
          char foo; 
          char bar; 
          char baz;
        } slabthing;
    
        slabthing *s;
    

    Inside our module's init_module we can use kmem_cache_create - our slab cache will be called "slabdemo" and it'll allocate objects of the required size. Since I don't really care much about performance (specifying an appropriate alignment could permit us to use different/faster load instructions) we'll just request single-byte alignment.

        struct kmem_cache *slabthing_cache;
    
        int init_module(void)
        {
          slabthing_cache = kmem_cache_create("slabdemo", sizeof(slabthing), 1, 0, NULL);
          // other module stuff...
        }
    

    If our module wants to allocate space for the variable s in our cache we can call kmem_cache_alloc:

        s = kmem_cache_alloc(slabthing_cache, GFP_KERNEL);
    

    When we want to free the cache, in this case in our module cleanup code, we can call kmem_cache_free and kmem_cache_destroy. I don't think the free is necessary in this case, but I've just included it anyway:

        void cleanup_module(void) {
            kmem_cache_free(slabthing_cache, s);
            kmem_cache_destroy(slabthing_cache);
         }
    

    To see this in action I've created a little demo module that creates a cache and then allocates 128 objects when it's loaded - then frees all the objects and destroys the cache when it's unloaded. You'll need to make sure you have the sources checked out for your current kernel inside /usr/src/linux-<version>:

        $ git clone http://github.com/smcl/slabdemo
        $ cd slabdemo
        $ make 
    

    If our module builds successfully we can load it using insmod and then fire up slabtop to see how things look:

        $ sudo insmod slabdemo.ko
        $ sudo slabtop

    So there it is - our slabdemo cache is there, with 128 objects allocated in a single slab which can fit 240 objects in total (peculiar number!). If you check out slabtop you can easily see things like kmalloc-512, kmalloc-1024 and kmalloc-2048, which are probably slab caches used by kmalloc for allocating memory pools of 512, 1024 and 2048 bytes respectively. In any case, that's another issue for another day. Obviously this is a tool designed for kernel developers, or for sysadmins of particularly performance-critical systems, but I'm glad I took the time to potter around with it.
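    The way kmalloc picks one of those caches can be imagined as rounding each request up to the nearest size class - the sketch below is hypothetical, not the kernel's actual logic:

```python
# Hypothetical sketch of kmalloc-style size-class selection - just
# the rounding-up idea, not the real kernel implementation.
SIZE_CLASSES = [32, 64, 128, 256, 512, 1024, 2048]

def pick_cache(size):
    # return the name of the smallest cache that fits the request
    for cls in SIZE_CLASSES:
        if size <= cls:
            return "kmalloc-%d" % cls
    return None  # too large for these caches

print(pick_cache(100))   # kmalloc-128
print(pick_cache(600))   # kmalloc-1024
```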
    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1073706 2016-08-05T10:00:01Z 2020-06-23T21:24:09Z Python - Efficient String Concatenation in Python (2016 edition)

    After working through Philip Guo's excellent series of lectures on the CPython VM I found myself looking at the performance of string concatenation. Strings are immutable in python - so they cannot be modified in-place, you have to create a new string. This also affects string concatenation, so if we want to tack one string onto the end of another we might have some code like this:

        x = "hello"
        x += ", world!"
    

    When the CPython interpreter is running the above code, it's doing something conceptually similar to the following:

    1. allocate a buffer for the string "hello" 6 bytes => len("hello") + 1, and copy the string into it
    2. make x reference this
    3. allocate a new buffer with length 14 bytes => len("hello") + len(", world!") + 1
    4. copy "hello", then ", world!" into this new buffer, and make x reference it
    5. decrement reference on the old string, possibly free it

    Note that even though Python, unlike C, doesn't rely on a trailing NULL to terminate strings (we store each string's length), it does by convention still null-terminate them - which is where the extra "+ 1" byte comes from.
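    The immutability is easy to observe directly - after a +=, any other reference to the original string still sees the old value:

```python
s = "hello"
t = s              # a second reference to the same string object
s += ", world!"    # builds a brand new string for s to refer to
print(t)           # hello  (the original object is untouched)
print(s)           # hello, world!
```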

    Anyway, if we have to repeatedly join strings together we have to be careful how we do it, just in case we accidentally introduce some slow, inefficient code. This has been long understood in the python community, for instance here is an article from 2004 which explored different methods of concatenating strings, and compared their performance. 

    I was curious if there was any difference in how things shaped up on modern hardware (the original test used an old 400 MHz Celeron) and a newer version of Python (it used python 2.2.1; the latest version of python 2 is 2.7.12) - so I grabbed the source code and started playing around.

    It needed a tiny bit of tweaking however - the timing module it uses doesn't seem to exist in the standard library and the current version in pypi isn't compatible. I modified it to use the time module; the source code is here on GitHub if you're interested: stest-new.py

    As a quick recap, there are 6 methods of concatenating strings which we're putting under the microscope:

    Method 1: simple concatenation: s1 += s2

    Method 2: concatenation using MutableString (s1 += s2, where s1 is a MutableString)

    Method 3: appending to a long array of char

    Method 4: building a list of strings, then calling "".join()

    Method 5: writing to a cStringIO buffer in memory using write()

    Method 6: same as Method 4 but using a list comprehension inline
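    The original stest.py is Python 2 code; as a rough modern sketch, Methods 1, 4 and 6 look like this (MutableString and cStringIO don't exist in Python 3, so Methods 2 and 5 are omitted, and the names here are mine rather than the original script's):

```python
# Rough Python 3 re-creation of Methods 1, 4 and 6 from stest.py.
import timeit

LOOP_COUNT = 10000

def method1():
    # naive concatenation: out_str += piece
    out_str = ''
    for num in range(LOOP_COUNT):
        out_str += str(num)
    return out_str

def method4():
    # build a list of strings, then "".join() it
    str_list = []
    for num in range(LOOP_COUNT):
        str_list.append(str(num))
    return ''.join(str_list)

def method6():
    # same as method4, but with an inline list comprehension
    return ''.join([str(num) for num in range(LOOP_COUNT)])

# sanity check: all three build the same string
assert method1() == method4() == method6()

for fn in (method1, method4, method6):
    ms_per_call = timeit.timeit(fn, number=100) * 1000 / 100
    print("%s: %.2f ms" % (fn.__name__, ms_per_call))
```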

    The original tests were performed using Python 2.2.1, for comparison's sake I've re-run them on my computer just to see:

        $ for i in {1..6}; do python stest.py $i; done

    The results for Python 2.2.1 are below:

              runtime (ms)   concatenations per second
    Method 1  55.11          362,910
    Method 2  74.67          267,852
    Method 3  10.80          1,851,337
    Method 4  6.21           3,220,611
    Method 5  8.11           2,467,612
    Method 6  5.42           3,694,808

    So in Python 2.2.1 the list comprehension method was the fastest by a pretty clear margin. However when I re-ran using 2.7.12 things turned out very differently:

              runtime (ms)   concatenations per second
    Method 1  1.7995         11,113,977
    Method 2  90.1073        221,957
    Method 3  3.9557         5,055,967
    Method 4  2.1804         9,172,689
    Method 5  4.8047         4,162,585
    Method 6  1.4191         14,093,289

    In the time since 2.2.1 the performance of the naïve string concatenation method has improved hugely - it's now the fastest method (note: this is using a Python interpreter I built; using the Python 2.7.12 package that comes with Debian it's actually the fastest). This is surprising, since I thought it was relatively well-established and understood that it was slow, or just not ... pythonic. I was curious exactly why Method 6 was now slower than Method 1, so I disassembled the functions using the dis module.
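    For reference, producing a listing like the ones below just takes a call to dis.dis() - the function here is a stand-in for the one in stest.py, not its actual code:

```python
import dis

def concat_naive(loop_count):
    # the += below is what compiled to INPLACE_ADD on Python 2
    out_str = ''
    for num in range(loop_count):
        out_str += str(num)
    return out_str

dis.dis(concat_naive)  # prints a bytecode listing to stdout
```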

    There were a number of SET_LINENO instructions in the 2.2.1 version which I've not shown - it makes the disassembly a little easier to read and the performance impact would have been negligible - when tracing is turned off (which it is) all this instruction did was set the current frame's f_lineno and continue executing the next instruction.

    Disassembly - Method 1 

    Python 2.2.1:
     6 LOAD_CONST               1 ('')
     9 STORE_FAST               0 (out_str)
    15 SETUP_LOOP              37 (to 55)
    18 LOAD_GLOBAL              1 (xrange)
    21 LOAD_GLOBAL              2 (loop_count)
    24 CALL_FUNCTION            1
    27 GET_ITER            
    31 FOR_ITER                20 (to 54)
    34 STORE_FAST               1 (num)
    40 LOAD_FAST                0 (out_str)
    43 LOAD_FAST                1 (num)
    46 UNARY_CONVERT       
    47 INPLACE_ADD         
    48 STORE_FAST               0 (out_str)
    51 JUMP_ABSOLUTE           28
    54 POP_BLOCK           
    58 LOAD_FAST                0 (out_str)
    61 RETURN_VALUE              
    
    Python 2.7.12:
 0 LOAD_CONST               1 ('')
     3 STORE_FAST               0 (out_str)
     6 SETUP_LOOP              31 (to 40)
     9 LOAD_GLOBAL              0 (xrange)
    12 LOAD_GLOBAL              1 (loop_count)
    15 CALL_FUNCTION            1
    18 GET_ITER            
    19 FOR_ITER                17 (to 39)
    22 STORE_FAST               1 (num)
    25 LOAD_FAST                0 (out_str)
    28 LOAD_FAST                1 (num)
    31 UNARY_CONVERT       
    32 INPLACE_ADD         
    33 STORE_FAST               0 (out_str)
    36 JUMP_ABSOLUTE           19
    39 POP_BLOCK           
    40 LOAD_FAST                0 (out_str)
    43 RETURN_VALUE             
    So pretty much identical, which I half-expected - even though we're looking at two versions of Python released 14 years apart, the compiler doesn't seem to have received a great deal of changes, so the generated bytecode shouldn't change much for simple code.

    Disassembly - Method 6

    Python 2.2.1:
     6 LOAD_CONST               1 ('') 
     9 LOAD_ATTR                0 (join)
    12 BUILD_LIST               0
    15 DUP_TOP             
    16 LOAD_ATTR                1 (append)
    19 STORE_FAST               0 (_[1])
    22 LOAD_GLOBAL              3 (xrange)
    25 LOAD_GLOBAL              4 (loop_count)
    28 CALL_FUNCTION            1
    31 GET_ITER            
    35 FOR_ITER                17 (to 55)
    38 STORE_FAST               2 (num)
    41 LOAD_FAST                0 (_[1])
    44 LOAD_FAST                2 (num)
    47 UNARY_CONVERT       
    48 CALL_FUNCTION            1
    51 POP_TOP             
    52 JUMP_ABSOLUTE           32
    55 DELETE_FAST              0 (_[1])
    58 CALL_FUNCTION            1
    61 STORE_FAST               1 (out_str)
    67 LOAD_FAST                1 (out_str)
    70 RETURN_VALUE        
    
    Python 2.7.12:
 0 LOAD_CONST               1 ('')
     3 LOAD_ATTR                0 (join)
     6 BUILD_LIST               0
     9 LOAD_GLOBAL              1 (xrange)
    12 LOAD_GLOBAL              2 (loop_count)
    15 CALL_FUNCTION            1
    18 GET_ITER            
    19 FOR_ITER                13 (to 35)
    22 STORE_FAST               0 (num)
    25 LOAD_FAST                0 (num)
    28 UNARY_CONVERT       
    29 LIST_APPEND              2
    32 JUMP_ABSOLUTE           19
    35 CALL_FUNCTION            1
    38 STORE_FAST               1 (out_str)
    
    41 LOAD_FAST                1 (out_str)
    44 RETURN_VALUE        

    There are slightly more differences when comparing how 2.2.1 and 2.7.12 generate the bytecode for the list comprehension method. However, aside from a couple of quirks in the 2.2.1 version (I'm not sure why we call DUP_TOP on the list we created, and I have no idea what _[1] is) much of the bytecode is broadly the same - we produce a list of integers by applying CALL_FUNCTION to xrange with argument loop_count, then iterate over the results, calling UNARY_CONVERT on each and assembling either a list or string using INPLACE_ADD or LIST_APPEND.

    Since the generated bytecode contains no substantial differences, if we want to understand why the naive concatenation method (which uses INPLACE_ADD) became super-fast over the last 14 years we'll need to inspect how the Python VM interprets this code.

    Analysis - INPLACE_ADD in Python 2.2.1

    To save some space and time I'll skip straight to the meat and bones of where the actual string concatenation occurs - which is in Objects/stringobject.c, in the string_concat() function. It's quite small, so I've included it below - but I've stripped some macros for easier reading:

    static PyObject *
    string_concat(register PyStringObject *a, register PyObject *bb)
    {
            register unsigned int size;
            register PyStringObject *op;
            if (!PyString_Check(bb)) {
                    PyErr_Format(PyExc_TypeError,
                                 "cannot concatenate 'str' and '%.200s' objects",
                                 bb->ob_type->tp_name);
                    return NULL;
            }
    #define b ((PyStringObject *)bb)
            /* Optimize cases with empty left or right operand */
            if ((a->ob_size == 0 || b->ob_size == 0) &&
                PyString_CheckExact(a) && PyString_CheckExact(b)) {
                    if (a->ob_size == 0) {
                            Py_INCREF(bb);
                            return bb;
                    }
                    Py_INCREF(a);
                    return (PyObject *)a;
            }
            size = a->ob_size + b->ob_size;
            /* PyObject_NewVar is inlined */
            op = (PyStringObject *)
                    PyObject_MALLOC(sizeof(PyStringObject) + size * sizeof(char));
            if (op == NULL)
                    return PyErr_NoMemory();
            PyObject_INIT_VAR(op, &PyString_Type, size);
            memcpy(op->ob_sval, a->ob_sval, (int) a->ob_size);
            memcpy(op->ob_sval + a->ob_size, b->ob_sval, (int) b->ob_size);
            op->ob_sval[size] = '\0';
            return (PyObject *) op;
    #undef b
    }
    

    This is pretty much exactly the algorithm I described in the first paragraph - after a couple of checks to ensure we're definitely dealing with strings, we malloc a new buffer and memcpy both strings into it.

    Analysis - INPLACE_ADD in Python 2.7.12

    Again I'll skip straight to where the concatenation occurs - which is in Python/ceval.c in the string_concatenate() function: 

    static PyObject *
    string_concatenate(PyObject *v, PyObject *w,
                       PyFrameObject *f, unsigned char *next_instr)
    {
        /* This function implements 'variable += expr' when both arguments                         
           are strings. */
        Py_ssize_t v_len = PyString_GET_SIZE(v);
        Py_ssize_t w_len = PyString_GET_SIZE(w);
        Py_ssize_t new_len = v_len + w_len;
        if (new_len < 0) {
            PyErr_SetString(PyExc_OverflowError,
                            "strings are too large to concat");
            return NULL;
        }
    
        if (v->ob_refcnt == 2) {
            /* In the common case, there are 2 references to the value                             
             * stored in 'variable' when the += is performed: one on the                           
             * value stack (in 'v') and one still stored in the                                    
             * 'variable'.  We try to delete the variable now to reduce                            
             * the refcnt to 1.                                                                    
             */
            switch (*next_instr) {
            case STORE_FAST:
            {
                int oparg = PEEKARG();
                PyObject **fastlocals = f->f_localsplus;
                if (GETLOCAL(oparg) == v)
                    SETLOCAL(oparg, NULL);
                break;
            }
            case STORE_DEREF:
            {
                PyObject **freevars = (f->f_localsplus +
                                       f->f_code->co_nlocals);
                PyObject *c = freevars[PEEKARG()];
                if (PyCell_GET(c) == v)
                    PyCell_Set(c, NULL);
                break;
            }
            case STORE_NAME:
            {
                PyObject *names = f->f_code->co_names;
                PyObject *name = GETITEM(names, PEEKARG());
                PyObject *locals = f->f_locals;
                if (PyDict_CheckExact(locals) &&
                    PyDict_GetItem(locals, name) == v) {
                    if (PyDict_DelItem(locals, name) != 0) {
                        PyErr_Clear();
                    }
                }
                break;
            }
            }
        }
    
        if (v->ob_refcnt == 1 && !PyString_CHECK_INTERNED(v)) {
            /* Now we own the last reference to 'v', so we can resize it                           
             * in-place.                                                                           
             */
            if (_PyString_Resize(&v, new_len) != 0) {
                /* XXX if _PyString_Resize() fails, 'v' has been                                   
                 * deallocated so it cannot be put back into                                       
                 * 'variable'.  The MemoryError is raised when there                               
                 * is no value in 'variable', which might (very                                    
                 * remotely) be a cause of incompatibilities.                                      
                 */
                return NULL;
            }
            /* copy 'w' into the newly allocated area of 'v' */
            memcpy(PyString_AS_STRING(v) + v_len,
                   PyString_AS_STRING(w), w_len);
            return v;
        }
        else {
            /* When in-place resizing is not an option. */
            PyString_Concat(&v, w);
            return v;
        }
    }
    

    This one is a little more complex because there are a couple of neat optimisations. 

    Firstly, if the next instruction to be executed is one of a handful of store opcodes (STORE_FAST, STORE_DEREF, STORE_NAME) we pre-emptively drop the reference that the destination variable still holds to the string, so that only the reference on the value stack remains.

    More importantly, if there's only a single reference left to the destination string we attempt to resize it in-place using _PyString_Resize() instead of allocating an entirely new buffer. We can confirm this is where the speedup comes from by forcing the condition to be false:

        //if (v->ob_refcnt == 1 && !PyString_CHECK_INTERNED(v)) {                                  
        if (0) {
            /* Now we own the last reference to 'v', so we can resize it                           
             * in-place.                                                                           
             */
    

    If we build a Python 2.7.12 interpreter without this resize optimisation and re-run the benchmark:

    Method      runtime (ms)    concatenations per second
    Method 1    48.7949         409,878
    Method 2    89.2184         224,168
    Method 3    3.9204          5,101,535
    Method 4    2.1489          9,307,231
    Method 5    4.8215          4,148,073
    Method 6    1.4203          14,081,697

    We're right back where we started: Method 1 is once again the second-slowest, and the list comprehension used in Method 6 outperforms everything else.
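    The earlier methods aren't reproduced in this excerpt, but the two bookends look roughly like the following sketch (the exact strings and iteration counts of the original benchmark may differ):

```python
import timeit

def method1(n=10000):
    # naive += concatenation; on an optimised CPython build the
    # buffer behind `out` is usually resized in-place
    out = ""
    for i in range(n):
        out += "num " + str(i)
    return out

def method6(n=10000):
    # build every piece up front and join once at the end
    return "".join(["num " + str(i) for i in range(n)])

if __name__ == "__main__":
    for fn in (method1, method6):
        ms = timeit.timeit(fn, number=100) * 1000 / 100
        print("%s: %.4f ms per run" % (fn.__name__, ms))
```

    Both produce identical output, so any difference in timing is purely down to how the intermediate strings are managed.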

    Conclusion

    In the years since the original benchmark was performed, an optimisation has been introduced that improves how the CPython VM handles string concatenation. In the context of this specific benchmark it means that if you need to concatenate strings you can go with the naive, slightly more intuitive implementation and get performance very close to the recommended, more Pythonic one.

    This very likely doesn't apply in all cases - if there are lots of other objects being allocated on the heap, the string's buffer won't always be extendable in place and will have to be moved to a new location, so the naive approach will slow down quickly - but it's an interesting and surprising result nonetheless.


    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1070787 2016-07-15T10:00:04Z 2016-08-03T22:08:09Z Discover a Linux Utility - ischroot

    When I open up dmenu and start typing out the program I want to open it provides some autocomplete suggestions based on the programs in my PATH.

    There are hundreds of these, most of which I have never heard of before in my life, and I realised that without a concerted effort I'd never end up learning anything about most of them - so I wrote a quick one-liner to choose a random utility from my PATH and open up the man page:

        $ (for folder in `echo $PATH | sed "s/:/\\n/g"`; do ls -1 $folder; done; ) | shuf -n 1 | xargs man
    This works by splitting my PATH into separate lines using sed, listing the contents of each folder with ls -1, selecting one entry from the whole lot using shuf, and attempting to open its manpage using man.
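    For comparison, the same idea as a short Python script (a sketch - path_programs and random_manpage are names I've made up):

```python
import os
import random
import subprocess

def path_programs():
    # collect every program name from the folders on $PATH,
    # skipping entries that aren't actually directories
    names = set()
    for folder in os.environ.get("PATH", "").split(":"):
        if os.path.isdir(folder):
            names.update(os.listdir(folder))
    return sorted(names)

def random_manpage():
    # pick one program at random and open its man page
    subprocess.call(["man", random.choice(path_programs())])
```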

    The first one I picked up was ...

    Oh nice - ischroot. Well as the manpage says, this detects if we're currently running in a chroot. OK another quick mystery solved...

    However that doesn't exactly help us understand the whole picture. So let's start by asking "what is a chroot?" Well, say we have a system with a pretty straightforward set of folders in its root directory, like the following:

        $ echo $PATH
        /usr/bin:/bin
        $ ls /
        bin  boot  dev  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  vmlinuz
    

    So when we run a command like ls, the shell will look for the utility in the two folders in our PATH (/bin and /usr/bin), then execute it against the root directory of our file system. If we want, however, we can do a little trickery and use the chroot utility so that when ls runs it sees an entirely different root altogether.
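    The same trickery is available programmatically too - Python's os.chroot is a thin wrapper around the chroot(2) syscall. A minimal sketch (run_in_chroot is my own helper name; it needs root, and forks because there's no clean way back out of a chroot):

```python
import os

def run_in_chroot(new_root, fn):
    # fork so only the child is confined - the parent's root
    # directory is left untouched
    pid = os.fork()
    if pid == 0:
        try:
            os.chroot(new_root)   # needs CAP_SYS_CHROOT (usually root)
            os.chdir("/")         # don't leave cwd outside the new root
            fn()
            os._exit(0)
        except OSError:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)  # 0 = fn ran chrooted, 1 = chroot failed
```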

    To try this we'll need to prepare our fake root directory:

        $ sudo mkdir /opt/fakeroot
        $ sudo debootstrap --arch i386 jessie /opt/fakeroot http://httpredir.debian.org/debian
        I: Retrieving Release 
        I: Retrieving Release.gpg 
        I: Checking Release signature
        I: Valid Release signature (key id 75DDC3C4A499F1A18CB5F3C8CBF8D6FD518E17E1)
        ... many lines later ...
        I: Configuring tasksel...
        I: Configuring tasksel-data...
        I: Configuring libc-bin...
        I: Configuring systemd...
        I: Base system installed successfully.
        $ ls /
        bin  boot  dev  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  vmlinuz
        $ ls /opt/fakeroot/
        bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
    

    OK so we now have a similar looking set of folders in /opt/fakeroot that we'd have in a freshly built Debian Jessie system - however we can run ls / using the chroot command so that it sees /opt/fakeroot as its root directory:

        $ sudo chroot /opt/fakeroot ls /
        bin  boot  dev	etc  home  lib	media  mnt  opt  proc  root  run  sbin	srv  sys  tmp  usr  var
    

    OK nice - I can trick a process I've launched into thinking that any directory is the root. However, if I'm the one launching processes like ls via chroot, surely I have no need for a tool like ischroot? Well, the painful truth is that you may be running in a chroot "jail" without even knowing it. Some crafty (or security-conscious) sysadmin may have set the system up so that, for example, your session runs inside a chroot when you connect via SSH. So, assuming you've been entrusted with sudo privileges, you can run ischroot to find out whether you are.

    So, just to replay the man page information: the return value of ischroot should indicate whether we're running inside a chroot - the possible values are:

    value   meaning
    0       running in a chroot
    1       not running in a chroot
    2       error occurred
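    ischroot's heuristic can be approximated in Python by comparing our root directory with PID 1's (a sketch - probably_in_chroot is my own name; the None case corresponds to ischroot's exit code 2, since reading /proc/1/root needs root):

```python
import os

def probably_in_chroot():
    # in a chroot, "/" and the real root are different inodes
    try:
        ours = os.stat("/")
        init = os.stat("/proc/1/root/.")
    except OSError:          # no /proc, or not allowed to peek at PID 1
        return None          # "maybe" - the equivalent of exit code 2
    return (ours.st_dev, ours.st_ino) != (init.st_dev, init.st_ino)
```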

    So to test this out, let's run ischroot outside a chroot jail ...

        root@hanoi:/opt# ischroot
        root@hanoi:/opt# echo $?
        1

    Cool, so that is exactly as expected. So if we switch over to our newly created chroot and run the same command...

        root@hanoi:/opt# sudo chroot /opt/fakeroot
        root@hanoi:/# ischroot
        root@hanoi:/# echo $?
        2
    

    Ah, so instead of returning 0 we got 2 - indicating an error. After a bit of messing around it turns out we've hit Debian bug #685034 - "debianutils: ischroot fails to detect if it is running in a chroot". So while we can determine that we definitely are not running in a chroot (a return value of 1), a return value of 2 has to be read as a "maybe". Since "maybe" isn't always good enough, we can force ischroot to interpret "maybe" as "yes" using the -t flag:

        root@hanoi:/# ischroot -t
        root@hanoi:/# echo $?
        0
    

    So not exactly a great start! However I am glad I learned about creating a chroot jail, even if ischroot isn't working as expected. Hopefully the next utility I pick up works a little better (and involves a smaller writeup).

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1067335 2016-07-08T10:00:02Z 2020-08-30T18:35:37Z Python - using a DualShock 4 with pygame

    The Playstation 4 came out in 2013, shipping with the DualShock 4 controller. I bought one in the midst of a particularly tough hangover, and when it arrived two days later the part that most intrigued me was the controller. It's quite an amazing piece of equipment - I originally wanted to claim that the controller alone sported higher specs than the original PSX, and while its CPU is beefier (ARM Cortex M3 @ 160MHz vs MIPS R3000 @ 33.8MHz), the controller only has a hundred or so KB of RAM versus the PSX's 2MB.

    That said, the interfaces are pretty cool - in addition to the standard d-pad and the X, O, triangle and square buttons there are a couple of analogue sticks, four shoulder buttons (two of them analogue) and a handful of accelerometers for motion sensing. All in all a nice little controller - no wonder mine needs charging every couple of days.

    I thought that it'd be nice to play with some of these sensors and write OpenGL code in python. So I put together a quick demo - a 3D cube which is controlled by the accelerometers. 

    If we want to read events from the controller we can use pygame, which handles a lot of the fiddly parts (so we don't have to mess with something like evdev directly). Assuming we have only one controller connected, we can do this:

        import pygame
        controller = pygame.joystick.Joystick(0)
        controller.init()
    

    What we'll do is repeatedly look for events, and then save the data we've read back into a couple of dictionaries - this roughly looks like the below:

        axis = {}
        button = {}
    
        # these are the identifiers for the PS4's accelerometers
        AXIS_X = 3
        AXIS_Y = 4

        # variables we'll store the rotations in, initialised to zero
        rot_x = 0.0
        rot_y = 0.0

        # main loop
        while True:
            # copy rot_x/rot_y into axis[] in case we don't read any
            axis[AXIS_X] = rot_x
            axis[AXIS_Y] = rot_y

            # retrieve any events ...
            for event in pygame.event.get():
                if event.type == pygame.JOYAXISMOTION:
                    axis[event.axis] = round(event.value, 2)
                elif event.type == pygame.JOYBUTTONDOWN:
                    button[event.button] = True
                elif event.type == pygame.JOYBUTTONUP:
                    button[event.button] = False

            rot_x = axis[AXIS_X]
            rot_y = axis[AXIS_Y]

            # do something with this ...

    It's important to note that we don't necessarily receive updated values each time we call pygame.event.get() - so we need to be careful about how we handle this.

    Now that we've got some rotation values from the controller, we can use them to render and rotate a simple cube on screen:

        glPushMatrix()   
        glRotatef(roty_scale * rot_y, 1, 0, 0)
        glRotatef(rotx_scale * rot_x, 0, 0, 1)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glBegin(GL_LINES)
        for edge in edges:
            for vertex in edge:
                glColor3fv((1,1,1))
                glVertex3fv(vertices[vertex])
        glEnd()
        glPopMatrix()

    However I could only see accelerometers for two axes - to rotate through the third axis I wanted to use the L2 and R2 shoulder buttons. These turned out to be funny beasts. After a bit of experimenting I noticed an extremely weird quirk: joystick input in pygame can return either analogue ("axis") or boolean ("button") data, and the L2 and R2 buttons are unique in that they report both. The peculiar thing is how these values change based on how far the button is pressed - the below shows how the axis/button data changes as you press the shoulder button:

    There's an ambiguity between the button being halfway and fully pressed. I'm not sure whether this is a bug or just a missing feature in pygame. However, since I was only using the buttons to control rotation, I was able to carefully select a scaling value (1.0 = 90 degrees) to apply to the value we read so that the ambiguity is never visible. Computer graphics has a long and storied history of careful hacks that give the illusion of something working, so I was happy to let this slide!
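    Assuming the trigger axis runs from -1.0 at rest to 1.0 fully pressed (which is what mine reported), the scaling boils down to a remap and a multiply - a sketch, with a function name I've made up:

```python
def trigger_to_degrees(axis_value, scale=90.0):
    # remap [-1.0, 1.0] to [0.0, 1.0], then scale so that a
    # full press corresponds to `scale` degrees of rotation
    pressed = (axis_value + 1.0) / 2.0
    return pressed * scale
```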

    I added a couple more things (shifting the cube around using the touchpad, lighting the cube depending on the buttons pressed) and the results are in this gist if anyone's curious.

    Finally I wanted to use the lightbar - each DualShock 4 has an array of Red, Green and Blue LEDs which can be set independently to 256 different brightness levels. This is put to neat use in things like Grand Theft Auto - when the police are on you the lightbar flashes blue and red.

    Sadly there doesn't appear to be any way to handle this through the pygame interface, and when I tried dropping to the lower-level evdev interface the leds() method didn't return anything.

    I did however notice that a couple of devices appeared in the /sys/class/leds directory, so I could play around with them by writing values from 0 to 255 to these devices. Sadly they're not so easy to detect (the device name changes each time the controller is connected) so I have to find the device names using a slightly nasty regex:
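    The snippet itself is missing from the archived post, but a rough reconstruction of the idea might look like this (the :red/:green/:blue suffix convention is an assumption about how the driver names the lightbar LEDs):

```python
import glob
import re

# the prefix changes every time the controller reconnects, so match
# only on the colour suffix of each entry under /sys/class/leds
LED_PATTERN = re.compile(r":(red|green|blue)$")

def ds4_led_paths():
    # return any lightbar LED device paths currently present
    return sorted(p for p in glob.glob("/sys/class/leds/*")
                  if LED_PATTERN.search(p))
```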

     

    ]]>
    Sean McLemon
    tag:blog.mclemon.org,2013:Post/1042278 2016-06-03T10:00:03Z 2020-10-15T19:09:06Z XMonad - NetworkManager menu using xmobar and Tkinter (xnm)

    I recently moved to using XMonad semi-fulltime, which is working out nicely most of the time. The one sticking point is that when I tried to work somewhere other than my flat or my office I had to drop to the command line and use nmcli to connect to wifi or to enable my builtin 4G modem.

    This is less than ideal - there doesn't appear to be any simple NetworkManager popup/interface I could integrate easily with my xmobar setup. I wanted a little icon that launches a UI allowing me to choose the wifi network to connect to or to switch on my 4G modem.

    To address this I put together a little python app using Tkinter which updates network connectivity settings through NetworkManager's D-Bus API. The app is launched by clicking an area on Xmobar containing the dynnetwork widget beside a neat little old-school TTY icon:

    Clicking this area will raise a menu like the below - listing WiFi and Modem interfaces

    Above you can see that /dev/cdc-wdm0 is connected to my "Vodafone CZ" connection - there's a little chain icon beside it. Clicking on this connection would disconnect it. Selecting one of the WiFi networks would either connect automatically (if it was open) or raise a popup asking for a password (if it was password-protected).

    To achieve this you need to do a couple of simple things. Firstly, ensure that the necessary dependencies are installed and check out the code:

        $ sudo apt-get install python-tk python-networkmanager
        $ git clone https://github.com/smcl/xnm
    

    Then ensure your xmobar template has the following line, which will display the dynnetwork info beside a TTY (assuming Font Awesome is installed as Additional Font #1 in xmobar):

    <action=`/home/sean/.xmonad/xnm.py`>%dynnetwork% <fn=1></fn></action>
    

    And that's it!

    ]]>
    Sean McLemon