Sunday, December 17, 2017

Announcing the 'debug' package

Haskell is a great language, but debugging Haskell is undoubtedly a weak spot. To help with that problem, I've just released the debug library. This library is intended to be simple and easy to use for a common class of debugging tasks, without solving everything. As an example, let's take a function we are interested in debugging, e.g.:

module QuickSort(quicksort) where
import Data.List

quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (x:xs) = quicksort lt ++ [x] ++ quicksort gt
    where (lt, gt) = partition (<= x) xs

Turn on the TemplateHaskell and ViewPatterns extensions, import Debug, indent your code and place it under a call to debug, e.g.:

{-# LANGUAGE TemplateHaskell, ViewPatterns #-}
module QuickSort(quicksort) where
import Data.List
import Debug

debug [d|
   quicksort :: Ord a => [a] -> [a]
   quicksort [] = []
   quicksort (x:xs) = quicksort lt ++ [x] ++ quicksort gt
       where (lt, gt) = partition (<= x) xs
   |]

We can now run our debugger with:

$ ghci QuickSort.hs
GHCi, version 8.2.1: http://www.haskell.org/ghc/  :? for help
[1 of 1] Compiling QuickSort        ( QuickSort.hs, interpreted )
Ok, 1 module loaded.
*QuickSort> quicksort "haskell"
"aehklls"
*QuickSort> debugView

The call to debugView starts a web browser to view the recorded information. From there you can click around to explore the computation.

I'm interested in experiences using debug, and I also have a lot of ideas for how to improve it, so feedback or offers of help are most welcome at the bug tracker.

If you're interested in alternative debuggers for Haskell, you should check out the GHCi debugger or Hood/Hoed.

A Japanese translation is available at https://haskell.e-bigmoon.com/posts/2018-02-26-announcing-the-debug-package.

Tuesday, December 12, 2017

Benchmarking strchr vs memchr

Summary: memchr is faster, but the obvious implementations seem to beat the builtin versions.

There are two related C functions for finding the next occurrence of a character in a string - strchr, which assumes the string has a NUL character at the end, and memchr, which takes the string length as an argument. For strings where you have both the size and a NUL terminator, which is faster? Using gcc 6.2.0 64bit MSYS2 on Windows 10, searching for a single byte 10M bytes along a string, the times were (fastest to slowest):

  • 11.05ms memchr implemented the obvious way.
  • 14.82ms strchr implemented the obvious way.
  • 14.96ms memchr provided by GCC.
  • 19.63ms strchr provided by GCC.

Trying on 3 different Windows computers, the results are all similar (but scaled).

Given the choice, you should prefer memchr over strchr.

Surprise result

The optimised implementations shipped with GCC are slower than the obvious C implementations taken from a wiki. I have absolutely no idea why. From what I can tell, the builtin versions are coded in assembly, operating on multiple bytes at a time, using SSE instructions. In contrast, the C variants operate on a single byte at a time, and aren't vectorised by the optimiser according to Godbolt. If anyone has an explanation I'd be keen to hear it.

Benchmark Code

To benchmark the variants I wrote a Haskell program using criterion. The full code and build instructions are available in this gist. I compiled the C code with -O3, using the gcc shipped with GHC 8.2.1. I've reproduced the Haskell code below, with some comments:

-- Import all the necessary pieces
import qualified Data.ByteString as BS
import qualified Data.ByteString.Unsafe as BS
import Criterion.Main
import Foreign
import Foreign.C.Types
import Data.Monoid

-- Make all the C imports
foreign import ccall unsafe "string.h memchr" memchr_std :: Ptr Word8 -> CInt -> CSize -> IO (Ptr Word8)
foreign import ccall unsafe "string.h strchr" strchr_std :: Ptr Word8 -> CInt -> IO (Ptr Word8)
foreign import ccall unsafe memchr_c :: Ptr Word8 -> CInt -> CSize -> IO (Ptr Word8)
foreign import ccall unsafe strchr_c :: Ptr Word8 -> CInt -> IO (Ptr Word8)

-- Method for ignoring the size when using strchr
ignoreSize f a b _ = f a b

-- Build a suitable string with an interesting character i bytes along
cstr i = BS.replicate i 32 <> BS.singleton 64 <> BS.replicate i 32 <> BS.singleton 0

-- The functions to benchmark
funs =
    [("memchr_std", memchr_std)
    ,("strchr_std", ignoreSize strchr_std)
    ,("memchr_c", memchr_c)
    ,("strchr_c", ignoreSize strchr_c)]

-- The main function, using Criterion
main = defaultMain
    [ seq bs $ bench (show i ++ " " ++ name) $ whnfIO $ test fun bs
    | i <- [1,10,100,1000,10000,100000,1000000,10000000]
    , let bs = cstr i
    , (name, fun) <- funs]

-- The function under test and input string
{-# NOINLINE test #-}
test fun bs =
    BS.unsafeUseAsCStringLen bs $ \(ptr,len) ->
        fun (castPtr ptr) 64 (fromIntegral len)

Sunday, November 26, 2017

Haskell exceptions and FFI wrappers

Summary: If you create a C function pointer from a Haskell function with "wrapper", and it throws an exception, bad things happen.

The Haskell FFI is incredibly powerful, allowing you to convert Haskell functions into C function pointers. In this post I'll give a quick example, then go into what happens if the Haskell function throws an exception. First, let's define a C function (and put it in a file called c.c):

int apply(int(*f)(int), int x)
{
    return f(x);
}

The piece int(*f)(int) says f is a function of type Int -> Int. The function apply is equivalent to $, restricted to int - it applies the first argument f to the second argument x and returns the result. We can call that in Haskell with:

foreign import ccall apply :: FunPtr (CInt -> IO CInt) -> CInt -> IO CInt
foreign import ccall "wrapper" wrap :: (CInt -> IO CInt) -> IO (FunPtr (CInt -> IO CInt))

main :: IO ()
main = do
    f <- wrap $ \x -> return $ x + 20
    res <- apply f 22
    print res

On the first line we import the C apply function into Haskell, with its C function pointer argument appearing as a FunPtr. On the second we define a special "wrapper" FFI definition - the name "wrapper" is a specific string which is part of the FFI spec - which converts a Haskell function into a C function pointer. In main we put these pieces together, and other than the pervasive IO, it looks like the equivalent Haskell.

Note: In real code you should always call freeHaskellFunPtr after you have finished using a "wrapper" function, usually using bracket.
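
As a minimal sketch of that pattern (reusing apply and wrap from above; bracket comes from Control.Exception and freeHaskellFunPtr from Foreign.Ptr):

-- bracket guarantees the FunPtr is freed even if an exception escapes
main :: IO ()
main = bracket (wrap $ \x -> return $ x + 20) freeHaskellFunPtr $ \f -> do
    res <- apply f 22
    print res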

Consequences of Exceptions

What happens if the function we pass to wrap throws an exception? If you read the GHC manual, you'll find an incomplete link to the FFI spec, which stays silent on the subject. Thinking it through, Haskell has exceptions, but C does not - if the Haskell throws an exception it can't be passed back through C. Haskell can't provide a return value, so it can never resume the C code that called it. The GHC runtime can block indefinitely or kill the thread, both of which are fairly fatal for a program. As a consequence, I strongly recommend never throwing an exception from a function generated by "wrapper" - but what if we do?

Suggestion: most of the FFI addendum should probably be reproduced in the GHC manual with details around corner cases and exceptions.

Testing Exceptions

First, let's change our wrapped function to wrap $ \x -> fail "finish". Running that prints out:

bug.exe: user error (finish)

That seems like a standard exception. However, let's go further and put the entire program inside a finally, to show we have a normal Haskell exception:

main = flip finally (print "done") $ do
    ...

The output doesn't change - we never print out "done". It seems the exception thrown inside wrap aborts the program rather than bubbling up.

Suggestion: This error looks like a normal exception, but really isn't. It should say you have violated the wrapper invariant and your program has been violently aborted.

We've encountered bad behaviour, but can we go worse? Yes we can, by adding threads:

main = do
    replicateM_ 100 $ do
        forkIO $ do
            ff <- wrap $ \_ -> fail "die"
            print =<< apply ff 12
    threadDelay 10000000

Here we spawn 100 threads, each of which does an apply with an exception, then we wait for 10 seconds. The output is:

bug.exe: user error (die)
bug.exe: user error (die)
bug.exe: warning: too many hs_exit()s

It looks like there is a race condition with the exit path, causing two fatal wrapper exceptions to try and take down the runtime twice.

Suggestion: The hs_exit bug should be fixed.

Avoiding Exceptions

Now that we know we need to avoid throwing exceptions inside "wrapper" functions, the obvious approach is to wrap them in a catch, e.g.:

wrap $ \x -> ... `catch` \(_ :: SomeException) -> return (-1)

Namely catch all exceptions, and replace them with -1. As usual with catch, it is important to force evaluation of the ... inside the catch (e.g. using catchDeep from safe-exceptions). If you want to recover the original exception you can capture it in an IORef and throw it after leaving C:

-- whenJustM comes from Control.Monad.Extra in the extra package
ref <- newIORef Nothing
f <- wrap $ \x -> ... `catch` \(e :: SomeException) -> do
    writeIORef ref $ Just e
    return (-1)
res <- apply f 22
whenJustM (readIORef ref) throwIO

However, what if there is an asynchronous exception after we leave the catch but before we return to C? From my experiments, this doesn't appear to be possible. Even though getMaskingState returns Unmasked, exceptions thrown to the function inside wrapper appear to be deferred until the C code returns.

Suggestion: The documentation should clarify if my experiments are correct. Should getMaskingState return MaskedUninterruptible?

Friday, November 10, 2017

Ghcid with VS Code

Summary: New versions of Ghcid and the VS Code extension work even better together.

I've just released Ghcid v0.6.8 and the associated VS Code extension haskell-ghcid v0.2.0. Together they vastly simplify the Ghcid VS Code experience.

Ghcid reads .ghcid files

A new feature in Ghcid is that if there is a .ghcid file in the current directory it will load it as additional arguments. For example, in the Shake repo I have a .ghcid file:

-c "ghci -fno-code -ferror-spans"

Which tells ghcid to not guess at the command (e.g. using stack if you have a .stack-work) but always run ghci -fno-code -ferror-spans. This command works because I have a .ghci file which loads all the necessary files, while -fno-code speeds up compilation and -ferror-spans gives better error highlighting.
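
If you build with Stack instead, a .ghcid along the same lines should work - an assumed example, relying on stack ghci passing flags through via --ghci-options:

-c "stack ghci --ghci-options=-fno-code --ghci-options=-ferror-spans"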

Ghcid VS Code starts ghcid

A new feature in the VS Code extension is the action Start Ghcid which starts a new ghcid terminal, writes the output to a temporary file, and uses that output to populate the Problems pane. Importantly, the extension runs ghcid with no command line arguments, so having a sensible .ghcid lets you control what it does.

The effect of these changes is that starting ghcid in VS Code now takes a few keystrokes, whereas before it required special flags, opening files, running commands etc.

Saturday, November 04, 2017

Understanding HLint rules

Summary: I added a degenerate foldr to map rule in the new version of HLint, here I describe how it works.

I've just released HLint 2.0.10, which includes a rule to recognise uses of foldr that should really be map. As an example:

foldr (\curr acc -> (+1) curr : acc) []

Can be rewritten as:

map (\curr -> (+1) curr)

Which is much more readable (and then subsequently HLint will suggest map (+1), which is vastly clearer than the initial foldr). The change required to HLint was to add a rule to the hlint.yaml saying:

- warn: {lhs: "foldr (\\c a -> x : a) []", rhs: "map (\\c -> x)"}

You can read this statement as saying if you see foldr (\c a -> x : a) [], suggest map (\c -> x) as a warning. The HLint matching engine then applies that template to every subexpression in your program. In the rest of the post I'll talk through the steps HLint performs.

Step 1: Unification

The first step is to try unifying the template foldr (\c a -> x : a) [] against the user's subexpression, namely foldr (\curr acc -> (+1) curr : acc) []. HLint is trying to find assignments for the single-letter variables in the template (namely c, a and x) which cause it to match the subexpression. Unification proceeds top-down, and if it finds anything concrete that does not match (e.g. the user had written foldl) then it fails. In this case the unification succeeds with the bindings:

  • c = curr (from the first argument to the lambda)
  • a = acc (from the second argument to the lambda)
  • x = (+1) curr (from before the cons)
  • a = acc (from after the cons)

An example of a subexpression that would have failed unification is foldl (\curr acc -> (+1) curr : acc) [].
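
As a toy model of unification - my sketch, far simpler than HLint's real unifyExp - over a minimal expression type:

-- A tiny expression language, standing in for real Haskell syntax
data Exp = Var String | Con String | App Exp Exp deriving (Eq, Show)

-- Template variables are the single-letter names, as described above
isUnifyVar :: String -> Bool
isUnifyVar s = length s == 1

-- Walk the template and the subexpression together, collecting bindings,
-- and fail if anything concrete fails to match
unify :: Exp -> Exp -> Maybe [(String, Exp)]
unify (Var v) e | isUnifyVar v = Just [(v, e)]
unify (Var a) (Var b) | a == b = Just []
unify (Con a) (Con b) | a == b = Just []
unify (App f1 x1) (App f2 x2) = (++) <$> unify f1 f2 <*> unify x1 x2
unify _ _ = Nothing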

Step 2: Validity

The next step is to check that any variable which has been bound more than once is bound to equal expressions everywhere. In our case only a has been used twice, and it always binds to acc, so the unification is valid.

An example of a subexpression that would have failed validity is foldr (\curr acc -> (+1) curr : xs) [].
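
Continuing the toy model, the validity check just says that duplicated variables must bind equal expressions (the real code at the end of this post has a function with the same role, validSubst):

import Data.Function (on)
import Data.List (nubBy)

-- Keep the bindings only if every repeated variable agrees
validSubst :: [(String, Exp)] -> Maybe [(String, Exp)]
validSubst binds
    | and [e1 == e2 | (v1, e1) <- binds, (v2, e2) <- binds, v1 == v2]
        = Just $ nubBy ((==) `on` fst) binds
    | otherwise = Nothing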

Step 3: Substitution

Now we've got some bindings, we can substitute them into the RHS, namely map (\c -> x). We replace c and x using the bindings above. Note that a isn't mentioned on the RHS, so we don't use it. After substitution we get:

map (\curr -> (+1) curr)

Step 4: Free variable check

Consider the expression foldr (\curr acc -> f acc : acc) []. Using the rules above we'd end up with map (\curr -> f acc), which is terrible, since we've gone from referring to a locally bound acc to whatever acc is in scope (if any). To solve that, we check that the result doesn't introduce any new free variables:

(freeVars result \\ freeVars hintRuleRHS) `isSubsetOf` freeVars original

Specifically any free variables introduced in the result, which weren't in the RHS (excluding the fake unification variables), must have been in the original subexpression.

With that, for foldr, we're done. There are a handful of other steps that apply in some cases.

Step A: Dot expansion in the template

If you write a hint map f (map g x) ==> map (f . g) x then HLint notices that also implies the rule map f . map g ==> map (f . g) and adds it. As a result, you shouldn't write your HLint rules in point-free style.

Step B: Dot/dollar expansion in the subexpression

When matching a subexpression HLint will expand f $ x and (f . g) x if doing so results in a match. These operators are used commonly enough that they are often treated more like brackets than functions.

Step C: Scope matching

When unifying qualified function names, HLint uses the active imports to guess whether they match. If you have import qualified Data.Vector as V then the subexpression V.length will unify with Data.Vector.length. Since HLint doesn't have complete import information it uses a few heuristics to figure out matching.

Step D: Scope moving

Similarly to scope matching on the LHS of a rule, after matching, HLint tries to requalify any necessary values on the RHS. As an example, assuming we are producing Data.Vector.null, if we know about import qualified Data.Vector as V then we suggest V.null.

Full code

To see the full code and all supporting definitions go to the HLint source, which defines matchIdea - here I show a gently simplified version. Given scope information, a rule (LHS and RHS) and a subexpression, we optionally produce a resulting expression after substitution.

matchIdea :: Scope -> HintRule -> Exp_ -> Maybe Exp_
matchIdea s HintRule{..} original = do
    u <- unifyExp hintRuleLHS original
    u <- validSubst u
    let result = substitute u hintRuleRHS
    -- check no unexpected new free variables, as per Step 4
    guard $ (freeVars result Set.\\ Set.filter (not . isUnifyVar) (freeVars hintRuleRHS))
            `Set.isSubsetOf` freeVars original
    return result

Wednesday, September 20, 2017

Shake 0.16 - revised rule definitions

Summary: I've just released shake v0.16. A lot has changed, but it's probably only visible if you have defined your own rules or oracles.

Shake-0.16 is now out, 8 months since the last release, and with a lot of improvements. For full details read the changelog, but in this post I'm going to go through a few of the things that might have the biggest impact on users.

Rule types redefined

Since the first version of Shake there has been a Rule key value type class defining all rule types - for instance the file rule type has key of filename and value of modification time. With version 0.16 the type class is gone, rules are harder to write, but offer higher performance and more customisation. For people using the builtin rule types, you'll see those advantages, and in the future see additional features that weren't previously possible. For people defining custom rule types, those will require rewriting - read the docs and if things get tough, ask on StackOverflow.

The one place many users might encounter the changes is that oracle rules now require a type instance relating the key and value types. For example, if defining an oracle for the CompilerVersion given the CompilerName, you would have to add:

type instance RuleResult CompilerName = CompilerVersion

As a result of this type instance the previously problematic askOracle can now infer the result type, removing possible sources of error and simplifying callers.
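
Putting the pieces together, a complete oracle might look something like the sketch below - the newtype wrappers, deriving lists and example rule are my assumptions, following the usual Shake oracle boilerplate:

{-# LANGUAGE GeneralizedNewtypeDeriving, TypeFamilies, DeriveDataTypeable #-}
import Development.Shake
import Development.Shake.Classes

newtype CompilerName = CompilerName ()
    deriving (Show, Eq, Hashable, Binary, NFData, Typeable)
newtype CompilerVersion = CompilerVersion String
    deriving (Show, Eq, Hashable, Binary, NFData, Typeable)
type instance RuleResult CompilerName = CompilerVersion

rules :: Rules ()
rules = do
    -- register the oracle; the type instance lets askOracle infer the result
    askCompiler <- addOracle $ \(CompilerName ()) -> do
        Stdout ver <- cmd "ghc --numeric-version"
        return $ CompilerVersion ver
    "version.txt" %> \out -> do
        CompilerVersion ver <- askCompiler $ CompilerName ()
        writeFile' out ver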

The redefining of rule types represents most of the work in this release.

Add cmd_

The cmd_ function is not much code, but I suspect will turn out to be remarkably useful. The cmd function in Shake is variadic (can take multiple arguments) and polymorphic in the return type (you can run it in multiple monads with multiple results). However, because of the overloading, if you didn't use the result of cmd it couldn't be resolved, leading to ugly code such as () <- cmd args. With cmd_ the result is constrained to be m (), so cmd_ args can be used.
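
As a small example (a sketch inside an Action - the actual command is irrelevant):

-- before: the result type had to be pinned down to resolve the overloading
() <- cmd "gcc -c main.c"
-- after: cmd_ is constrained to m (), so no annotation is required
cmd_ "gcc -c main.c"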

Rework Skip/Rebuild

Since the beginning Shake has tried to mirror the make command line flags. In terms of flags to selectively control rebuilding, make is based entirely on ordered comparison of timestamps, and flags such as --assume-new don't make a lot of sense for Shake. In this release Shake stops trying to pretend to be make, removing the old flags (which never worked properly) and adding --skip (don't build something even if it is otherwise required) and --build (build something regardless). Both these flags can take file patterns, e.g. --build=**/*.o to rebuild all object files. I don't think these flags are finished, but it's certainly less of a mess than before.

Sunday, September 17, 2017

Existential Serialisation

Summary: Using static pointers you can perform binary serialisation of existentials.

Many moons ago I asked how to write a Binary instance for a type including an existential, such as:

data Foo = forall a . (Typeable a, Binary a) => Foo a

Here we have a constructor Foo which contains a value. We don't statically know the type of the contained value, but we do know it has the type classes Typeable (so we can at runtime switch on its type) and Binary (so we can serialise it). But how can we deserialise it? We can store the relevant TypeRep when serialising, but when deserialising there is no mechanism to map from TypeRep to a Binary instance.

In Shake, I needed to serialise existentials, as described in section 4.1 of the original paper. My solution was to build a global mapping table, storing pairs of TypeRep and Binary instances for the types I knew were relevant. This solution works, but cannot deserialise anything that has not already been added to the global table, which required certain functions to live in weird places to ensure they were called before deserialisation. Effective, but ugly.
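
As a rough sketch - my reconstruction, not Shake's actual code - the global table approach looks like:

{-# LANGUAGE ScopedTypeVariables #-}
import Data.Binary
import Data.IORef
import Data.Proxy
import Data.Typeable
import qualified Data.Map as Map
import System.IO.Unsafe (unsafePerformIO)

-- the global table from TypeRep to a suitable deserialiser
{-# NOINLINE table #-}
table :: IORef (Map.Map TypeRep (Get Foo))
table = unsafePerformIO $ newIORef Map.empty

-- must be called for every relevant type before any deserialisation happens
register :: forall a . (Typeable a, Binary a) => Proxy a -> IO ()
register p = modifyIORef table $
    Map.insert (typeRep p) (Foo <$> (get :: Get a))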

Recently Abhiroop Sarkar suggested using the relatively new static pointers extension. This extension lets you turn top-level bindings with no arguments into a StaticPtr which can then be serialised/deserialised, even between different instances of a process. To take advantage of this feature, we can redefine Foo as:

data Foo = forall a . (StaticFoo a, Binary a) => Foo a

class StaticFoo a where
    staticFoo :: a -> StaticPtr (Get Foo)

The approach is to switch from serialising the TypeRep (from which we try to look up Get Foo), to serialising the Get Foo directly. We can write a Binary Foo instance by defining put:

put :: Foo -> Put
put (Foo x) = do
    put $ staticKey $ staticFoo x
    put x

Here we simply grab a StaticPtr (Get Foo) which can deserialise the object, then use staticKey to turn it into something that can be serialised itself. Next, we write out the payload. To reverse this process we define get:

get :: Get Foo
get = do
    ptr <- get
    case unsafePerformIO (unsafeLookupStaticPtr ptr) of
        Just value -> deRefStaticPtr value :: Get Foo
        Nothing -> error "Binary Foo: unknown static pointer"

We first get the staticKey, use unsafeLookupStaticPtr to turn it into a StaticPtr (Get Foo) followed by deRefStaticPtr to turn it into a Get Foo. The unsafe prefix on these functions is justified - bugs while developing this code resulted in segfaults.

The final piece of the puzzle is defining StaticFoo instances for the various types we might want to serialise. As an example for String:

instance StaticFoo String where
    staticFoo _ = static (Foo <$> (get :: Get String))

We perform the get, wrap a Foo around it, and then turn it into a StaticPtr. All other types follow the same pattern, replacing String with Int (for example). The expression passed to static must have no free variables, including type variables, so we cannot define an instance for a, or even an instance for [a] - it must be [Char] and [Int] separately.

A complete code sample and test case is available here.

This approach works, and importantly allows extra constraints on the existential. The two disadvantages are: 1) that static isn't very flexible or easy to abstract over, resulting in a lot of StaticFoo boilerplate; 2) the static pointer is not guaranteed to be valid if the program changes in any way.

Will Shake be moving over to this approach? No. The next version of Shake has undergone an extensive rewrite, and in the process, moved away from needing this feature. A problem I had for 8 years has been solved, just as I no longer need the solution!

Sunday, August 20, 2017

Ghcid and VS Code

Summary: There's now a Ghcid VS Code addin that gives you red squiggles.

I've been using Ghcid for about 3 years, and VS Code for about 6 months. Ghcid alone is able to display the errors and warnings, and update them whenever you save. In this post I'll show how VS Code users can also click on errors to jump to them, and how to get red squiggles in the text buffer for errors.

Clicking on errors

Using a recent VS Code, if you run ghcid from the terminal window, hold Ctrl and click the filename and it jumps to the right location in the erroneous file.

Red squiggles

Red squiggles are now possible using the haskell-ghcid addin. To get it working:

  • Run ghcid -o ghcid.txt which will produce a file ghcid.txt which updates every time ghcid updates. Running the underlying ghci with -ferror-spans will significantly improve the errors reported.
  • Open ghcid.txt in VS Code as the active editor. Run the VS Code command (Ctrl+Shift+P) named "Watch Ghcid output".

These steps cause the ghcid errors to appear in the VS Code Problems pane, and have red squiggles in the editor. Even though the errors are in the proper Problems pane, I still prefer the output provided by the ghcid terminal, so I still look at that.

The VS Code addin is not very well polished - but I'm using it on a daily basis.

Thursday, July 06, 2017

HaskellX Bytes talk next Tuesday

I'm talking about "Static Analysis in Haskell" at HaskellX Bytes next Tuesday (11th July), in London UK. Registration is free. The abstract is:

Haskell is a strongly typed programming language, which should be well suited to static analysis - specifically any insights about the program which don't require running the program. Alas, while type systems are becoming increasingly powerful, other forms of static analysis aren't advancing as fast. In this talk we'll give an overview of some of the forms of non-type-based static analysis that do exist, from the practical (GHC warnings, HLint, Weeder) to the research (LiquidHaskell, Catch).

I'm a big fan of static analysis, so this will be part summary, part sales pitch, and part call to arms. Followed by beer.

Clarification: I originally got the day wrong, and the url persists the original incorrect day. The talk is on Tuesday 11th July.

Tuesday, June 20, 2017

Announcing Weeder: dead export detection

Most projects accumulate code over time. To combat that, I've written Weeder which detects unused Haskell exports, allowing dead code to be removed (pulling up the weeds). When used in conjunction with GHC -fwarn-unused-binds -fwarn-unused-imports and HLint it will enable deleting unused definitions, imports and extensions.

Weeder piggy-backs off files generated by stack, so first obtain stack, then:

  • Install weeder by running stack install weeder --resolver=nightly.
  • Ensure your project has a stack.yaml file. If you don't normally build with stack then run stack init to generate one.
  • Run weeder . --build, which builds your project with stack and reports any weeds.

What does Weeder detect?

Weeder detects a bunch of weeds, including:

  • You export a function helper from module Foo.Bar, but nothing else in your package uses helper, and Foo.Bar is not an exposed-module. Therefore, the export of helper is a weed. Note that helper itself may or may not be a weed - once it is no longer exported, -fwarn-unused-binds will tell you if it is entirely redundant.
  • Your package depends on another package but doesn't use anything from it - the dependency should usually be deleted. This functionality is similar to packunused, but implemented quite differently.
  • Your package has entries in the other-modules field that are either unused (and thus should be deleted), or are missing (and thus should be added). The stack tool warns about the latter already.
  • A source file is used between two different sections in a .cabal file - e.g. in both the library and the executable. Usually it's better to arrange for the executable to depend on the library, but sometimes that would unnecessarily pollute the interface. Useful to be aware of, and sometimes worth fixing, but not always.
  • A file has not been compiled despite being mentioned in the .cabal file. This situation can be because the file is unused, or the stack compilation was incomplete. I recommend compiling both benchmarks and tests to avoid this warning where possible - running weeder . --build will use a suitable command line.

Beware of conditional compilation (e.g. CPP and the Cabal flag mechanism), as these may mean that something is currently a weed, but in different configurations it is not.

I recommend fixing the warnings relating to other-modules and files not being compiled first, as these may cause other warnings to disappear.

Ignoring weeds

If you want your package to be detected as "weed free", but it has some weeds you know about but don't consider important, you can add a .weeder.yaml file adjacent to the stack.yaml with a list of exclusions. To generate an initial list of exclusions run weeder . --yaml > .weeder.yaml.

You may wish to generalise/simplify the .weeder.yaml by removing anything above or below the interesting part. As an example, here is the .weeder.yaml file from ghcid:

- message: Module reused between components
- message:
  - name: Weeds exported
  - identifier: withWaiterPoll

This configuration declares that I am not interested in the message about modules being reused between components (that's the way ghcid works, and I am aware of it). It also says that I am not concerned about withWaiterPoll being a weed - it's a simplified method of file change detection I use for debugging, so even though it's dead now, I sometimes do switch to it.

Running with Continuous Integration

Before running Weeder on your continuous integration (CI) server, you should first ensure there are no existing weeds. One way to achieve that is to ignore existing hints by running weeder . --yaml > .weeder.yaml and checking in the resulting .weeder.yaml.

On the CI you should then run weeder . (or weeder . --build to compile as well). To avoid the cost of compilation you may wish to fetch the latest Weeder binary release. For certain CI environments there are helper scripts to do that.

Travis: Execute the following command:

curl -sL https://raw.github.com/ndmitchell/weeder/master/misc/travis.sh | sh -s .

The arguments after -s are passed to weeder, so modify the final . if you want other arguments.

Appveyor: Add the following statement to .appveyor.yml:

- ps: Invoke-Command ([Scriptblock]::Create((Invoke-WebRequest 'https://raw.githubusercontent.com/ndmitchell/weeder/master/misc/appveyor.ps1').Content)) -ArgumentList @('.')

The arguments inside @() are passed to weeder, so add new arguments surrounded by ', comma separated - e.g. @('.', '--build').

What about Cabal users?

Weeder requires the textual .hi file for each source file in the project. Stack generates those already, so it was easy to integrate with. There's no reason that information couldn't be extracted by either passing flags to Cabal, or converting the .hi files afterwards. I welcome patches to do that integration.


Sunday, June 11, 2017

Haskell Website Working Group - Update

Summary: We have agreed a set of principles for the website and are collecting information.

I'm writing this partly in my capacity as the chair of the Haskell Website Working Group, and partly as an individual (so blame me rather than the committee). It's fair to say that the original goal of the committee was to make sure everyone agrees on the download page. Discussions amongst the committee led to a shared goal that the download page itself should clearly direct users along a precise path, without requiring beginners to make decisions requiring judgement. That probably means the download page should only describe one installation path, pushing alternatives onto a separate page.

To decide what should go on the download page, our first step was to evaluate what was currently available, and what experience a beginner might have. We've started that on this page. As an example, it says how to install Haskell, how to open ghci, how to install a tool etc - all using the different options.

When I actually tried installing and using the various options listed on the current download page, they all had confusing steps, unintuitive behaviour and problems that I had to overcome. As an example, Stack on Windows recommends using the 32bit version, while noting that only the 64bit version works. At the same time, Core Platform starts by telling me to edit a global Cabal config file.

I invite everyone to help contribute to that page, via pull requests. At the same time, it would be great if the issues raised could be ironed out, leading to a smooth beginner experience (I'm talking to maintainers in person and raising GitHub tickets). Once the information is collected, and ideally the tools have improved, it will be time to make a decision between the options. When the decision comes, it will be technically motivated, and hopefully unambiguous.

Sunday, May 21, 2017

Proving fib equivalence

Summary: Using Idris I proved the exponential and linear time fib functions are equivalent.

The Haskell wiki proclaims that Implementing the Fibonacci sequence is considered the "Hello, world!" of Haskell programming. It also provides multiple versions of fib - an exponential version and a linear version. We can translate these into Idris fairly easily:

fibExp : Nat -> Nat
fibExp Z = 0
fibExp (S Z) = 1
fibExp (S (S n)) = fibExp (S n) + fibExp n

fibLin' : Nat -> Nat -> Nat -> Nat
fibLin' Z a b = b
fibLin' (S n) a b = fibLin' n (a + b) a

fibLin : Nat -> Nat
fibLin n = fibLin' n 1 0

We've made the intermediate go function in fibLin top-level, named it fibLin' and untupled the accumulator - but it's still fundamentally the same. Now we've got the power of Idris, it would be nice to prove that fibExp and fibLin are equivalent. To do that, it's first useful to think about why fibLin' works at all. In essence we're decrementing the first argument, and making the next two arguments be fib of increasingly large values. If we think more carefully we can come up with the invariant:

fibLinInvariant : (d : Nat) -> (u : Nat) ->
    fibLin' d (fibExp (1+u)) (fibExp u) = fibExp (d+u)

Namely that at all recursive calls we must have the fib of two successive values (u and u+1), and after d additional loops we end up with fib (d+u). Actually proving this invariant isn't too hard:

fibLinInvariant Z u = Refl
fibLinInvariant (S d) u =
    rewrite fibLinInvariant d (S u) in
    rewrite plusSuccRightSucc d u in
    Refl

Idris simplifies the base case away for us. For the recursive case we apply the induction hypothesis to leave us with:

fibExp (plus d (S u)) = fibExp (S (plus d u))

Ignoring the fibExp we just need to prove d+(u+1) = (d+u)+1. That statement is obviously true because + is associative, but in our particular case, we use plusSuccRightSucc which is the associative lemma where +1 is the special S constructor. After that, everything has been proven.

Armed with the fundamental correctness invariant for fibLin', we can prove the complete equivalence.

fibEq : (n : Nat) -> fibLin n = fibExp n
fibEq n =
    rewrite fibLinInvariant n 0 in
    rewrite plusZeroRightNeutral n in
    Refl

We appeal to the invariant linking fibLin' and fibExp, do some minor arithmetic manipulation (x+0 = x), and are done. Note that depending on exactly how you define the fibLinInvariant you require different arithmetic lemmas, e.g. if you write d+u or u+d. Idris is equipped with everything required.

I was rather impressed that proving fib equivalence wasn't all that complicated, and really just requires figuring out what fundamental property makes fibLin actually work. In many ways the proof makes things clearer.

Tuesday, May 16, 2017

Idris reverse proofs

Summary: I proved O(n) reverse is equivalent to O(n^2) reverse.

Following on from my previous post, I left the goal: given two reverse implementations:

reverse1 : List a -> List a
reverse1 [] = []
reverse1 (x :: xs) = reverse1 xs ++ [x]

rev2 : List a -> List a -> List a
rev2 acc [] = acc
rev2 acc (x :: xs) = rev2 (x :: acc) xs

reverse2 : List a -> List a
reverse2 = rev2 []

prove that the following laws hold:

proof_reverse1_reverse1 : (xs : List a) -> xs = reverse1 (reverse1 xs)
proof_reverse2_reverse2 : (xs : List a) -> xs = reverse2 (reverse2 xs)
proof_reverse1_reverse2 : (xs : List a) -> reverse1 xs = reverse2 xs

The complete proofs are available in my Idris github repo, and if you want to get the most out of the code, it's probably best to step through the types in Idris. In this post I'll talk through a few of the details.

Directly proving reverse1 (reverse1 x) = x by induction is hard, since the induction doesn't follow the structure of the function so directly. Instead we need to define a helper lemma.

proof_reverse1_append : (xs : List a) -> (ys : List a) ->
    reverse1 (xs ++ ys) = reverse1 ys ++ reverse1 xs

Coming up with these helper lemmas is the magic of writing proofs - it is part intuition, part thinking about what might be useful and part thinking about what invariants are obeyed. With that lemma, you can prove by induction on the first argument (which is the obvious one to choose, since ++ evaluates its first argument first). Proving the base case is trivial:

proof_reverse1_append [] ys =
    rewrite appendNilRightNeutral (reverse1 ys) in
    Refl

The proof immediately reduces to reverse1 ys == reverse1 ys ++ [] and we can invoke the standard proof that if ++ has [] on the right we can ignore it. After applying that rewrite, we're straight back to Refl.

The :: case is not really any harder: we immediately get to reverse1 ys ++ (reverse1 xs ++ [x]), but bracketed the wrong way round, so we have to apply appendAssociative to rebracket so it fits the inductive lemma. The proof looks like:

proof_reverse1_append (x :: xs) ys =
    rewrite appendAssociative (reverse1 ys) (reverse1 xs) [x] in
    rewrite proof_reverse1_append xs ys in Refl

Once we have the proof of how to move reverse1 over ++ we use it inductively to prove the original lemma:

proof_reverse1_reverse1 : (xs : List a) -> xs = reverse1 (reverse1 xs)
proof_reverse1_reverse1 [] = Refl
proof_reverse1_reverse1 (x :: xs) =
    rewrite proof_reverse1_append (reverse1 xs) [x] in
    rewrite proof_reverse1_reverse1 xs in
    Refl

For the remaining two proofs, I first decided to prove reverse1 x = reverse2 x and then prove reverse2 (reverse2 x) = x in terms of that. To prove equivalence of the two functions I first needed information about the fundamental property that rev2 obeys:

proof_rev : (xs : List a) -> (ys : List a) ->
    rev2 xs ys = reverse2 ys ++ xs

Namely that the accumulator can be tacked on the end. The proof itself isn't very complicated, although it does require two calls to the inductive hypothesis (you basically expand out rev2 on both sides and show they are equivalent):

proof_rev xs [] = Refl
proof_rev xs (y :: ys) =
    rewrite proof_rev [y] ys in
    rewrite sym $ appendAssociative (rev2 [] ys) [y] xs in
    rewrite proof_rev (y :: xs) ys in
    Refl

The only slight complexity is that I needed to apply sym to switch the way the appendAssociative proof is applied. With that proof available, proving equivalence isn't that hard:

proof_reverse1_reverse2 : (xs : List a) -> reverse1 xs = reverse2 xs
proof_reverse1_reverse2 [] = Refl
proof_reverse1_reverse2 (x :: xs) =
    rewrite proof_rev [x] xs in
    rewrite proof_reverse1_reverse2 xs in
    Refl

In essence the proof_rev term shows that rev2 behaves like the O(n^2) reverse.

Finally, proving that reverse2 (reverse2 x) = x just involves replacing all the occurrences of reverse2 with reverse1, then applying the proof that the property holds for reverse1. Nothing complicated at all:

proof_reverse2_reverse2 : (xs : List a) -> xs = reverse2 (reverse2 xs)
proof_reverse2_reverse2 xs =
    rewrite sym $ proof_reverse1_reverse2 xs in
    rewrite sym $ proof_reverse1_reverse2 (reverse1 xs) in
    rewrite proof_reverse1_reverse1 xs in
    Refl

If you've got this far and are still hungry for more proof exercises I recommend Exercises on Generalizing the Induction Hypothesis which I have now worked through (solutions if you want to cheat).

Sunday, May 07, 2017

Proving stuff with Idris

Summary: I've been learning Idris to play around with writing simple proofs. It seems pretty cool.

The Idris programming language is a lot like Haskell, but with quite a few cleanups (better export syntax, more thought-out type classes), and also dependent types. I'm not a massive fan of dependent types as commonly presented - I want to write my code so it's easy to prove, not intertwine my beautiful code with lots of invariants. To try out Idris I've been proving some lemmas about list functions. I'm very much an Idris beginner, but thought I would share what I've done so far, partly to teach, partly to remember, and partly to get feedback/criticism about all the beginner mistakes I've made.

Before proving stuff, you need to write some definitions. Idris comes with an append function (named ++ as you might expect), but to keep everything simple and out in the open I've defined:

append : List a -> List a -> List a
append [] ys = ys
append (x :: xs) ys = x :: append xs ys

Coming from Haskell the only real difference is that cons is :: and types are :. Given how Haskell code now looks, that's probably the right way around anyway. We can load that code in the Idris interactive environment and run it:

$ idris Main.idr
Main> append [1,2] [3]
[1,2,3] : List Integer

What we can also do, but can't easily do in Haskell, is start proving lemmas about it. We'd like to prove that append xs [] = xs, so we write a proof:

proof_append_nil : (xs : List a) -> append xs [] = xs
proof_append_nil xs = ?todo

We name the proof proof_append_nil and then say that given xs (of type List a) we can prove that append xs [] = xs. That's pretty direct and simple. Now we have to write the proof - we first write out the definition, leaving ?todo where we want the proof to go. We can then load the proof in Idris and type :t todo to get the bit of the proof that is still required, and Idris says:

todo : append xs [] = xs

That's fair - we haven't proven anything yet. Since we know the proof will proceed by splitting apart the xs we can rewrite as:

proof_append_nil [] = ?todo_nil
proof_append_nil (x :: xs) = ?todo_cons

We can now ask for the types of the two remaining todo bits:

todo_nil : [] = []
todo_cons : x :: append xs [] = x :: xs

The first todo_nil is obviously true since the things on either side of the equality match, so we can replace it with Refl. The second statement looks like the inductive case, so we want to apply proof_append_nil to it. We can do that with rewrite proof_append_nil xs, which expands to append xs [] = xs, and rewrites the left-hand-side of the proof to be more like the right. Refined, we end up with:

proof_append_nil [] = Refl
proof_append_nil (x :: xs) = rewrite proof_append_nil xs in ?todo_cons

Reloading again and asking for the type of todo_cons we get:

todo_cons : x :: xs = x :: xs

These things are equal, so we replace todo_cons with Refl to get a complete proof of:

proof_append_nil : (xs : List a) -> append xs [] = xs
proof_append_nil [] = Refl
proof_append_nil (x :: xs) = rewrite proof_append_nil xs in Refl

Totality Checking

Proofs are only proofs if you have code that terminates. In Idris, it seems that sprinkling the statement:

%default total

at the top of the file turns on the totality checker, which ensures the proof is really true. With the statement turned on I don't get any warnings about totality issues, so we have proved that append really does have [] as a right-identity.

Avoiding rewrite

The rewrite keyword seems like a very big hammer, and in our case we know exactly where we want to apply the rewrite - namely at the end. In this case we could have equally written:

cong (proof_append_nil xs)

I'm not sure which is the generally preferred style in the Idris world, but as the proofs get more complex, using rewrite certainly seems easier.

Differences from Haskell

Coming from a Haskell background, and sticking to simple things, the main differences in Idris were that modules don't have export lists (yay), cons is :: and types are : (yay), functions can only use functions defined before them (sad, but I guess a restriction to make dependent types work) and case/lambda both use => instead of -> (meh).

Next steps

For my first task in Idris I also defined two reverse functions:

reverse1 : List a -> List a
reverse1 [] = []
reverse1 (x :: xs) = reverse1 xs `append` [x]

rev2 : List a -> List a -> List a
rev2 acc [] = acc
rev2 acc (x :: xs) = rev2 (x :: acc) xs

reverse2 : List a -> List a
reverse2 = rev2 []

The first reverse1 function is reverse in O(n^2), the second is reverse in O(n). I then tried to prove the three lemmas:

proof_reverse1_reverse1 : (xs : List a) -> xs = reverse1 (reverse1 xs)
proof_reverse2_reverse2 : (xs : List a) -> xs = reverse2 (reverse2 xs)
proof_reverse1_reverse2 : (xs : List a) -> reverse1 xs = reverse2 xs

Namely that both reverse1 and reverse2 are self-inverses, and then that they are equivalent. Proving these required a few helper lemmas, but only one additional Idris function/feature, namely:

sym : (left = right) -> right = left

A function which transforms a proof of equality from one way around to the other way around. I also required a bunch of helper lemmas, including:

proof_append_assoc : (xs : List a) -> (ys : List a) -> (zs : List a) -> append xs (append ys zs) = append (append xs ys) zs

Developing these proofs in Idris took me about 2 hours, so they were a nice introductory exercise (with the caveat that I've proven these lemmas before, although not in 4+ years). I'd invite anyone interested in learning this aspect of Idris to have a go, and I'll post my proofs sometime in the coming week.

Update: additional notes on this material can be found here from Reddit user chshersh.


Friday, April 28, 2017

HLint on Travis/Appveyor

Summary: Running HLint on your CI is now quick and easy.

I've always wanted to run HLint on my continuous integration servers (specifically Travis for Linux and Appveyor for Windows), to automatically detect code that could be improved. That has always been possible, and packages like lens and freer-effects have done so, but it was unpleasant for two reasons:

  • Setting up a custom HLint settings file and applying these settings was a lot of upfront work.
  • Building hlint on the CI server could be quite slow.

With HLint v2.0.4, both of these issues are addressed. I am now running HLint as standard for many of my projects. The two steps are outlined below.

Setting up custom HLint settings

Locally run hlint . --default > .hlint.yaml and it will generate a file which ignores all hints your project currently triggers. If you then run hlint . there will be no warnings, as the ignore settings will automatically be picked up. Check in .hlint.yaml.
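
The generated file is simply a list of ignore directives, and might look something like this (hypothetical output - yours will list whichever hints your code actually triggers):

- ignore: {name: Use camelCase}
- ignore: {name: Redundant bracket}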

Later, as a future step, you may wish to review your .hlint.yaml file and fix some of the warnings.

Running HLint on the server

There are now precompiled binaries at GitHub, along with scripts to download and execute them for each CI system. In both cases below, replace ARGUMENTS with your arguments to hlint, e.g. . to check the current directory.

On Travis, execute the following command:

wget https://raw.github.com/ndmitchell/hlint/master/misc/travis.sh -O - --quiet | sh -s ARGUMENTS

On Appveyor, add the following statements to .appveyor.yml:

- set PATH=C:\Program Files\Git\mingw64\bin;%PATH%
- curl -ohlint.bat -L https://raw.githubusercontent.com/ndmitchell/hlint/master/misc/appveyor.bat
- hlint ARGUMENTS

Since these are precompiled binaries the additional time required to run HLint should be minimal.

Thursday, April 06, 2017

HLint 2.0 - with YAML configuration

Summary: I've just released HLint 2.0, which lets you configure the rules with a YAML file.

I've just released HLint 2.0 to Hackage. HLint has always been configurable by writing a specially crafted Haskell file (e.g. to ignore certain hints or add new ones). With HLint 2.0 we now support YAML configuration files, and moreover encourage them over the Haskell format.

New Features of 2.0

Perhaps the most useful feature of this version is that HLint will search the current directory upwards for a .hlint.yaml file containing configuration settings. If a project includes a .hlint.yaml in the root then you won't need to pass it on the command line, and there's a reasonable chance any editor integration will pick it up as well. This idea was shamelessly stolen from Stylish Haskell.

HLint configuration files have also moved from Haskell syntax with special interpretation to YAML syntax. The Haskell syntax had become quite overloaded and was increasingly confusing. The YAML syntax gives a fresh start with a chance to properly specify configuration directly rather than encoding it as Haskell expressions. The YAML configuration format enables a few features in this release, and going forward should enable more. To create a template .hlint.yaml file run hlint --default > .hlint.yaml. HLint continues to work without a configuration file.

In addition to a bunch of little hints, there is now a hint to suggest giving modules explicit export lists. I tend to always favour export lists, using module Foo(module Foo) where in the rare case they seem genuinely too much work. If you object to the hint, simply add to .hlint.yaml:

- ignore: {name: Use module export list}

On the other extreme, if you always want to require a complete and explicit export list (banning the trick above) do:

- warn: {name: Use explicit module export list}

Which enables the off-by-default hint requiring an explicit list. To see which off-by-default hints your program triggers pass the command line argument --show.

The biggest new hint is that you can now mark certain flags/extensions/imports/functions as being restricted - maintaining either a whitelist or blacklist and exceptions. As an example, HLint itself contains the restrictions:

- functions:
  - {name: unsafeInterleaveIO, within: Parallel}
  - {name: unsafePerformIO, within: [Util.exitMessageImpure, Test.Util.ref]}
  - {name: unsafeCoerce, within: []}

These unsafe functions can only be used in the modules/functions listed (or nowhere for unsafeCoerce), ensuring no code sneaks in unexpectedly. As another example:

- flags:
  - default: false
  - {name: [-fno-warn-incomplete-patterns, -fno-warn-overlapping-patterns]}

Here we have used default: false to say that any flags not explicitly allowed cannot be used. If someone accidentally checks in development code with {-# OPTIONS_GHC -w #-} then HLint will catch it. This restriction feature is particularly designed for code reviews.
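
Extensions and imports follow the same pattern as flags. For instance, a whitelist-style extensions restriction might look like this (my example, mirroring the flags syntax above):

- extensions:
  - default: false
  - {name: [DeriveDataTypeable, GeneralizedNewtypeDeriving]}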

What Might Have Broken

This release changes a lot of stuff. I am aware the following will no longer work:

  • The HLint2 API has been deleted - I don't think anyone was using it (the HLint and HLint3 ones remain unchanged). I'll continue to evolve the API in future releases.
  • The Haskell configuration file no longer has a way of importing other configuration files. At the same time I made it so the default configuration file and internal hints are always included, which hopefully offsets most of the impact of ignoring hint imports.
  • I may have broken some other things by accident, but that's why there is a bug tracker.

At the moment the Haskell and YAML configuration files are both supported. However, I'll be phasing out the Haskell configuration in the future, and encourage people to move to the YAML. The conversion should be pretty simple.

Monday, April 03, 2017

Code Review Reviewed

Summary: I used to be mildly against code review on all merges. Now I'm for it. Code review is a good idea for knowledge sharing, not spotting bugs.

For my open-source projects, I accept pull requests from external contributors which I review, but my contributions are probably not routinely looked at by anyone else. I suspect most non-huge open-source projects follow the same pattern. For larger projects code review seems more common, but far from ubiquitous.

Until recently, I was in favour of occasional code review, e.g. towards the end of a project taking a look at the code. The problem with that approach is that it is hard to find the right time: at the end there is little appetite for large-scale refactoring, and while the project is busy cranking out code there is no time and the code is still changing. While I was a fan of this idea in theory, it never worked in practice.

Some teams use code review on every merge to master, perhaps several times per person per day. That's something I used to disagree with, but I asked some smart people, and their explanations changed my opinion. The important realisation was thinking about what code review is for.

  • Checking the code works: NO. Code review is not about checking the code works and is bug free. I have tests, and leverage the type system, which is meant to convince me and other readers that my code is correct. For certain very sensitive bits a second set of eyes checking for bugs may be useful, but if all your code is that sensitive it will also be bug-ridden. However, code review may spot the absence or inadequacy of tests.
  • Checking simple rules and conventions: NO. Often projects have rules about which packages/extensions can be used, tab width and line length etc. While these checks can be carried out by a human, they are much better automated. Even when "code reviewing" students' work at York University (aka marking) it quickly became clear that I was wasting time on trivialities, and thus wrote HLint to automate it.
  • High-level design: A BIT. Code review should think about the high-level design (e.g. should this be a web server or command line tool), but often the code is too detailed to properly elucidate these choices. High-level design should usually be done on a bug tracker or whiteboard before writing code.
  • Mid-level design: YES. Code review is often very good at challenging details around the mid-level design - e.g. does the choice of Text vs ByteString make sense, is there some abstraction that needs introducing. The author of some code is often heavily influenced by the journey they took to get to some point, by seeing code without that history a reviewer can often suggest a better approach.
  • Spreading knowledge: YES. Code review is great for spreading knowledge of the code to others in the team. Code review requires the code to be written in such a way that someone else can understand it, and ensures that there is someone else who is familiar with it. For larger projects that's invaluable when someone goes on holiday.
  • Team cohesion: YES. Code review requires different members of the team to share their work with each other, and to have an appreciation of what other people are working through and the challenges it presents. It's all too easy to declare a problem "easy" without understanding the details, and code review removes that temptation.
  • Training: YES. Code review is also good for teaching new techniques and approaches to the submitter. As an example, some problems are trivial to solve with a generics library, yet some programmers aren't familiar with any - code review helps with ongoing training.

In contrast, I think the disadvantages of code review all revolve around the time required:

  1. It takes other people's time to do the review, when they could be doing something else.
  2. It takes the submitter's time waiting for a review - often the next step is hard to do without the previous step being agreed. As a result, I think code review work should be prioritised.
  3. As a consequence of the two previous issues, code review changes the "unit of work", leading to more changes in a single bundle.

These issues are real, but I think the benefit of knowledge sharing alone outweighs the cost.