Sunday, July 29, 2012

On binary search

There are many articles about the pedagogical benefits of writing a binary search algorithm. Instead of just implementing binary search, I wanted to take it one step further. Is it possible to write a standards-compliant, formally correct, and beautiful implementation of bsearch() in C?

// This type simplifies the
// argument list of bsearch.
typedef int (*compare_f)(
  const void *x,
  const void *y);

void *
bsearch(
  const void *key,
  const void *base,
  size_t nel,
  size_t width,
  compare_f compar)
{
  size_t k;
  size_t lo = 0;
  size_t hi = nel - 1;
  const char *mid;

  if (nel == 0)
    return NULL;

  while (lo <= hi) {
    // If we used (lo + hi)/2, then it
    // could overflow for large indices.
    // The following formula prevents that.
    k = lo + (hi - lo)/2;
    mid = (const char *)base + k*width;

    // If we used compar(mid, key), then it
    // would violate POSIX, which specifies
    // that key must be the first argument.
    int cmp = compar(key, mid);
    if (cmp < 0) {
      // Guard against size_t underflow
      // when the key sorts below element 0.
      if (k == 0) break;
      hi = k - 1;
    }
    if (cmp > 0) lo = k + 1;
    if (cmp == 0)
      return (void *)mid;
  }
  return NULL;
}

Note that not every argument and variable has been colorized, just the ones that usually give people the hardest time when reasoning about this function. The midpoint (k) could be computed as (lo + hi)/2 if the array you are handling is guaranteed to have fewer than 2^63 elements, but since that isn't technically correct in general, I didn't write it that way.

Saturday, May 12, 2012


Today I want to take a brief look at byte swapping from a different point of view. Many algorithms take mathematical concepts and turn them into a step-by-step process, but what if we turn it around, and mathematize a process that originates from computer science?
Mathematizing computer programs is generally done within the context of high-reliability systems and formal methods, which provide a way to prove that a program is correct. Perhaps if we took the time to ensure the correctness of the programs we write, then there wouldn't be so many bugs!
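As a concrete instance of the process in question, a 32-bit byte swap can be written as a pure function of shifts and masks, which is exactly the kind of step-by-step definition that lends itself to mathematical treatment (a sketch; bswap32 is my name, not a standard one):

```c
#include <stdint.h>

// Reverse the byte order of a 32-bit word using only
// shifts and masks: each byte is isolated, then moved
// to its mirrored position.
static uint32_t bswap32(uint32_t x) {
  return ((x & 0x000000FFu) << 24)
       | ((x & 0x0000FF00u) <<  8)
       | ((x & 0x00FF0000u) >>  8)
       | ((x & 0xFF000000u) >> 24);
}
```

One nice mathematical property falls out immediately: the function is an involution, i.e. bswap32(bswap32(x)) = x.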

Monday, April 30, 2012

GoLang Highlights

I have been using Go quite frequently lately, and I would like to talk about the experience. There are several introductions to Go floating around, along with reactions to its conventions, which differ from what some people are used to. First, though, I would like to make it clear that I have only used the gc compiler, as it was impossible to install gccgo on Mac OS X at the time of this writing.

What makes Go similar to X?

  • garbage collection
  • interface embedding
  • struct embedding

What makes Go better than X?

  • type syntax (backwards = forwards)
  • typed channels (no serialization)
  • implicit interfaces
  • segmented stacks


Every scripting language has some form of garbage collection, so Go is not unique in that respect. Interface embedding is when you use a named interface type instead of a method when defining an interface type. It is similar to Haskell's => when constraining a type class. Struct embedding is when you use a named struct type instead of a field declaration when defining a struct type. This is very similar to inheritance in OOP languages, and it also promotes the method set of the embedded struct into the method set of the outer struct. Both of these are called embedding because they are syntactically similar: you just list the type name. However, they mean different things, and it's important to remember this.

Type Syntax

One of the distinguishing features of Go is its beautiful type syntax. Many programmers have noticed that writing types in C can be a headache, because when you read a type aloud, you have to read backward, or sometimes skip around, back and forth, in order to read the type properly. In Go there is no such problem, because types are written exactly how you would say them in English. While this may be an issue in other languages, most speakers of languages with SVO (Subject-Verb-Object) structure would be happy to see this change. It makes programming in Go a pleasure, and it contributed greatly to my own productivity in my Go projects.

Typed Channels

Another Go feature is channels. Go channels are different from most other forms of I/O found in other languages because they are not based on bytes or characters, but data. Typed data. Which means instead of being required to serialize and deserialize your data when communicating between threads, or parts of a larger design, you can send them over a channel as is, knowing that you can use the data immediately instead of validating it first. Serialization is also a big issue with large software projects, since they may be written in many languages, which requires serialization in order to get data from one component to another. Go also has a serialization format, called gob, which can be considered a codec for most major types in the language. So you don't have to serialize, but if you want to, then it's available.

Implicit Interfaces

Another distinguishing feature of Go is implicit interfaces. Most languages with interfaces require explicit statements of implementation, such as Java and Haskell (for type classes). Go has no such requirement (and no such syntax). An interface is a collection of methods. A named type has an associated set of methods attached to it, called its method set. A named type implements an interface iff the interface's methods are a subset of the named type's method set. That's it! It's that simple. And because you don't have to make it explicit, you are free to focus on more important things.

Segmented Stacks

The first and foremost under-appreciated and under-documented feature found only in Go is that of segmented stacks. "Segmented stacks" is what Go developers call the combination of implementation choices that led to the Go calling convention. It is an intersection of many features: dynamic stack allocation, runtime checks, variable arguments, and deferred functions. Simply put, a lot happens between 'return' the statement and 'RET' the instruction. First, the Go runtime has to check whether there are any deferred functions to run; then it has to see whether it allocated any stack space for the current function, and if it did and that space is no longer required, it frees the stack. Similarly, when a function is called, one of the first things that happens is that the Go runtime checks whether more stack space is required. What this means is that, in Go, there are no stack overflows!

Since segmented stacks are such an interesting feature, I suspect that I will revisit them in the future with a more in-depth discussion than I can ever hope to achieve in one paragraph.

Friday, March 2, 2012

Gödel and powerbooleans

What is a boolean?

A boolean is either true or false.

What is a powerboolean?

A powerboolean is a subset of the booleans (B):

  • True = {true}
  • Both = {true, false}
  • False = {false}
  • Null = {}

This set is equivalent to 2^B (the powerset of the booleans), hence the name. The powerboolean operations are simply defined as the image of the normal boolean operations over B. The truth tables are as follows:


These truth tables should not be confused with Belnap logic (another four-valued logic), because Belnap logic gives different definitions for (N AND B), (B AND N), (N OR B), and (B OR N). Also, in Belnap logic, B and N can be swapped and the same rules apply, so they are not unique. In the powerbooleans, B and N cannot be swapped in a similar way, because if you do swap them, the truth tables have to change. So the powerbooleans have unique values; they aren't just repetitions of some third truth value found in most three-valued logics. All four powerboolean values are truly distinct.

Who is Gödel?

Gödel is best known for his incompleteness theorems, but he is also known for the fuzzy logic named after him: "Gödel–Dummett" logic (also known as superintuitionistic, intermediate, or minimum t-norm logic). I've talked about product fuzzy logic before, but this time I'd like to talk about Gödel fuzzy logic. The operations are as follows:

  • (x AND y) = min(x, y)
  • (x OR y) = max(x, y)
  • NOT(x) = 1 - x

These operations are defined over a continuous interval from 0 to 1, so I can't really give truth tables for these, but to cut to the chase, they would look very similar to the tables above.

What do they have to do with each other?

If we let NOT(x) = (if x < 0, then x, else 1 - x), extend the interval of Gödel fuzzy logic below zero (which forces OR to be redefined in terms of our new NOT), and assign the following fuzzy values to each powerboolean:

  • True = 1
  • Both = 1/2
  • False = 0
  • Null = -1/2

then Gödel fuzzy logic and powerboolean operations are the same.

Thursday, February 16, 2012

A survey of 'divmod'

Rounding happens, and it happens far more often than most people realize. As your web browser divides a web page (say, 917 pixels across) into 4 equal parts, it uses rounding to decide how many pixels wide each column is. Most of the time it is of little importance which direction numbers are rounded, but in some applications it can be very noticeable. The solution to this problem is a simple calculation involving divmod:

  divmod(917, 4) = (229, 1)

which means a web page that is 917 pixels across can be divided into 4 columns, each of which is at least 229 pixels across, with 1 pixel left over. This concludes our example.

If the numbers being divided are real numbers or integers, then there are many, many ways to round the division. If the numbers being divided are positive or unsigned integers, then there are fewer rounding modes (because rtn=rtz and rtp=rta), but there are still many ways. Although these modes may seem equivalent for mathematically positive numbers, they are different for fixed machine-size integers. For example, rtn is computationally the same on int32_t and uint32_t, as is rtp, but rtz and rta produce different computational algorithms on signed and unsigned types. Regardless of the rounding mode chosen, the divmod axiom states:

  divmod(dividend, divisor) = (quotient, remainder)

such that

  dividend = divisor*quotient + remainder
or put in other words, quotient is approximately (dividend/divisor), but which way it is rounded is up to the rounding mode. This is true for every variant in this article except for divmod_euc, which we will discuss later.

This brings us to our analysis of the two most common variants, usually called truncated division (also known as quo-rem) and floored division (also known as div-mod). The systems surveyed are: C (POSIX), OpenCL (a variant of C), libMPFR (the R stands for rounding), and Scheme (R5RS, R6RS, and an R7RS draft).

• "round towards zero"
• 'rtz' suffix (OpenCL)
• x86:idiv (CPU instruction)
• c99:% (C operator)
• c:mod[fl]? (POSIX)
• c:trunc[fl]? (POSIX)
• rnrs:truncate (Scheme)
• r5rs:quotient (Scheme)
• r5rs:remainder (Scheme)
• r7rs:truncate/ (Scheme)
• "rem() has same sign as dividend"

• "round towards negative infinity"
• 'rtn' suffix (OpenCL)
• c:floor[fl]? (POSIX)
• rnrs:floor (Scheme)
• r5rs:modulo (Scheme)
• r7rs:floor/ (Scheme)
• "mod() has same sign as divisor"

The problem with these two variants being so common is that they are easy to misuse. It is quite common to use (A % B) as an index into an array of B elements without first checking that A is positive. This introduces an out-of-bounds access when the program tries to get the -4th element of the array: that memory address may not be allocated yet, and even worse, if it is allocated, then it probably holds the wrong data! The core issue with both the rtz and rtn rounding modes is that they may give negative remainders. However, there is less possibility of error with mod_rtn when B is positive, because mod_rtn gives a nonnegative remainder in that case. C99 was the first version of C that actually specified that "%" was mod_rtz; before the 1999 version, that operator could also be mod_rtn, in which case it would be safe. Since C99 standardized on mod_rtz, however, we now know that "%" is unsafe, and therefore we should always test whether A is positive first. Another option is to use mod_euc, which is discussed later in this article.

The next variant doesn't have a name, but it is a division involving the ceiling() function. Maybe someday we'll have a name for it.

• "round towards positive infinity"
• 'rtp' suffix (OpenCL)
• c:ceil[fl]? (POSIX)
• rnrs:ceiling (Scheme)
• r7rs:ceiling/ (Scheme)

I couldn't find any implementations of functions for the next two variants, but there is a rounding mode constant in MPFR. Anyway, here they are:

• "round away from zero"

• "round to nearest, ties towards zero"
• (no examples)

There are also some rounding modes used in POSIX (also known as the Single UNIX Specification, the standard most C programs target; the two standards have been synchronized since 2001) that appear to depend on the environment for which rounding mode to use. The first is completely dependent on the current rounding mode, and the second is basically rn_ (round to nearest), except that ties are rounded according to the current rounding mode. The third seems uncommon in that C is the only language that uses it.

• "round ... default behavior"
• c:nearbyint[fl]? (POSIX)

• "round to nearest, ties ... default behavior"
• c:l?l?rint[fl]? (POSIX)

• "round to nearest, ties away from zero"
• c:l?l?round[fl]? (POSIX)

The next variants are very rare; they cannot be found in C:

• "round to nearest, ties towards negative infinity"
• "Round half down" (Wikipedia)
• r6rs:div0, r6rs:mod0 (Scheme)

• "round to nearest, ties towards positive infinity"
• "Round half up" (Wikipedia)
• elementary school rounding (in the U.S.)

The next variant, known as rte (round ties to even), is probably the fairest variant, in that it has no bias. Most of the variants above have a bias toward positive numbers, toward negative numbers, or toward zero. This variant is probably best known for being specified by the IEEE-754 floating-point standard. Its claim to fame is that it is unbiased: it does not introduce any tendency toward any particular direction.

• "round to nearest, ties to even"
• "Round half to even" (Wikipedia)
• 'rte' suffix (OpenCL)
• c:remquo[fl]? (OpenCL & POSIX)
• c:remainder[fl]? (OpenCL & POSIX)
• r7rs:round/ (Scheme)

The next variant is by far the most advanced divmod on the planet. It is so advanced that it cannot be described by a rounding mode. All of the variants above can be described as div_?(a,b) = round_?(a/b), where the ? is replaced with a rounding mode, but not this one. Its definition would be div_euc(a,b) = sign(b)*floor(a/abs(b)), but that doesn't even begin to describe its awesomeness. The reason div_euc is so amazing is that mod_euc is always nonnegative. Period.

• r6rs:div, r6rs:mod (Scheme)
• r7rs:euclidean/ (Scheme)

I am not the first to notice this. There is a paper titled "The Euclidean definition of the functions div and mod" (from 1992), and R6RS Scheme is also very insistent on this algorithm. In the paper, the author argues that "definitional engineering" is at fault for the difficulties in using other rounding systems, and that fields of computer science such as arithmetic coding, arbitrary-precision arithmetic, and programming language foundations would all benefit from (or in my opinion: require) the Euclidean definition, and yet almost no programming language (except Algol and Scheme) uses it.


  • OpenCL 1.1, Rounding Modes section.
  • C99/POSIX <fenv.h> and <math.h>.
  • Revised (5|6|7) Report on Scheme.
  • The libmpfr documentation.
  • Raymond Boute. "The Euclidean definition of the functions div and mod." ACM Transactions on Programming Languages and Systems, Vol. 14, No. 2, April 1992, pp. 127-144.

Saturday, January 14, 2012

On 'int128_t'

Every programming language has built-in integer types, both signed (representing mathematical integers) and unsigned (representing mathematical nonnegative integers). C compilers usually give these types fancy names like 'unsigned long long' and have a nasty habit of changing the sizes (and meanings) of these types on different platforms. The C type 'short' is usually 16 bits, 'int' either 16 or 32 bits, and 'long' either 32 or 64 bits, depending on the platform. This eventually led to the 'stdint.h' header in C99, which provides exact-size types that can be used across platforms. These are named 'int32_t', 'int64_t', 'uint32_t', 'uint64_t', and so on.

With stuff getting bigger, it's natural to ask the question: "Why 64?" The answer is generally that the largest integer type most hardware can deal with is 64 bits. Can we go higher? Of course! But how? In this article, I will show you how to define 'int128_t' and 'uint128_t' in C without any compiler hacks. They can be used as parameter types and return types of functions, and they don't require any special memory management or allocation, because they're not pointer types.

First, you might say we could just make an array type:

typedef int32_t int128_as_int32x4_t[4];
typedef int64_t int128_as_int64x2_t[2];

but which one do we pick? What we really need is a union of each of these, so we can decide later which array type to use. However, arrays cannot be used as return values from functions, while structs (and unions) can. So in order to have a single conventional type that can be used as a return value, we make a struct of a union of array types, as follows:
typedef struct int128_s {
  union int128_u {
    int8_t  as_int8[16];
    int16_t as_int16[8];
    int32_t as_int32[4];
    int64_t as_int64[2];
  } value;
} int128_t;
and wrap this type in a typedef. But how do we use these new integers? First of all, we need some way of constructing 'int128_t's, and in the spirit of 'stdint.h' we can make an 'INT128_C()' macro which expands to a constructed object of type 'int128_t'. We'll need a few functions for this:
int128_t int128_from_int(int from);
int128_t int128_from_str(const char *from);
int int_from_int128(int128_t from);
int str_from_int128(char *to, int to_size, int128_t from);
and we can use the second one to define the macro as:
#define INT128_C(x) int128_from_str(#x)
because the '#' operator tells the preprocessor to turn x into a string literal, which is then passed to int128_from_str, which returns an object of type 'int128_t'. For compilers that do not support compile-time constant expressions involving function calls, we can also define simpler macros as follows:
// Big-endian targets (most significant word first):
#define INT128_C64(a,b)\
(int128_t){.value = {.as_int64 = {a, b}}}
#define INT128_C32(a,b,c,d)\
(int128_t){.value = {.as_int32 = {a, b, c, d}}}

// Little-endian targets (word order reversed):
#define INT128_C64(a,b)\
(int128_t){.value = {.as_int64 = {b, a}}}
#define INT128_C32(a,b,c,d)\
(int128_t){.value = {.as_int32 = {d, c, b, a}}}
Note that because we use designated initializers ('.value' and '.as_int##'), this part requires a C99 compiler.


In order to use this integer type, we also need dozens of other functions, such as add, mul, sub, div, mod, and, or, xor, lsh, rsh, pow, etc., just to match the functionality usually associated with C integer types, and from there the possibilities are endless. A future article could revisit these functions. For now, though, I just wanted to bring focus to this integer type, especially considering how many commonplace datatypes fit into an 'int128_t', such as UUIDs and IPv6 addresses. We may need it sooner than we think.