Sunday, August 16, 2009

Fuzzy Scrollbars

After several weeks away from this blog, I would like to start up again with the exciting field of fuzzy logic. Most introductions to fuzzy logic talk about how everything is different and then survey the three major types: Gödel, product, and Łukasiewicz. I'm going to take a different approach and stick with product fuzzy logic for the remainder of this post.

Product Fuzzy Logic

It is important to fix a domain before discussing operations, and fuzzy logic is somewhat consistent in its use of the domain [0, 1], which includes 0, 1, and all the real numbers in between. Let's call this the Clamp datatype. There are obvious mappings between Clamp and Bool: 0 maps to False, 1 maps to True, and the in-betweens can be handled on a case-by-case basis (possibly by throwing an error).

The operations on the Clamp datatype would be both arithmetic and logic operations, including addition, subtraction, multiplication, division, conjunction (and), and disjunction (or). How would these be defined in product fuzzy logic? According to the product T-norm (the fancy name for how "and" is defined), the logic operations would be defined as follows (a Haskell sketch appears after the list):

  • not(x) = 1 - x
  • and(x, y) = xy
  • or(x, y) = x + y - xy
  • implies(x, y) = 1 - x(1 - y)
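
As a minimal sketch of how this might look in code, here are the four operations in Haskell, assuming a Clamp type that is simply a Double kept within [0, 1] by convention (the type alias and function names are mine, not from any library):

type Clamp = Double  -- assumed by convention to stay within [0, 1]

fNot :: Clamp -> Clamp
fNot x = 1 - x

fAnd :: Clamp -> Clamp -> Clamp
fAnd x y = x * y

fOr :: Clamp -> Clamp -> Clamp
fOr x y = x + y - x * y

fImplies :: Clamp -> Clamp -> Clamp
fImplies x y = 1 - x * (1 - y)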

One interesting thing about product fuzzy logic is that there are multiple ways of defining equiv, depending on what properties you are looking for. One possibility is that equiv(x, y) = and(implies(x, y), implies(y, x)), which would mean that it has to be defined as equiv(x, y) = 1 - x - y + 3xy - x²y - xy² + x²y². This is quite a mess, and the same effect (in terms of mapping to Bool) can be accomplished by the definition equiv(x, y) = 1 - x - y + 2xy.
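
Continuing the sketch above, both candidates can be written down and compared; they agree on the Bool endpoints 0 and 1, but differ in between:

fEquiv :: Clamp -> Clamp -> Clamp
fEquiv x y = fAnd (fImplies x y) (fImplies y x)
-- expands to 1 - x - y + 3xy - x²y - xy² + x²y²

fEquiv' :: Clamp -> Clamp -> Clamp
fEquiv' x y = 1 - x - y + 2 * x * y
-- fEquiv 1 1 == fEquiv' 1 1 == 1,
-- but fEquiv 0.5 0.5 == 0.5625 while fEquiv' 0.5 0.5 == 0.5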

Another consideration for operations on the Clamp datatype is that the output of each arithmetic operation must be clamped (with min and max) to fall within the range [0, 1]. This may have the unfortunate side effect of many run-time errors or unexpected mathematical results. If all conversions to and from the Clamp datatype are explicit, then this is not a problem.
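
One way to keep the saturation explicit is to route every arithmetic result through a single conversion function; this is a sketch, not a prescribed design:

clamp :: Double -> Clamp
clamp = max 0 . min 1

cAdd, cSub :: Clamp -> Clamp -> Clamp
cAdd x y = clamp (x + y)  -- cAdd 0.7 0.6 == 1.0, not 1.3
cSub x y = clamp (x - y)  -- cSub 0.2 0.5 == 0.0, not -0.3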

Scrollbars

Just as the String datatype is the basis for the text field widget, the Clamp datatype can be thought of as the basis for the scrollbar widget. Scrollbars are usually tied to a viewport, which can contain an image, a document, or a webpage. Usually the viewport has all the information that the scrollbar needs to do its job: the height and width of the entire contents, and the height and width of the part that is showing. Given this information, the only additional state the scrollbar needs to store is how far along it you are, as a fraction, which is exactly what the Clamp datatype provides.
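
A vertical scrollbar might then be sketched like this, where the viewport supplies the geometry and the scrollbar itself stores only a Clamp (the record and function names here are hypothetical):

data Viewport = Viewport
  { contentHeight :: Double  -- height of the whole document
  , visibleHeight :: Double  -- height of the part showing
  }

-- 0 means scrolled to the top, 1 means scrolled to the bottom.
scrollOffset :: Viewport -> Clamp -> Double
scrollOffset vp pos = pos * (contentHeight vp - visibleHeight vp)

For example, scrollOffset (Viewport 2000 500) 0.5 places the top of the view 750 pixels into the document.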

Colors

Why is it that such a fundamental datatype like Clamp is not found in more programming languages? Usually it is defined as equivalent to Float or Double and intended to be used with values between 0.0 and 1.0, but no strict validation or bounds checking is done to ensure that this is so. One example of this kind of usage is in OpenGL, which we will talk about next.

In OpenGL, one of the datatypes that is pervasive throughout the API is GLclampf (the clamped float type, which is also where I got the name from). This is likewise a value between 0 and 1, and OpenGL describes various texture and image values in terms of it; for example, a color can be given as a group of GLclampf components.

With so many applications, the Clamp datatype is a prime example of a design goal that I've been trying to find a name for. When abstractions are too low-level, you may have few abstractions (like bit and byte), but using them is a nightmare (just look at assembly code). When abstractions are too high-level, you may be able to use each one easily (like drag-and-drop database form designers), but the sheer number of abstractions makes for a steep learning curve and puts high pressure on discovering the abstractions other people have made (for example: some game libraries have 3 types of fog and dozens of types of meshes, depending on whether you want to add skeletons or water ripples, etc.). The design goal I'm trying to achieve is a balance of these two extremes: few abstractions, each easy to use. I think the Clamp datatype meets both of these requirements.

From fuzzy logic to colors to scrollbars, Clamp seems to form an unrecognized basis for computation that may be useful in the future.

Monday, June 15, 2009

Embedded OpenGL

OpenGL is a graphics library that has provided a foundation for many high-level graphics libraries built on top of it. Because it is so versatile and low-level, it has stood the test of time and has evolved to handle the changing demands that library authors have placed on it. As demands have changed, new functions have been added, and one trend across all versions of OpenGL is that more specific functions are being replaced by more general functions. For example, the functions glColorPointer, glNormalPointer, and glTexCoordPointer have been augmented with glVertexAttribPointer, which can be used to replace all of them. This has led to a completely different view of OpenGL, and to the development of a smaller profile of OpenGL functions called the Embedded Specification, or OpenGL-ES for short. One side effect is that the more general functions are increasingly similar to an object-oriented library, which means there are fewer and fewer functions that do not fit into an object model of OpenGL. If we partition the functions found in OpenGL-ES 2.0 into an object-oriented hierarchy, we arrive at the following classes (an FFI sketch of this object flavor follows the list):

  • Blend
  • Buffer -- pixel buffers, vertex buffers, and index buffers
  • Capability -- abstract class with glEnable and glDisable
  • Clear -- clearing the background to a solid color
  • Color
  • CullFace -- consists of glCullFace and glFrontFace
  • Depth
  • FrameBuffer
  • Get -- reading global state with functions like glGetString etc.
  • Mipmap -- utilities for the Texture class
  • Program
  • RenderBuffer
  • Shader -- vertex and fragment shaders
  • Stencil
  • Texture
  • Uniform -- uniform variables
  • VertexAttrib -- vertex attributes
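
To illustrate, here is a hedged Haskell FFI sketch of how the Texture group of functions pairs up with an object-like interface. The C entry points and their signatures are the standard ones; the wrapper names and the packaging are mine:

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign (Ptr, alloca, peek, with)
import Foreign.C.Types (CInt, CUInt)

-- Standard entry points (GLsizei marshals as CInt, GLuint/GLenum as CUInt).
foreign import ccall unsafe "glGenTextures"
  glGenTextures :: CInt -> Ptr CUInt -> IO ()
foreign import ccall unsafe "glBindTexture"
  glBindTexture :: CUInt -> CUInt -> IO ()
foreign import ccall unsafe "glDeleteTextures"
  glDeleteTextures :: CInt -> Ptr CUInt -> IO ()

-- The "object" is nothing more than the integer name GL hands back.
newtype Texture = Texture CUInt

newTexture :: IO Texture
newTexture = alloca $ \p -> do
  glGenTextures 1 p
  fmap Texture (peek p)

deleteTexture :: Texture -> IO ()
deleteTexture (Texture t) = with t (glDeleteTextures 1)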

After removing these functions from the OpenGL-ES 2.0 API, we are left with the miscellaneous functions:

  • glDrawArrays(mode, first, count)
  • glDrawElements(mode, count, type, indices)
  • glFinish()
  • glFlush()
  • glLineWidth(width)
  • glPolygonOffset(factor, units)
  • glSampleCoverage(value, invert)
  • glPixelStorei(name, param)
  • glReadPixels(x, y, width, height, format, type, pixels)
  • glScissor(x, y, width, height)
  • glViewport(x, y, width, height)

This says a lot about OpenGL-ES, especially the fact that glDrawPixels has been removed. This means the only way to copy pixels into an OpenGL-ES context is to apply a texture with the pixel data to a rectangle. However, OpenGL-ES also forbids GL_QUADS, so you have to use GL_TRIANGLE_FAN to accomplish the same result. glReadPixels, glScissor, and glViewport are related in that they all accept a rectangle as an argument.
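
Continuing the FFI sketch from above, the fan trick comes down to a single draw call; this assumes the four corner vertices are already in a bound vertex buffer, and the constant is the value from the GL headers:

-- void glDrawArrays(GLenum mode, GLint first, GLsizei count)
foreign import ccall unsafe "glDrawArrays"
  glDrawArrays :: CUInt -> CInt -> CInt -> IO ()

gl_TRIANGLE_FAN :: CUInt
gl_TRIANGLE_FAN = 0x0006  -- from the GL headers

-- With corners (-1,-1), (1,-1), (1,1), (-1,1) uploaded in fan order,
-- two triangles cover the same rectangle GL_QUADS would have.
drawTexturedRectangle :: IO ()
drawTexturedRectangle = glDrawArrays gl_TRIANGLE_FAN 0 4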

Although OpenGL-ES allows one to do everything that can be done in OpenGL 2.0, it strongly encourages the more general interfaces, like shaders and buffer objects, since all of the fixed functionality has been removed. This means it should be possible to reimplement all of OpenGL in OpenGL-ES. However, since ES is still rather new, there have not been any such implementations of the fixed functionality in terms of shaders or the like. There are some tutorials that show how to duplicate simple functionality such as colors in terms of shaders, but a complete reimplementation is still nowhere to be found.

Monday, June 8, 2009

Semantic MathML

I don't know what they were thinking, but there is a much better way to encode MathML in RDF. One might be tempted to assign an RDF property to every element in MathML, but that wouldn't be the OpenMath way. Since MathML3 is getting more and more dependent on OpenMath, it seems appropriate to encode as much as possible using this combined system. MathML3 is split up into several sections: Presentation, Pragmatic Content, and Strict Content. The last one requires only 11 XML elements to be understood by MathML3 processors, namely: m:apply, m:bind, m:bvar, m:csymbol, m:ci, m:cn, m:cs, m:share, m:semantics, m:cerror, and m:cbytes. This provides for a great economy of thought, and a chance to easily define a total mapping from MathML to RDF. MathML3 already defines a mapping from MathML2 to Strict Content MathML3, so this is the only set of elements we need to consider. These are the prefixes we will use:

@prefix ari: <http://www.openmath.org/cd/arith1#> .
@prefix fns: <http://www.openmath.org/cd/fns1#> .
@prefix sts: <http://www.openmath.org/cd/sts#> .
@prefix sm: <http://example.com/SemanticMath/> .
@prefix m: <http://www.w3.org/1998/Math/MathML> .

These are the rdfs:Class's we will define:

  • sm:Content (all MathML Content)
  • sm:Number (for <cn/>, subclass of sts:NumericalValue)
  • sm:String (for <cs/>, subclass of xs:string)
and these are the rdf:Property's we will define:
  • sm:apply :: Property * sm:Content
  • sm:applyTo :: Property * (List sm:Content)
  • sm:bind :: Property * (List sm:Content)
  • sm:bindOp :: Property * sm:Content
  • sm:bindIn :: Property * sm:Content
  • sm:error :: Property sm:Error sm:Content
  • sm:errorWas :: Property sm:Error (List sm:Content)
Strict Content MathML3 can be translated with the following mapping (each source pattern is followed by its RDF translation):
  • <m:cn> NUM </m:cn> ⇒ "NUM"^^sm:Number
  • <m:cs> TEXT </m:cs> ⇒ " TEXT "^^sm:String
  • <m:ci> Name </m:ci> ⇒ a blank node identifier (like _:Name)
  • <m:csymbol> Symbol </m:csymbol> ⇒ a URI (http://www.openmath.org/cd/arith1#plus for <m:csymbol cd="arith1">plus</m:csymbol>, or the value of the definitionURL for MathML2)
  • <m:share/> ⇒ a URI
  • <m:cbytes> DATA </m:cbytes> ⇒ " DATA "^^xs:base64Binary
  • <m:cerror> Symbol Content* </m:cerror> ⇒
    [] rdf:type sm:Error ;
       sm:error Symbol ;
       sm:errorWas LIST(Content) .
  • <m:apply> Symbol Content* </m:apply> ⇒
    [] sm:apply Symbol ;
       sm:applyTo LIST(Content) .
  • <m:bind> Symbol Vars* Content* </m:bind> ⇒
    [] sm:bindOp Symbol ;
       sm:bind LIST(Vars) ;
       sm:bindIn LIST(Content) .

This allows all of Strict Content MathML to be encoded in a set of RDF triples. In order to have the full semantics of MathML and OpenMath, there would have to be an OM-interpretation, OM-entailment, and so on, just as there is with every other application of RDF. Although I have not worked out the details of this, an OM-interpretation of an RDF graph that contains these properties would have to treat sts:mapsTo specially, and maybe map it to an appropriate rdfs:Class. As an example of how this might be used, the Haskell code (\x -> x + 2) would be translated into RDF using the properties listed above as follows:

_:f sm:bindOp fns:lambda ;
    sm:bind LIST(_:x) ;
    sm:bindIn LIST(
      [ sm:apply ari:plus ;
        sm:applyTo LIST(_:x "2"^^sm:Number) ]) .

Now that RDF has lambdas, the possibilities are endless!

Sunday, June 7, 2009

Subtly Different Linked Lists

If you don't know what a linked list is, then you probably shouldn't be reading this. However, the idea is very simple: a linked list is a list constructed from pairs (smaller lists of length two). So a list of length 3 would be constructed as (A, B, C) = (A, (B, (C, Nothing))), where Nothing indicates that we have reached the end of the list. Most programming languages have some kind of linked list datatype. C++ has the list<T> type in the STL, Lisp has the list type, Haskell has the [T] type, and RDF has the rdf:List type. We will not consider contiguous sequences here, so C types will be left for a future article.

Without any type restrictions, such a pair need not form a list. For example, we could make the structure ((A, B), C), but we could not interpret it as a list. This is exactly how the Common Lisp cl:cons constructor works, and the Common Lisp HyperSpec calls anything constructed with cl:cons a list. The special case where the second element of a cl:cons is either another cl:cons or cl:nil is called a proper list. This restriction yields a subtype which can always be interpreted as a list, and it corresponds to the lists found in scripting languages such as Perl, Python, and Ruby. RDF lists can also be described as proper lists, since an RDFS-interpretation requires that the range of rdf:rest is rdf:List, so any attempt to make an improper list with rdf:rest results in an inconsistent RDF graph.

With the restriction that the second part of a pair is a list, we obtain so-called heterogeneous lists, because the members of the list (encoded as the first part of each pair) can be of any type. If we also enforce the restriction that each member of the list is of the same type, then we obtain what are called homogeneous lists, for obvious reasons. This subtype is what is found in stricter languages, such as list<T> in C++ and [T] in Haskell. This is an overview of the different kinds of lists we have talked about (a Haskell sketch follows the list):

  • Improper list (a, (b, c))
  • Proper list (a, (b, (c, ())))
    • Heterogeneous list [1, "message"]
    • Homogeneous list [1, 2, 3]
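
In Haskell the distinction is visible in the types; an illustrative sketch (plain Haskell cannot even type a heterogeneous list without extensions, which is rather the point):

-- An untyped pair, like cl:cons: the second slot may be anything,
-- so improper lists such as (1, (2, 3)) are allowed.
data Pair a b = Pair a b

improper :: Pair Int (Pair Int Int)
improper = Pair 1 (Pair 2 3)

-- Requiring the second slot to be another list (or Nil) gives proper
-- lists, and the single type parameter makes them homogeneous too.
data List a = Nil | Cons a (List a)

proper :: List Int
proper = Cons 1 (Cons 2 (Cons 3 Nil))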

We have talked about lists before, restricting RDF's heterogeneous lists to obtain homogeneous lists. In this article, however, we are going to generalize RDF's proper lists to obtain improper lists as well. In order to do this, we make a distinction between cl:Cons the class and cl:cons the constructor. First we need rdf:List rdfs:subClassOf cl:Cons so that they can work together; then cl:Cons represents both improper lists and proper lists, and rdf:List represents proper lists only. To represent an improper list, one would have to be an instance of cl:Cons but not rdf:List. For a heterogeneous list, one would omit the ex:listType property, and for a homogeneous list, one would require its presence. This covers all the different list types discussed above.

Saturday, June 6, 2009

HTML Script Datatypes

While wandering through the W3C specification for rdf:text, I realized that a similar notation could be used for script objects, but there were too many pieces missing from the standards to connect everything together. So I will try to connect these ideas as best I can. HTML 4.01 defines multiple datatypes, some of which are redefined by CSS, such as Color and Length. The one that caught my attention was Script, which could be used to represent scripts written in different languages.

UNIX has long had this datatype built in to almost every shell interpreter. The so-called hash-bang mechanism allows programs written in any language to run without compilation; they are passed to the appropriate program for interpretation. While RDF has two types that are very similar, it doesn't have a type for scripts or functions. Using a mixture of JavaScript and HTML datatypes, we can express a function definition in RDF as follows:

@prefix hdt: <http://www.w3.org/TR/html4/types.html> .
@prefix js: <urn:ecmascript:> .

_:f rdf:type js:Function .
_:f js:Call """
function () {
this.done = true;
} @application/ecmascript"""^^hdt:Script .

Here you can see the @ notation is used to separate the script data from the media type, just as it is in the rdf:text datatype. This would allow a mixture of data (RDF graphs) and code (JS scripts), which are the beginnings of a full-fledged programming language.

Wednesday, June 3, 2009

XML Infoset Models

The XML Information Set (Infoset) defines a number of classes that can be used to model XML documents, and to some extent this is exactly the model used in other W3C specifications. However, some specifications use more or less than this set of classes, and it is insightful to compare them in order to find common subsets and exhaustive supersets. In the comparison below, "Info" means XML Infoset, "Path" means the XPath/XQuery Data Model, and the abbreviations Y = yes/required, N = no/unspecified, and O = optional are used.

Class Name              XML  DOM  EXI  Info  Path
Attribute                Y    Y    Y    Y     Y
CDATASection             Y    Y    N    N     N
CharacterData            Y    Y    Y    Y     N
Comment                  Y    Y    O    Y     Y
Document                 Y    Y    Y    Y     Y
DocumentFragment         N    Y    Y    N     N
DocumentType             Y    O    O    Y     N
Element                  Y    Y    Y    Y     Y
Entity                   Y    O    O    Y     N
EntityReference          Y    O    O    Y     N
Namespace                Y    O    O    Y     Y
Notation                 Y    O    N    Y     N
ProcessingInstruction    Y    O    O    Y     Y
Text                     Y    Y    Y    Y     Y

Technically, XML Infoset does not include a CharacterData class, but it includes a Character class that can be made into a list, which would be isomorphic to the CharacterData class. DOM only requires the optional classes if hasFeature("XML", "1.0") is implemented, so for all XML DOM implementations these are not optional. Also, EXI requires the Namespace class, even though preserving the namespace prefix strings is optional. XPath has the smallest model, which only supports 7 of the above classes.

It is also interesting to note that Comments, Namespaces and ProcessingInstructions are not forbidden by any of the standards considered above. Also, as you can see, the only classes required by all W3C models are: Attributes, Elements, Documents, Namespaces and Text. Why all the bloat?

Saturday, May 30, 2009

Set Exponentiation

Exponentiation is the operation of multiplying the same element by itself a given number of times. Defined by the relation y^x = y·y^(x-1), it can be evaluated by multiplication until y^1 is encountered, which is just y. It can then be extended to rational numbers by solving an equation, then to the real numbers by the limit of a sequence, then to complex numbers by Euler's formula. Complex exponentiation is one of the most pervasive operations in all of mathematics. It can express all trigonometric functions such as sin, cos, tan, sec, csc, cot, and so on. It can express powers, polynomials, power series, Fourier series (with the help of addition), and much more. Simply put, it's everywhere.

Exponentiation can even be extended to matrices with the help of the equation Y^X = exp(log(Y)·X) for square matrices, where exp and log are defined in terms of the usual power series. Exponentiation can also be extended to sets by noticing that the cardinalities of the domain |X| and codomain |Y| of a function give |Y|^|X| = |Y^X| when exponentiated. This is the idea behind the notation of a power set (denoted 2^X), which can be thought of as the set of all functions from X to a boolean (which represents whether or not it is contained). This is illustrated by the following maps in JSON syntax (a Haskell enumeration follows the list):

  • {"a": 0, "b": 0, "c": 0}
  • {"a": 0, "b": 0, "c": 1}
  • {"a": 0, "b": 1, "c": 0}
  • {"a": 0, "b": 1, "c": 1}
  • {"a": 1, "b": 0, "c": 0}
  • {"a": 1, "b": 0, "c": 1}
  • {"a": 1, "b": 1, "c": 0}
  • {"a": 1, "b": 1, "c": 1}

Clearly, the number of such functions is 8, the cardinality of the domain is 3, and the cardinality of the codomain is 2, and we all know 2^3 is 8. Following this to its logical conclusion, we arrive at a justification for currying. Currying is the observation that a function of two variables returning a value is isomorphic to a function of one variable that returns a function of one variable that returns a value. In set-theory notation this would be Z^(X×Y) = (Z^Y)^X, and in Haskell notation the latter would be X -> Y -> Z. So just as complex exponentiation is everywhere in mathematics, so too is set exponentiation everywhere in most computer programming languages, whether or not they recognize the concept of currying. With this in mind, if the theory of algebraic datatypes includes type exponentiation, then pretty much all functions are accounted for. This is the approach taken by the Z notation specification language, which defines functions as equivalent to their relations; in other words, f(x) = y iff (x, y) is a member of f. To illustrate how set exponentiation characterizes many concepts in programming languages, here is a list of common structures:

  • 2^X - Predicates - functions that return a boolean.
  • X^2 - If/Then/Else - functions from booleans to a value.
  • X^n - Switch/Case - functions from an enumeration to a value.
  • Y^X - Unary Functions - functions from any value to any value.
  • X^X - Unary Operations - functions from a value to a value.
  • X^(X^2) - Binary Operations - functions from two values.
  • X^(X^n) - n-ary Operations - functions from many values.
  • 2^(X^2) - Binary Relations - also called multi-functions.
  • 2^(X^n) - n-ary Relations - also called multi-functions.

This is quite a vast array of functions, which covers most basic structured programming concepts like if/then/else and switch/case. But there is one more: X^(X^X). What does this stand for? Well, conceptually, it takes a unary operation over X and gives a value from X. This is exactly what happens when you evaluate a function at a value, which means the "evaluate at 0" function is in X^(X^X). What's more, this is the beginning of a set-theoretic Tetration (my other website), which I could talk about for hours, but I will save it for another time.
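
Both the currying isomorphism and the "evaluate at 0" function are directly expressible in Haskell; a small sketch, specializing X, Y, and Z to Int:

-- Z^(X×Y): a function taking a pair, and (Z^Y)^X: its curried form.
addUncurried :: (Int, Int) -> Int
addUncurried (x, y) = x + y

addCurried :: Int -> Int -> Int
addCurried = curry addUncurried  -- curry/uncurry witness the isomorphism

-- A member of X^(X^X): it consumes a unary operation and produces a value.
evalAtZero :: (Int -> Int) -> Int
evalAtZero f = f 0

-- evalAtZero (+ 2) == 2, and evalAtZero negate == 0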

Nonstandard Namespaces

XML namespaces are inherently vendor-neutral, and as such, anyone can define one, even me. So in order to help standardize XML dialects that do not define a namespace, I have put together a list of document types that really should have a namespace, but have not taken the time to define one.

r6rs (Scheme R6RS)
http://www.r6rs.org/final/html/r6rs/r6rs.html
r5rs (Scheme R5RS)
http://www.schemers.org/Documents/Standards/R5RS/
cl (Common Lisp)
http://www.lispworks.com/documentation/HyperSpec/
cs (CS, CEL)
http://www.crystalspace3d.org/2000/cs_world
x3d (X3D, VRML)
http://www.web3d.org/specifications/x3d-namespace
pl (Apple PList)
http://www.apple.com/DTDs/PropertyList-1.0.dtd

There really isn't much to say about these. Some of them are based on resources which describe the XML dialects, whereas others (like x3d-namespace) are version-neutral namespaces (some X3D implementors have used x3d-3.0.xsd or x3d-3.2.xsd, files which actually exist). If anyone thinks some of these are poor choices, then let me know, and I'll change it.

Webizing Common Lisp

Lisp is a very old language, but it is by no means old in the derogatory sense. Being one of the oldest languages (second only to Fortran) has given it time to mature and grow into a language capable of amazing things. It has been said in many places that Lisp and XML are similar in that they are both data (and if you include XAML and XSLT, code too). I would argue, though, that Common Lisp has vastly more features. However, to make a true comparison, we have to webize Lisp. At the very least, this requires a URI for every symbol. We can start by giving a namespace to all symbols in the COMMON-LISP package.

http://www.lispworks.com/documentation/HyperSpec/

For convenience, we will use the cl prefix for this namespace. We can now start relating Web standards and Lisp, or even writing ontologies using Lisp functions.

Let's compare lists, which are a fundamental part of Lisp. In Lisp, a cons (x . y) would correspond to the RDF graph given by [ a rdf:List ; rdf:first _:x ; rdf:rest _:y ], and cl:nil would correspond to rdf:nil. We can do this with almost all Lisp syntax, except perhaps for the syntax based on the hash (#) symbol. For example, #c(3 4) would correspond to the MathML <cn type="complex-cartesian">3<sep/>4</cn>, so the mapping can get complicated.

Considering such a mapping between Lisp and XML has many advantages. One thing they both have in common is the idea of homogeneous data; however, when viewed as two separate syntaxes, they are incompatible, because XML would be a string in Lisp, and Lisp would be a string in XML, losing all structure. When considering such a mapping, it may seem equally advantageous to express XML in Lisp and, conversely, Lisp in XML. If only it were a bijection...

Friday, May 22, 2009

The Web Metamodel

The Object Management Group (OMG) likes to consider classes in a 4-level model, in which there is data, metadata (models), metametadata (metamodels), and so on. This is way too confusing, so it is fortunate that the W3C's standards only take up roughly 3 levels (much like Lisp). The Resource Description Framework (RDF) defines 3 metaclasses that do not fit nicely into OMG's architecture, so they span several modeling levels. This can be seen in the image below, where containment indicates an rdfs:subClassOf relationship and the arrows indicate an rdf:type relationship.

Although there is no indication in the standards that owl:Class is an instance of rdfs:Class, it is a logical consequence, because owl:Class is a subclass of rdfs:Class, and rdfs:Class is an instance of itself. The same could be said of rdfs:Datatype, though there it is explicitly stated that rdfs:Datatype is both a subclass and an instance of rdfs:Class. There was also an issue about this, which decided that the reason for owl:Class is to be distinct from rdfs:Datatype, not some cyclic dependency. So to keep the model simple, and to minimize cycles, we can ignore the fact that rdfs:Datatype is an instance of rdfs:Class, because this is implied by the fact that rdfs:Class is an instance of itself.

To overview some of the triples in this model, the standards say

  • rdfs:Class rdf:type rdfs:Class .
  • owl:Class rdfs:subClassOf rdfs:Class .
  • rdfs:Datatype rdfs:subClassOf rdfs:Class .
  • rdfs:Class rdfs:subClassOf rdfs:Resource .
  • rdfs:Resource rdf:type rdfs:Class .

Notably, we have ignored the rdf:Property class, which is part of the model level, not the metamodel level. So the image above is an accurate depiction of the Web metamodel, which consists of only four metaclasses: rdfs:Resource, rdfs:Datatype, rdfs:Class, and owl:Class. Aside from owl:DataRange, this is a fairly complete picture of the Web metamodel. For beginners, the distinctions between these metaclasses (five, counting owl:DataRange) are essential before more in-depth discussions take place. If one has any confusion about these distinctions, then nothing else about OWL will make any sense.

Monday, May 18, 2009

Semantic Binary Data

There seem to be quite a few things missing from XML Schema, RDF, and OWL; for example, a single unified type for binary octet-streams. The distinction between the two types used for binary octet-streams has caused more than a few issues during the implementation of these technologies. One indication is that OWL 2.0 is much more complex than OWL 1.0; another is a note in OWL 2 which discusses the consequences of the distinction between xs:base64Binary and xs:hexBinary. Both of these types are defined to have identical value spaces but different lexical spaces. Their mutual value space is defined as all finite-length sequences of octets (numbers between 0 and 255).

If the Web metasystem is to understand anything, it must be able to understand octet-streams, since everything else is described in terms of them. Text can be viewed as an octet-stream. Pictures, movies, and music files can all be described as octet-streams, so unless XML, RDF, and OWL allow clear expression of the concept of binary data, the application of these technologies will always be limited to vague and abstract knowledge. There are many ways that a unified binary type could be brought to the XML world. One would be to make an owl:unionOf type that combined the two. Another option would be to use the rdf:List type and restrict the types of the members of the list to octets. Since the first option is trivial and does not provide much structural knowledge about octet-streams, we will not consider it. The second approach does add structural knowledge, and so is more appropriate for allowing the Web to understand more about octet-streams.

Neither RDF nor OWL allows restricting lists to a certain member type, so in order to make a class of lists of octets, we must first have the ability to make lists of a type. To this end, we define a new rdf:Property called ex:listType. How would this property be used? To understand this, we must first understand how RDF handles higher-order types. For example, properties in RDF can be thought of as higher-order types with two parameters. Using a Haskell-like syntax, this would mean (x :: Property a b) corresponds to the triples

_:x rdf:type rdf:Property .
_:x rdfs:domain _:a .
_:x rdfs:range _:b .

which means the appropriate way to encode list types in RDF, such as (x :: List a), would be with the triples

_:x rdf:type rdf:List .
_:x ex:listType _:a .

With this new construct, a unified ex:binary type could easily be defined as (List xs:unsignedByte). Or in simpler terms, the triple (_:x rdf:type ex:binary) would be equivalent to the triples

_:x rdf:type rdf:List .
_:x ex:listType xs:unsignedByte .
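
The Haskell analogue of this construction is simply the list type instantiated at the octet type; the alias and the example value below are, of course, illustrative:

import Data.Word (Word8)

-- (List xs:unsignedByte) corresponds to instantiating [] at Word8.
type Binary = [Word8]

gifMagic :: Binary
gifMagic = [0x47, 0x49, 0x46]  -- the bytes spelling "GIF"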

This would allow OWL ontologies about file formats, stream protocols, multimedia files, and much more. Using these two higher-order types as a pattern, it seems all higher-order types can be encoded this way. And since RDF is abstract enough to represent such higher-order types, it seems that Web technologies are on the right track. RDF and OWL may be the most abstract way of expressing modeling ideas, even more abstract than systems like CORBA, IDL, and UML. The fact that they allow expressing higher-order types such as those found in Haskell is promising. Perhaps the goal of model compilation (the OMG kind) is much closer than we think. However, the bloat of UML does more to complicate model translation than to simplify it.

Perhaps the simplicity of RDF is all we need.

Monday, May 11, 2009

Water Shortages

We have water everywhere. Water flows off mountains, collects in lakes, and comprises 70% of the Earth's surface. It is hard to imagine our world without water. So when the idea of it running out is mentioned, it seems so funny that we pay little attention. If we listen, then we find the signs of its disappearance are also everywhere.

Places all across the globe are experiencing short-term water shortages (for example, Putrajaya, Malaysia; Hechi, Guangxi, China; San Diego, California, US; Roane County, West Virginia, US; Montgomery County, Maryland, US; Durham, North Carolina, US). Nations such as Australia and Spain are already noticing significant long-term water shortages. As one report puts it:

The battles of yesterday were fought over land, they warn. Those of the present center on oil. But those of the future -- a future made hotter and drier by climate change in much of the world -- seem likely to focus on water, they say.

Two great examples of works that discuss a future without water are this Spanish poem (also in English and video) and the legacy of John Titor. While John Titor made several predictions that turned out to be false (for example, he said Y2K would be devastating), he also predicted that water would become more scarce. This seems to be exactly what we are starting to see in many cities today.

Although a total water shortage is still very far away, water contamination may be a problem sooner. It is already driving many species to the brink of extinction (for example, fish and frogs). Modern apathy may compel one to dismiss these, since they're just frogs, but ask yourself: what can we learn from frogs? Frogs (and most amphibians) are both land- and water-based creatures, so they represent the future health of both fish and mammals. Since humans are mammals, that means we're next! If that wasn't enough, you might also be interested to know that the science of cryonics (freezing people and bringing them back to life) rests on the study of tree frogs. Any selfish, ego-centric person would love to get frozen at 80 and wake up in a hundred years when there's a cure waiting just for them. So regardless of your disposition, selfish or selfless, clean water is important.

We find that there is a growing awareness that water will not always be as plentiful as it is now. The lessons given by John Titor and tree frogs both indicate we should reevaluate our water situation soon. Failing to do so would mean its scarcity surpassing that of both land and oil. Just as oil has been called Black Gold, it may be inevitable that in our near future all power struggles will be for Liquid Gold.

Images Copyright (c) 2007 Associated Press.

Saturday, May 9, 2009

The Web Metasystem

A recent TEDtalk given by Kevin Kelly came to my attention, and it has a lot to do with the Web. In it, he gives a very interesting view of the Web, making the analogy that the number of transistors supporting it is about the same as the number of neurons in the human brain. Kevin Kelly says:

There is only One machine.
The web is its OS.
All screens look into the One.
No bits will live outside the web.
To share is to gain.
Let the One read it.
The One is us.

The most interesting part is when he talks about the complexity of the Web over time. If transistors are akin to neurons, then in 10 years the Web will be 2 brains, in 20 years it will be 4 brains, and so on. The human brain contains about 100 billion neurons, each of which is much more complex in nature than a transistor. I think it would take on the order of 10 transistors to approximate a single neuron, making his estimates inaccurate.

In the long run, such an analogy can be viewed in the context of metasystem transition theory, in which evolution is not continuous as it is in Darwinian evolution, but happens in discrete steps marked by metasystem transitions. In this model, regardless of how many transistors the Web contains at any point in time, the evolutionary step in which the Web becomes a brain is when a control system emerges. Since the Web was designed by us and continues to be restructured by us, there is an obvious answer to what the control system behind the Web is: everyone. The Web is us.

Friday, May 1, 2009

Haskell - Encodings

The ASCII character set has withstood the test of time. However, as the use of computers and the internet becomes global, it is only natural that people speak in their native language. To this end, the Unicode character set was developed to meet this need. Since Unicode uses far more than 255 code points, there are many ways of mapping its much larger 4-byte space into a sequence of one or more bytes. The two most popular families of schemes for doing this are the UTFs (UTF-8 and UTF-16) and Guóbiāo (GB2312 and GB18030). In particular, UTF-8 is a very organized and methodical scheme, which makes it well suited for purposes other than text encoding.

XML has become terrifyingly popular in academic and commercial circles, and while the overhead it incurs is usually negligible, there are some who want to use an XML model while keeping a small memory footprint. For this, the W3C has designed a binary XML format called EXI for the transmission of XML documents in binary form. Parts of this standard are not very nice (which is out of the scope of this post), whereas other parts are quite ingenious, like the integer and date-time formats.

Here are some examples of EXI (efficient XML interchange) unsigned integers. EXI is little-endian, so the As are the low bits and the Cs are the high bits.

  • 0b0AAAAAAA (EXI encoding for 1-byte)
  • 0b1AAAAAAA,0b0BBBBBBB (EXI encoding for 2-bytes)
  • 0b1AAAAAAA,0b1BBBBBBB,0b0CCCCCCC (3-bytes)

Here are some examples of UTF (Unicode transformation format) characters. UTF-8 is big-endian, so the Cs are the low bits and the As are the high bits.

  • 0b0AAAAAAA (UTF-8 encoding for 1-byte)
  • 0b110AAAAA,0b10BBBBBB (UTF-8 encoding for 2-bytes)
  • 0b1110AAAA,0b10BBBBBB,0b10CCCCCC (3-bytes)

In the C programming language, all structs can be thought of as parsers. In the Haskell programming language, one can write parsers quite easily with the help of a library called Parsec, which stands for parser combinators. Suppose we have a Bit datatype with the constructors On and Off, and suppose we have definitions for the following functions in Haskell:

bit :: Bit -> Parser Bit            -- match one specific bit
bitstring :: [Bit] -> Parser [Bit]  -- match an exact bit sequence
manyBits :: Int -> Parser [Bit]     -- read a fixed number of bits
oneBit :: Parser Bit                -- read any single bit

These parsers would allow writing versions of EXI and UTF-8 parsers without any knowledge of characters. An EXI parser can be described as follows

exi :: Parser [Bit]
exi = do
  firstBit <- oneBit
  restBits <- manyBits 7
  case firstBit of
    Off -> return restBits
    On  -> do highBits <- exi
              return (highBits ++ restBits)

and a UTF-8 parser can be described as

-- Parse one continuation byte: 0b10BBBBBB.
utf8continue :: Parser [Bit]
utf8continue = do
  bitstring [On, Off]
  manyBits 6

-- try (from Parsec) backtracks when a longer prefix pattern only
-- partially matches, so the next alternative can still be attempted.
utf8 :: Parser [Bit]
utf8 =
      do bit Off
         manyBits 7
  <|> try (do bitstring [On, On, Off]
              highBits <- manyBits 5
              restBits <- utf8continue
              return (highBits ++ restBits))
  <|> try (do bitstring [On, On, On, Off]
              highBits <- manyBits 4
              rests <- sequence (replicate 2 utf8continue)
              return (highBits ++ concat rests))
  <|> do bitstring [On, On, On, On, Off]
         highBits <- manyBits 3
         rests <- sequence (replicate 3 utf8continue)
         return (highBits ++ concat rests)

As you can see, EXI has a much more elegant implementation in Haskell. Although Parsec was designed to be used on characters and strings, similar techniques can be used for byte parsers and bit parsers, as shown above. Since C structs can be viewed as byte parsers, it should be possible to build Parsec-style parsers for some common C structs and common file formats like JPEG or MPEG. Taking this to its logical conclusion, most file formats could be described as some kind of bit parser or byte parser, so if such parsers were written in Haskell, what would they look like?

Tuesday, April 28, 2009

XML - Entity References

When we go about writing a web page, often enough we like using all sorts of characters in our text, like exclamation points (!), ampersands (&), and less-than signs (<). Sooner or later, it becomes obvious that we need a way to put a character in our web pages without using the character itself. For example, the less-than sign (<) is part of XML markup, so it cannot be used in the text itself; to remedy this situation, the entity reference &lt; is used. This expands to (<) when the page is viewed in a browser, so the desired effect is achieved.
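
The expansion step itself is simple string substitution; here is a minimal Haskell sketch with a hard-coded table (a real processor would read the table from the DTD; the names here are just the familiar ones):

-- Replace &name; sequences using a small lookup table.
entities :: [(String, Char)]
entities = [("lt", '<'), ("gt", '>'), ("amp", '&'), ("quot", '"')]

expand :: String -> String
expand ('&':rest) =
  case break (== ';') rest of
    (name, ';' : more) | Just c <- lookup name entities -> c : expand more
    _ -> '&' : expand rest
expand (c:cs) = c : expand cs
expand [] = []

-- expand "1 &lt; 2" == "1 < 2"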

The concept of XML entities was borrowed from SGML, and so their inner workings are very similar. Another borrowing from SGML is the DOCTYPE declaration. While backwards-compatibility was an important objective for XML 1.0, it is a low priority now that XML processors are ubiquitous, and rarely depend on more general SGML processing. In fact, now that XML Schema has replaced almost every aspect of the Document Type Declaration (DTD), it is likely that the use of DTDs will diminish. Since XML Schema provides a richer environment for declaring elements and defining types, it seems painful to have to use DTDs. This naturally leads to the question of whether or not DTDs can be removed from the XML specification.

Suppose DTDs were removed from the XML specification (perhaps for XML 1.5). What else would have to be changed in order to maintain consistency? Surprisingly, only entities. Everything else can be equivalently (or better) declared in XML Schema, which also provides rich types that can increase validity constraints where DTDs would not. However, XML Schema does not provide a way to define entities. So if a new mechanism for entity definition is required, then it could be one of the following options:

  • Use a simplified DOCTYPE declaration for compatibility.
  • Create a new processing instruction to include entities.
  • Let the user agent handle entities however it wants.

Of these options, the new processing instruction would be the most consistent with the W3C's other specifications, such as Associating Style Sheets with XML documents. It would also be a chance to codify current best practice. XML allows entities to be defined internally (within the document) and externally (in another file). Most documents that use entities either depend on a standard list of entities (like the HTML entities) or directly include a file of entity definitions. So while a hypothetical processing instruction <?xml-entity copy "&#169;"?> would do the job, it would go against this file-based practice. A hypothetical processing instruction <?xml-entities href="htmllat1.ent"?> would be more in line with current usage. Since this would introduce a new language for the external file, there would have to be at least two parts to a specification of this idea:

  • entSubset  ::= (GEDecl | Comment | S)*
  • EntitiesPI ::= '<?xml-entities' S PseudoAtt S? '?>'
    where the allowed pseudo-attributes are:
    href    CDATA #REQUIRED
    charset CDATA #IMPLIED

Another benefit of using a processing instruction instead of some other method is that you do not have to wait until a new XML processor is implemented. In the worst case scenario, you could pass the document through a tool that understands that processing instruction, and the rest can be handled by a normal XML processor. Thus, you can live in a DTD-free world without waiting for XML 1.5 ... you can have it your way, today.