jeff @ dallien.net
April 25, 2010 15:35
I wrote this post quite a while ago but never published it, I guess because I was planning to add more details. Development on Elementor seems to have stalled, so maybe there are better ways out there now, but I still like how it works.
During a recent project to learn about the Sinatra framework, I needed a way to test the generated RSS feed. While looking into a Sinatra equivalent of has_tag from RSpec on Rails, I came across Elementor. This turned out to work well for both the view and RSS feed testing.
Instead of describing the CSS selectors in each test, with Elementor one can give them meaningful names in a before :each block and the tests can just refer to the names. Here is an example:
before :each do
  @page = elements(:from => :do_get, :as => :xml) do |tag|
    tag.items 'item'
    tag.guids 'guid'
    tag.links 'item/link'
  end
end
def do_get
  get "/feed", 'feeds' => ["http://feed1.test/posts.xml",
                           "http://feed2.test/posts.xml"]
  response.body
end
With the descriptions of each tag out of the way, the tests can be written very clearly:
it "should repeat the items' link field in the combined feed" do
@page.links.size.should == 3
@page.links[0].inner_text.should == "LINK"
end
April 24, 2010 17:38
I haven’t written any posts here in a long time. About a year ago I had a small problem: removing duplicate posts from two similar but different RSS feeds. I wrote a small Sinatra app to solve the problem, hosted it on my Slicehost slice and added the feed to my Google Reader. Over a year later my small problem has been successfully solved by my small app, and it continues to do its job daily.
In the interest of learning more about Heroku, I thought that this simple app with no database would be a good place to start.
I first signed up for an account on Heroku but got sidetracked that day and didn’t move any further towards setting up that first app. A few days later I got a reminder email from Heroku. From most sites that would annoy me, but since I really did want to set up the app, and the email included a handy “look how simple it is” set of instructions, it was actually ok.
Since my app was already in a git repository (and on GitHub) all I had to do was install the heroku gem and follow the simple instructions.
I created a .gems file so that Heroku knows what gems are necessary to run the application. The .gems file for Amalgamator looks like this:
feedzirra --version 0.0.23
rack --version 1.1.0
Running ‘heroku create’ requests the account details and then creates the app instance with a temporary name.
jeffd@jeffd-netbook:~/programming/amalgamator$ heroku create
Enter your Heroku credentials.
Email: jeff@dallien.net
Password:
Uploading ssh public key /home/jeffd/.ssh/id_rsa.pub
Creating cold-winter-66....... done
Created http://cold-winter-66.heroku.com/ | git@heroku.com:cold-winter-66.git
Git remote heroku added
My app was given an initial generated name of “cold-winter-66”, and although as auto-generated names go it isn’t that bad, I renamed the app to amalgamator, making the URL http://amalgamator.heroku.com/.
I ran ‘git push heroku master’, which did a lot of the hard work, including installing the gems:
jeffd@jeffd-netbook:~/programming/amalgamator$ git push heroku master
The authenticity of host 'heroku.com (75.101.163.44)' can't be established.
RSA key fingerprint is 8b:48:5e:67:0e:c9:16:47:32:f2:87:0c:1f:c8:60:ad.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'heroku.com,75.101.163.44' (RSA) to the list of known
hosts.
Counting objects: 199, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (123/123), done.
Writing objects: 100% (199/199), 138.63 KiB, done.
Total 199 (delta 37), reused 195 (delta 36)
-----> Heroku receiving push
-----> Installing gem feedzirra 0.0.23 from http://gemcutter.org,
http://gems.rubyforge.org
Building native extensions. This could take a while...
Building native extensions. This could take a while...
Successfully installed nokogiri-1.4.1
Successfully installed sax-machine-0.0.15
Successfully installed curb-0.7.1
Successfully installed loofah-0.4.7
Successfully installed feedzirra-0.0.23
5 gems installed
-----> Installing gem rack 1.1.0 from http://gemcutter.org, http://gems.rubyforge.org
Successfully installed rack-1.1.0
1 gem installed
-----> Sinatra app detected
Compiled slug size is 1.4MB
-----> Launching...... done
http://amalgamator.heroku.com deployed to Heroku
Without any messing around or configuration, the app ran right away. And a Heroku fan was instantly created.
Now that the site is up and running elsewhere, I disabled the copy running on my Slicehost server so I could use those resources for something else. To keep any links or search engine results that may already exist working, I set up some Apache RewriteRules to redirect requests from the jeff.dallien.net app to the Heroku one.
PassengerHighPerformance off
RewriteEngine on
RewriteCond %{QUERY_STRING} ^feeds=(.*)&feeds=(.*)$
RewriteRule ^/amalgamator/feed http://amalgamator.heroku.com/feed?feeds[]=%1&feeds[]=%2 [R=301,L,NE]
RewriteRule ^/amalgamator(/?) http://amalgamator.heroku.com/ [R=301,L]
Turning the PassengerHighPerformance option off was necessary to prevent the RewriteRules from being ignored. This requirement is explained in the Passenger documentation. The version of Amalgamator I’ve set up on Heroku is running on a newer version of Sinatra than I had last deployed to the old location, and that upgrade required a slight change to the format of the parameters the application accepts. The first RewriteRule takes an old-style request, redirects it and adjusts the parameters at the same time. The second rule redirects any other requests for the main page over to the Heroku app.
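For reference, here is a minimal sketch, not the actual Amalgamator handler, of how a Sinatra route sees the feeds[] style parameters that the first RewriteRule produces; the route body is purely illustrative. Rack gathers repeated feeds[] values into a single array available as params['feeds'], whereas with its plain parsing the old repeated feeds= style would leave only the last value.

require 'rubygems'
require 'sinatra'

# Illustrative only: with ?feeds[]=...&feeds[]=... in the query string,
# Rack hands the handler params['feeds'] as an array of URLs.
get '/feed' do
  urls = params['feeds'] || []
  "Combining #{urls.size} feeds: #{urls.join(', ')}"
end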
Now that I’ve seen how easy it is to get apps up and running on Heroku, I’m thinking about the next one I’m going to create. I’ve got an idea for one using gameday_api, a Ruby library by Timothy Fisher for accessing MLB scores and statistics.
March 22, 2009 04:53
Now that I have determined for myself that writing a web app in Prolog is a reasonable thing to do, I am trying to decide what direction to take it in. What I’ve done so far is get a specific application written but I’d like to package up some code and come up with a pattern or set of conventions for designing and deploying Prolog web apps to make things easier for others.
Because most of my experience with web apps is with Rails, I was tempted to just start organizing things the way a Rails application would be organized. I don’t want to just write a Rails clone, and I especially don’t want to do that just because it’s the thing I thought of first. To get some inspiration for possible structure, features and philosophy I could use in my code, I decided to explore what else is out there. My first stop is Sinatra.
For a long time I was idea starved when it came to new programming projects. The technical know-how was there, but what to build? Luckily these days I’ve got a list of ideas that I can’t keep up with. I picked something that would be useful to myself and that I could write fairly quickly.
I wanted to write an RSS feed joiner, something that would take two RSS feed URLs and merge them into a single feed, and most importantly, would remove duplicate articles from the newly created feed. The inspiration for this comes from the CBC’s Canadian news feed and Nova Scotia news feed. Both are interesting to me, but every day a few articles will show up in both feeds that are exactly the same. These are either some national issue that has local relevance or vice versa.
A Friday evening and a Saturday later, the resulting Sinatra application is Amalgamator. You know it is free software because no marketing department would name it that. The code is on GitHub and a deployed instance is also available to use as part of this site. To parse the RSS feeds I used the Feedzirra gem for the first time and I found it simple to use and encountered no problems with it. I used RSpec for Test Driven Development and I deployed using Passenger.
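The core merging step can be sketched in a few lines using Feedzirra. This is a simplified illustration rather than the actual Amalgamator code, and it assumes two entries count as duplicates when they share the same URL:

require 'rubygems'
require 'feedzirra'

# Fetch both feeds, combine their entries, drop duplicates
# (keyed here on the entry URL) and sort newest first.
def merged_entries(first_url, second_url)
  entries = [first_url, second_url].map do |url|
    Feedzirra::Feed.fetch_and_parse(url).entries
  end.flatten

  unique = {}
  entries.each { |entry| unique[entry.url] ||= entry }
  unique.values.sort_by { |entry| entry.published }.reverse
end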
I really enjoyed using Sinatra and it felt well suited to this type of small application. Some of the things that may influence my Prolog code can be seen by comparing Hello World examples. Here is Sinatra’s:
require 'rubygems'
require 'sinatra'

get '/hi' do
  "Hello World!"
end
and a Hello World example for SWI-Prolog’s HTTP library:
:- http_handler('/hi', hello_world, []).

hello_world(_Request) :-
    reply_html_page([], [p('Hello World!')]).
One way I think I will ultimately differ from Sinatra is in its lack of helpers for building HTML, such as links, image tags, and forms. I think I will want to add some of these to my code to make generating HTML in Prolog easier.
One of the reasons that Sinatra works as a minimal framework is the large number of Ruby libraries available as gems to do so many of the things a web (or any) application needs to do. Sinatra provides the basics and then the developer can bring in just what they need. This won’t work the same way in Prolog. There are Prolog libraries out there, but not to the extent they are being developed and released for Ruby.
Overall, I think having something similar to the style of a Sinatra application would be really good for those wanting to get an existing Prolog application on the web. I think my small detour to Sinatra was very beneficial. I gained some perspective on framework design and I also moved an idea off my todo list and into a useful application.
March 20, 2009 19:00
In preparation for working on the prologblog.com CSS, I wanted to change my Apache config to serve static files like CSS and JavaScript before those requests got passed to the Prolog application.
Adjusting the rewrite rules I was already using for a Rails application, I changed the Prolog Blog config to use this:
# Redirect all non-static requests to Prolog
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ http://127.0.0.1:8000%{REQUEST_URI} [P,QSA,L]
and I was then able to remove the proxy directives I had used originally:
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
March 20, 2009 04:51
The chain continues, though I won’t be writing much about it tonight. I updated the front page of the Prolog Blog to use a page that is processed using the templating system I’ve extracted from Prolog Server Pages over the last couple of days.
The most important addition for today is an RSS feed for the Prolog Blog site. The templated files made it easier to implement, but I’m not terribly happy with the way I had to handle escaping the HTML tags within the RSS feed. My preferred method gave me a lot of trouble due to backtracking while looping through each post. I’ll probably revisit that in the next few days.
March 19, 2009 05:42
The latest work on my chain is to extract the templating system from the Prolog Server Pages project so that it could be used to process Prolog inside any file, not just an HTML file, and also so that it could be used without the session management and other Prolog Server Pages specific features.
I’ve added the extracted templating code to the Prolog Blog code on GitHub, though it could probably become its own project in the future which can then be used as part of any new versions of Prolog Server Pages.
Here is an example template:
greeting_noun(Noun) :-
    Noun = 'world'.

/*<html>
<body>
<?, greeting_noun(X) ,?>
Hello <?= X ?>
</body>
</html>*/
The file can contain both a section of normal Prolog code and a document to be processed, contained in comments (so that SWI-Prolog ignores it). Similar to ERB, there are special tags for embedding code, <? ?>, and for embedding code that will generate part of the page, <?= ?>.
As one would probably expect, the result of the processed file will be:
<html>
<body>
Hello world
</body>
</html>
March 18, 2009 03:58
I mentioned to the wise Chris Strom this morning how I occasionally search Twitter for mentions of Prolog, and he suggested that I add a Twitter search for it to my Google Reader. Shortly after doing so, it turned up a conversation directly relevant to my ongoing chain.
The linked page is an extensive discussion by Benjamin Johnston on the background and motivation behind Prolog Server Pages, a method of embedding Prolog code within an HTML page. This seems to be close to what I was hoping for on my wishlist the other day.
The implementation of Prolog Server Pages given is more than just a simple templating system; it also provides session management. The predicates provided as part of the SWI libraries for generating HTML do have the advantage that they make it more difficult to produce invalid HTML. However, I am definitely going to use the templating portion as part of the Prolog Blog code. Not only will it save time in writing the pages the first time, I think it will make the pages in the site easier to understand and maintain. I have almost finished implementing an RSS feed as a templated file.
Having more Prolog code related to the web in any way is great too, as the ultimate goal of all this work is for me to learn new Prolog techniques and innovative uses of Prolog. I also realized that I need to continue searching for information about interfacing Prolog with the web, since I missed the work on Prolog Server Pages previously.
March 17, 2009 18:25
From the top of the README for the SWI-Prolog ODBC library, just above some Microsoft SQL Server-specific notes:
SWI-Prolog ODBC interface
I once thought Prolog was poorly standardised, but now I know better. SQL is very poorly standardised.
March 17, 2009 02:43
Every incoming request to a server using the SWI-Prolog HTTP libraries has access to a request term, passed to the handler registered with http_handler/3, which contains the parameters from the request along with all the other pieces of information one would expect the request to have (path, user agent, HTTP verb). The parameters are retrievable directly from the request, and could be parsed manually:
member(search(Params), Request).
I’m not entirely sure why the parameter term in the request list is called ‘search’. To save the work of parsing the parameters manually, the SWI http_parameters library includes predicates to do this, namely http_parameters/2 and http_parameters/3. The shorter of the two is just a convenience predicate that allows for omitting a list of options.
% include the http_parameters module
:- use_module(library('http/http_parameters')).
% calling:
http_parameters(Request, ParamsList).
% is the same as:
http_parameters(Request, ParamsList, []).
To retrieve the value of a parameter, pass the request term, and a list of terms with the parameter name and a variable. For example, if the parameter is called ‘page’:
http_parameters(Request, [page(Number, [])]).
In this example the Number variable will hold the value of the ‘page’ parameter. The empty list after the parameter is another list of options; this one is not optional. Omitting this list of options will result in a cryptic error message like the following:
Undefined procedure: http_parameters: (-)/2 In: [23] http_parameters: (-)/2 [22] http_parameters:fill_parameter/3 at /usr/lib/swi-prolog/library/http/http_parameters.pl:104
The options that can be given for each parameter include a default value, whether the parameter is optional (no error is thrown if an optional parameter is missing), and various conditions and type conversions. The full list of options is available in section 3.5 of the SWI-Prolog HTTP manual. Here is an example of a parameter named page with a default value. The Number variable will either be unified with the actual value of the page parameter or with the value 1 if the parameter is not present.
http_parameters(Request, [page(Number, [default(1)])]).
March 16, 2009 01:47
I’m not going to write much to add to the chain tonight, other than to say that prologblog.com is now up and running. The server code is on GitHub, as is the code that generates the first post. The site’s not pretty, and the list of things to do is a mile long, but the basic goal has been achieved: the site is running on a Prolog application.
I’ll go through the code and explain each part in subsequent posts, but here is a small introduction. This piece of code gathers all the post/1 predicates and flattens them into a single list. Using this method, new posts can be added to the system just by loading another source file with one or more post predicates into the interpreter. I’ll be using this method to add posts until I get some form of database access working.
all_posts(List) :-
    setof(Post, post(Post), TempList),
    flatten(TempList, List).
To make this work properly the main server source file has to declare that the post/1 predicates can be found in multiple files, using the multifile/1 directive:
:- multifile post/1.