Web 2.0 is taking all sorts of unexpected directions. The latest to come to my attention are two initiatives aimed at offloading processor load from server to client.
First up, Google announced a new developer project named Native Client. With Native Client, Google aims to "give web developers access to the full power of the client's CPU while maintaining the browser neutrality, OS portability and safety that people expect from web applications." As far as I can see, it's what Java Applets were originally designed to do - and look how successful they've been. At least outside the enterprise environment (where desktop configurations tend to be strictly controlled), applets suffered from the "write once, debug everywhere" syndrome and have largely fallen out of use. It'll be interesting to see whether an Internet-savvy company like Google makes a success of this new approach.
Next, a start-up named Good OS has announced Cloud, "a browser operating system". I am trying hard to get my head around that concept. They claim that it's an environment for enhancing the user's Internet experience and that it can be co-hosted on Windows, Linux or other operating systems. If I have read the scant information available correctly, the idea is that your machine will start up already logged on to your favourite portal (Yahoo, Google, Windows Live...) and that core applications such as Skype, which are accessed from a MacOS-like object dock, run within their own dedicated browser tab on top of Cloud's integrated compressed Linux kernel. Whenever you need to, you can switch to the native OS with a single key press.
Unfortunately the "Why Cloud?" page on Good OS's web site is unfinished, so it is difficult to comprehend the vision behind the product. A clue might be that it is being bundled with the Gigabyte touch-screen Netbook models and so looks like a lower-cost alternative to Windows CE. Perhaps the company is aiming for an "Internet Appliance" niche. It made a name for itself by supplying the gOS Linux operating system for the ultra-low-cost Wal-Mart PCs.
Monday, 15 December 2008
Testing web applications? Cucumber is the cool new kid on the block
I've had quite complimentary things to say about Canoo WebTest, but one thing it doesn't provide out of the box is a representation of the tests that your average customer is able to read and understand intuitively - that is to say, a tabular format like Fit or a narrative of the "Given... when... then..." form.
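For a flavour of that narrative form, here is the sort of thing a Cucumber feature file and one of its Ruby step definitions might contain - an illustrative sketch of mine, not taken from any real project:

Feature: To-do list
  Scenario: Completing a task
    Given a task "Buy milk" that is not done
    When I close the task
    Then the task is shown as done

Given /^a task "([^"]*)" that is not done$/ do |title|
  # create_task is a hypothetical helper that seeds the application's data
  @task = create_task(title, :done => false)
end

The feature file is what you show the customer; the regular expressions quietly map each plain-English step onto executable Ruby.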
Cucumber to the rescue! I heard several people mention this at the recent XP Day in London. I hope to investigate it in more depth soon.
I would be interested to know whether it can be used to test anything other than Ruby code and whether it can drive a web browser to interact with Rich Internet Applications (RIAs). Stuart Ervine's and Nat Pryce's GWT development demo at XP Day was truly impressive. Nat has combined WebDriver with WindowLicker to create a clean interface between a test script and synchronous or asynchronous RIAs. But telling a jUnit test to move the mouse cursor into a particular widget and click the button was nothing if not verbose!
Thursday, 4 December 2008
Sending SMS texts from the PC desktop
Ever left your mobile at home and needed to send a text to someone? And even to receive that person's reply without access to your phone? Or just found that ten-finger typing is quicker than predictive TXTing with two thumbs?
I've discovered that Vodafone Text Centre does the job (see the feature summary at the end of this post). You can install it for free, but note that the cost of SMS messages you send using Text Centre goes on to your normal Vodafone mobile account.
There's a very comprehensive manual and online help. Make sure your Outlook isn't running before you start installing (use Task Manager to kill it if necessary).
In a test here in the office, I found that my first outgoing message was delivered in just a few seconds, whereas the reply took around 10 minutes to reach my Outlook inbox (you can choose to have the reply routed to your mobile instead, of course, which is quicker but means that you don't have access to Outlook's convenience features when forwarding or filing the reply).
If you're in the habit of leaving Outlook running when you leave the office, you can also get Text Centre to alert you by SMS when you receive an e-mail (or only for high-priority e-mails) or to send you an SMS alert of upcoming appointments.
Overall, I am impressed. With Vodafone Text Centre you can:
- Send text messages to individuals and groups from your PC, using your existing contacts and distribution lists.
- Choose to have replies sent to your mobile phone or to your e-mail inbox.
- Receive appointment reminders via text messages.
- Choose to receive text messages to notify you of urgent e-mails.
Wednesday, 19 November 2008
Munging PDF files and pages
Having scanned over a dozen handwritten pages from my notebook into a PDF file, I had to invert every other page to make it possible to read the whole thing. It took me a while to find the answer, but YOU can go straight to the solution!
The command line is
pdftk.exe scannedfile.pdf cat 1 2S 3 4S 5 6S 7 8S 9 10S 11 12S output readablefile.pdf
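The S suffix tells pdftk to rotate that page by 180 degrees ("south", in its compass notation), so the odd pages are left alone and every even page is turned the right way up. For a longer scan you can generate the page specification rather than typing it out - a throwaway Ruby one-liner along these lines (my sketch, assuming it really is every even page that needs rotating):

puts (1..12).map { |n| n % 2 == 0 ? "#{n}S" : n.to_s }.join(' ')
# prints: 1 2S 3 4S 5 6S 7 8S 9 10S 11 12S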
Monday, 10 November 2008
Ramaze - first impressions continued
My earlier posting was getting too long, so I'm continuing in this new one. Meanwhile, the tutorial's author has kindly left a comment on the original posting, indicating that the tutorial is due to be updated pretty soon. I look forward to that!
Chapter 11 of the tutorial can be used pretty much as-is, but I found that after an error, the redirection specified by the error method would be overwritten by the helper aspect. After some research, I discovered that the latest version of Ramaze supports a redirection status operator, redirected?. Using this, the helper aspect can be written very simply:
helper :aspect

after( :create, :delete, :open, :close ) {
  redirect Rs() unless redirected?
}

Chapter 12 of the tutorial needs no changes, except that you don't have to add the flash section to the page template because we did it earlier.
Now I wanted to convert the application to something more suited to enterprise deployment (and also more suited to shared hosting deployment - typically such environments provide an Apache server and MySQL database). So the first thing to do was to move from YAML to SQL - I chose MySQL.
If you haven't yet installed MySQL, do so before the next step. Take care to ensure that your PATH environment variable contains the MySQL binary as well as its libraries (hold down the Windows key and press PAUSE to bring up the system properties, then choose Environment Variables in the Advanced tab). For example, on my system the System PATH begins:
C:\ruby\bin;
C:\Program Files\MySQL\MySQL Server 5.0\bin;
C:\Program Files\MySQL\MySQL Server 5.0\lib\opt;
C:\Program Files\MySQL\MySQL Server 5.0\lib\debug;
...
Add a new database for the application and create a user account named "ramaze" with password "TodoList" for both localhost and remote-host access:
mysql -u root -p
******
CREATE DATABASE IF NOT EXISTS todolist_db;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE,
DROP, RELOAD, PROCESS, FILE, REFERENCES, INDEX,
ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES,
LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW,
CREATE ROUTINE, ALTER ROUTINE ON *.*
TO 'ramaze'@'localhost'
IDENTIFIED BY 'TodoList' WITH GRANT OPTION;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE,
DROP, RELOAD, PROCESS, FILE, REFERENCES, INDEX,
ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES,
LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW,
CREATE ROUTINE, ALTER ROUTINE ON *.*
TO 'ramaze'@'%'
IDENTIFIED BY 'TodoList' WITH GRANT OPTION;
quit
I decided to use Sequel as my database access layer. You probably also have to install the gems mysql and sequel before the code will work - I am not sure about the mysql gem, as it was one of the things I installed before I eventually discovered the need to set up the PATH correctly.
gem install sequel
gem install mysql
I tried choosing the 'ruby' option, but this was unable to generate the native code on my machine. So I uninstalled it and tried again, choosing the 'mswin32' option, which worked fine.
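Before touching the application itself, a quick smoke test proves that the gems and the database server can talk to each other. Something along these lines should do it (a sketch only; the connection parameters match the account created above):

require 'rubygems'
require 'sequel'

# Any failure here points at the gem installation or the PATH,
# not at the application code
DB = Sequel.mysql('todolist_db',
  :user => 'ramaze', :password => 'TodoList', :host => 'localhost')
puts "Connected; tables: #{DB.tables.inspect}"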
Now take a safety copy of your model file, todolist.rb, and start modifying it to use MySQL instead of YAML. Replace the following lines:
require 'ramaze/store/default'
TodoList = Ramaze::Store::Default.new('todolist.yaml')
with the following lines:
require 'rubygems'
require 'sequel'

DB = Sequel.mysql('todolist_db',
  :user => 'ramaze',
  :password => 'TodoList',
  :host => 'localhost')

class TodoList < Sequel::Model(:tasks)
  set_schema do
    primary_key :id
    varchar :title, :unique => true, :null => false
    boolean :done
  end
end

To begin with, I tried using the title as the primary key. But I soon found that not only was it necessary to define the title field first and then separately to name it as the primary key, the user-supplied title was not always suitable as a key value due to the presence of shell metacharacters and so on. So I decided to go with the Sequel flow and use a system-generated ID as the primary key.
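As a sketch of what the surrogate key buys you (illustrative only - the create method comes from Sequel::Model, and this assumes the tasks table already exists):

task = TodoList.create(:title => 'Buy milk', :done => false)
puts task.id   # Sequel fills in the generated primary key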
Next I found that the model didn't support the method original(), which the original simple model supports. So I supplied one myself (subsequently found to be unnecessary after refactoring main.rb):
# Copy all records into a list
def self.original
  tasks = []
  self.dataset.each {|r| tasks.push [r[:title], {:done => r[:done]}]}
  return tasks
end

More important was to add some initialisation code after the class was defined:

unless TodoList.table_exists?
  DB.transaction do
    puts "Creating table 'tasks'\n"
    TodoList.create_table
  end
end

Now try running the application. It seems to work, but nothing gets stored in the database. At this point we have to bite the bullet and refactor the main module to support a more relational view of the underlying data model.
In the index() method, we'll start using the ID of each task to identify it instead of the title. Instead of requesting an array of key-value pairs from the original() method, we get an array of task objects from the dataset() method. That of course implies that we have to extract the title and the ID from the task object and use them as appropriate:
def index
  @title = ["To-Do List"]
  @tasks = []
  TodoList.dataset.each do |task|
    id = task[:id]
    title = task[:title]
    if task[:done]
      status = 'done'
      toggle = A('Open Task', :href => Rs(:open, id))
    else
      status = 'not done'
      toggle = A('Close Task', :href => Rs(:close, id))
    end
    delete = A('Delete', :href => Rs(:delete, id))
    @tasks << [title, status, toggle, delete]
  end
  @tasks.sort!
end

The methods open(), close(), task_status() and delete() have to change because they all now take an id as parameter:
def delete id
  unless TodoList.delete id
    failed "Cannot delete task no.: #{id}"
  end
end

def open id
  task_status id, false
end

def close id
  task_status id, true
end

def task_status id, status
  unless task = TodoList[id]
    failed "No such task no.: #{id}"
    redirect_referer
  end
  task[:done] = status
  TodoList[id] = task
end

I decided to override the []= method of the model, so that tasks would actually be written to the database. It was an easy step from there to create a new row in the tasks table whenever the ID parameter to this method was absent or nil. So the create() method becomes:
def create
  if title = request['title']
    title.strip!
    if title.empty?
      failed("Please enter a title")
      redirect '/new'
    end
    if TodoList.find(:title => title)
      failed("Task '#{title}' already exists")
    else
      TodoList[nil] = {:title => title, :done => false}
    end
  end
end

Note the check for duplicates, which is easy to do now that we can look up titles in the dataset.
Turning to the file todolist.rb, here are the methods I had to add to allow tasks to be added and deleted in the database:
def self.delete(id)
  puts "Attempting to delete '#{id}'\n"
  DB.transaction do
    if task = TodoList.find(:id => id)
      task.destroy()
    else
      puts "Not found\n"
      return false
    end
  end
end

# Assignment should update the underlying database
def self.[]=(id, values)
  DB.transaction do
    if (id == nil || !(task = TodoList.find(:id => id)))
      task = TodoList.new
    end
    task.title = values[:title]
    task.done = values[:done]
    task.save
  end
end

That's pretty much it. But before I stopped, I wanted to prettify the user interface a bit. I didn't like the fact that the column widths tended to change whenever the sole "not done" item was added to or deleted from the list, and in any case I preferred clickable icons to text links. So I designed some icons:
[icon images: "not done", "done" and "delete"]
Feel free to copy the icon images. To let the Mongrel server access these, they have to be placed in the folder "public" of your project. Then I redesigned the index() method and the corresponding index.xhtml file very slightly to use these (first declaring some constants):
DELETE_ICON = '<img src="delete_sml.gif">'
NOTDONE_ICON = '<img src="notdone_sml.gif">'
DONE_ICON = '<img src="done_sml.gif">'

# the index action is called automatically when no other action is specified
def index
  @title = ["To-Do List"]
  @tasks = []
  TodoList.dataset.each do |task|
    id = task[:id]
    title = task[:title]
    if task[:done]
      toggle = A(DONE_ICON, :href => Rs(:open, id))
    else
      toggle = A(NOTDONE_ICON, :href => Rs(:close, id))
    end
    delete = A(DELETE_ICON, :href => Rs(:delete, id))
    @tasks << [title, toggle, delete]
  end
  @tasks.sort!
end

And the corresponding view/index.xhtml:

<p><a href="/new">New Task</a></p>
<?r if @tasks.empty? ?>
<p>No Tasks</p>
<?r else ?>
<table>
<?r @tasks.each do |title, toggle, delete| ?>
  <tr>
    <td class="title" > #{title} </td>
    <td class="toggle"> #{toggle} </td>
    <td class="delete"> #{delete} </td>
  </tr>
<?r end ?>
</table>
<?r end ?>

The full source code is attached in the comments to this post.
Saturday, 8 November 2008
Headlights
While I'm on the subject of motoring, why is it that some people insist on driving along well-lit roads at night with headlights on full-power dipped beam or even high beam? It doesn't allow them to see any better, but significantly impairs the ability of oncoming drivers to see what's near their car. Many's the time when I have been uncertain of the amount of space between a dazzling pair of headlamps on the opposite side of the road and parked vehicles on my side, so that I have had to stop. Once I nearly knocked down a guy crossing the road behind a car coming towards me - he was wearing dark clothes and was simply hidden by the glare from the headlamps.
The only situation in which I will concede that something stronger than sidelights is useful is when I'm approaching a corner. Traffic coming from another direction announces itself by the headlight beam cast ahead. Yet even this is sometimes misleading, as there are plenty of idiots who will park at the side of the road leaving their headlamps switched on. As we don't need this kind of advance warning during the daytime, why should it be required at night?
I always choose sidelights or dim-dip headlights when motoring in areas with good street lighting, out of consideration to other drivers - yet they frequently flash me, assuming that I have forgotten to switch on my lights properly. Other sins include the use of fog lights front and rear when there is no fog. This is particularly dangerous when the road is wet, because they are mounted low down and the reflections off the water can almost blind other road users.
Rules 113 and 114 of the Highway Code sum up the requirements of the law, which seem perfectly common-sense:
You MUST
- ensure all sidelights and rear registration plate lights are lit between sunset and sunrise
- use headlights at night, except on a road which has lit street lighting. These roads are generally restricted to a speed limit of 30 mph (48 km/h) unless otherwise specified
- use headlights when visibility is seriously reduced (see Rule 226)
You MUST NOT
- use any lights in a way which would dazzle or cause discomfort to other road users, including pedestrians, cyclists and horse riders
- use front or rear fog lights unless visibility is seriously reduced. You MUST switch them off when visibility improves to avoid dazzling other road users (see Rule 226)
Driving and Parking in London
If you want to keep your sanity, don't own a car in London. Occasionally however, you need one. For those occasions, I'm very much in favour of car-club schemes like CityCarClub, ZipCar and StreetCar, but none of them has a depot anywhere near my house yet. So I still own a car, but I don't use it very much. At certain times of day, I have found that it actually took me longer to drive somewhere than it would have taken to walk. And you don't have to pay a congestion charge for walking.
However, I'm currently on a project that involves flying to a customer site in Europe several times a month, leaving London City Airport very early in the morning - so early in fact that I would miss the flight if I tried to go by public transport. In the past, I have used a minicab instead, but fares have recently gone up so much that I resorted to driving to the airport myself. If you stay at least two days, you can book a space at the airport car park on the Internet and save a fair bit of money, but not for a single day - and the parking costs almost as much as the minicab.
To the rescue came a new service called ParkAtMyHouse. Such a simple yet elegant idea - anyone with a garage or a bit of parking space can advertise it on-line and you simply haggle about the price. The web site handles the reservation and communication between owner and renter of the space. There's a feedback mechanism similar to eBay's. The service levies a fee of 10% of the rent paid, which seems reasonable, as you would not otherwise have any way of discovering these cheap parking spaces. You can rent by the hour, day, week, month or longer. I didn't find anything very close to the airport, but only two stops away on the DLR was someone willing to rent me a space for only £5 a day - job done!
Wednesday, 5 November 2008
Ramaze - first impressions
I've spent the past couple of days evaluating Ramaze. I started with the tutorial To-Do-List example and tried to move beyond that to something resembling a real web application backed with a proper database (MySQL). In general, I feel that this framework is more intuitive to use and therefore gets results faster than e.g. Ruby on Rails, if you're having to learn both from scratch. I also found it very easy to refactor the code, because it encourages a fairly aspect-oriented style and all changes are loaded immediately without having to re-start the server. Last but not least, the designers of Ramaze have considered enterprise-level deployment requirements from the very start, so your Ramaze application can run on anything from a shared virtual host rented for a few dollars per year to an enterprise server farm or on-demand compute-cloud configurations.
Ramaze also leverages the power of Ruby in very elegant ways. For example, here's a description of a basic content-management system based on Ramaze in just a single page of code.
I'm going to walk you through the to-do list tutorial, which dates back to 31st January 2008. Ramaze has evolved a fair bit since then and I have had to find out what the changes are to adapt the tutorial to the current version. Hopefully my observations will be incorporated in a future version of the tutorial. All of my work was done on Windows XP (blame my employer, not me :-) so the installation instructions etc. are related to that.
First of all, if you don't yet have Ruby and Ramaze installed, here's how you do it:
- Grab the latest Ruby installer
- Install with default options
- Open a command shell and make sure that your Ruby bin directory is on the PATH
- gem install ramaze
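Once those steps are done, you can verify the installation with a one-liner (a quick check that assumes the gem defines Ramaze::VERSION):

ruby -e "require 'rubygems'; require 'ramaze'; puts Ramaze::VERSION"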
My next big problem came with Eclipse. I wanted to use Eclipse as my IDE for Ruby and Ramaze. Trying to use the Help -> Software Updates feature to install Ruby support, I discovered by much trial and error that it was possible to do so, provided you configure the update site http://update.aptana.com/update/rails/3.2/ and select "Uncategorized -> Ruby Development Tools" and nothing else. Using this plug-in, you get a language-sensitive editor and the ability to launch Ruby applications, not much else, but it's a whole lot better than using just a text editor. The first time you try to open the Ruby perspective, the Hierarchy pane displays an error message ("Could not create the view: org/eclipse/search/internal/ui/text/ResourceTransferDragAdapter"). Just close it and use the Navigator pane instead. I'd be interested to know if there is a better Ruby plug-in for Eclipse.
While I was about it, I also installed the Web Page Editor from one of the Eclipse update sites. This came in useful when I was working on template pages later.
Then I started working my way through the tutorial. The first discrepancy came in chapter 3 - the View. Ramaze has evolved so that you supply just partial pages under the view folder. So the view/index.xhtml file in the tutorial should contain only the part of the page that will be supplied to the main page template in the variable @content. It looks like this:
<ul>
<?r
  TodoList.each do |title, value|
    status = value[:done] ? 'done' : 'not done'
?>
  <li>#{title}: #{status}</li>
<?r end ?>
</ul>

Of course, at this stage you won't have assigned a value to @title, so your page will be rendered without a page title or first heading.
Also on this page, ignore the instruction to "run ramaze" from the command line. That won't work under Windows. Instead, Eclipse comes to the rescue. Just right-click the start.rb file in the Navigator pane and select "Run As... -> Ruby Application". When you want to terminate the server, click the red stop button in the console pane. Subsequently you can re-run the server by selecting start.rb from the pull-down menu under Eclipse's green Run button.
In chapter 4 of the tutorial, again please omit the HTML scaffolding and just write
<?r if @tasks.empty? ?>
No Tasks
<?r else ?>
<ul>
<?r @tasks.each do |title, status| ?>
  <li>#{title}: #{status}</li>
<?r end ?>
</ul>
<?r end ?>

Now add the following line at the very top of the index method:

@title = ["To-Do List"]

The standard page template inserts this value both in the page title and the top-level heading. You may have noticed that the MainController contains the line

layout '/page'

This indicates that the file view/page.xhtml will be used as the template, not src/element/page.rb as described in the tutorial.
In chapter 5 of the tutorial, obviously you should not add the level-one heading as shown. Just put the link at the very top of the file view/index.xhtml.
Next, the new file view/new.xhtml is also much simplified:
<a href="/">Back to TodoList</a>Before creating an action for create, as described in the tutorial, you will also need an action for new. This is the only way that I have found to generate a title for the new.xhtml page. Insert the following in main.rb:
<form method="POST" action="create">
Task: <input type="text" name="title" /><br />
<input type="submit" />
</form>
def newChapter 6 of the tutorial remains pretty much unchanged. Just remember not to delete the @title initialisation at the very top of the index method.
@title = ["Create a new To-Do List item"]
# See view/new.xhtml
end
In chapter 7 of the tutorial, I found that I was able to add and delete tasks to my heart's content until I tried deleting a task such as "Lunch with Bob?". Ramaze has a very simple mapping scheme from URL path to action argument, which stops parsing the path_info as soon as it encounters a "?" or ";". I spent a considerable amount of time trying to think of ways to circumvent this and pass the offending characters as url-encoded escape sequences, but in the end it was all to no avail. The lesson is that you would be well advised to never use user-supplied values as keys into your data. Instead, treat the user-supplied string as a data field and generate your own system ID for each item. By the way, this lesson applies equally well in other frameworks. Following my completion of the tutorial I will show how I refactored the code to achieve greater robustness. For now, just avoid using metacharacters in your task titles!
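To see why such tasks get stuck, here is a rough sketch of the splitting involved (my illustration of the failure mode, not Ramaze's actual parsing code):

path_info = '/delete/Lunch with Bob?'
arg = path_info.split('/').last    # "Lunch with Bob?"
arg = arg.split(/[?;]/).first      # "Lunch with Bob" - the "?" never reaches the action
puts arg

The delete action is therefore asked to remove a task whose title no longer matches anything in the store.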
The solution to the problem of stuck tasks was of course to edit the YAML database file to delete the offending characters.
Chapter 8 of the tutorial seems to me now to be more or less redundant. All I had to do was to modify the file view/page.xhtml slightly to incorporate the error message from the flash object. Here's the resulting body section:
<h1>#{@title}</h1>
<div id="error">
  <p>#{flash[:error]}</p>
</div>
<div id="content">
  #@content
</div>

Note that there is no need for any logic to decide whether an error message is present. If it's there, it gets rendered as part of the page. Otherwise the renderer omits that element.
The prettification step in chapter 9 of the tutorial still applies as it stands, except that the stylesheet entries are inserted into the file view/page.xhtml instead of the Page element. I also added a style for the error message, where present. The resulting HTML header now contains the following entries:
<style type="text/css">I had real fun in chapter 10 of the tutorial. Firstly, I had to install the mongrel adapter:
body { margin: 2em; font-family:Verdana }
#content { margin-left: 2em; }
#error { margin-left: 2em; color:red; }
</style>
<style type="text/css">
table { width: 80%; }
tr { background: #efe; width:100%; }
tr:hover { background: #dfd; }
td.title { font-weight: bold; width: 60%; }
td.status { margin: 1em; }
a { color: #3a3; }
</style>
gem install mongrelFor both mongrel itself and its fastthread dependency, I chose the "ruby" option.
When I tried to start Ramaze again, it complained that some other process was already listening on port 80. After some detective work with process explorer, I discovered that this was a little peer-to-peer application called kservice from Kontiki Inc. I had probably unwittingly installed it with some Internet media player. After following the uninstall instructions kindly blogged by "Geoff" I was finally able to start my server on port 80.
(to be continued)
Friday, 31 October 2008
SOA adoption reported to be accelerating
I was interested to read this interview with Jason English, the VP of Corporate Marketing for iTKO. It shows that Service Oriented Architecture is coming of age when vendors are offering viable testing and virtualisation solutions.
It also previews the virtual conference SOA in Action, coming on November 19th, 2008, for which sign-up appears to be free. "SOA in Action will feature keynotes by Gartner's Yefim Natis and Forrester's Randy Heffner, as well as a panel discussion led by ebizQ's Joe McKendrick and featuring Phil Wainewright of ZDNet and Dr. Chris Harding of The Open Group."
Wednesday, 29 October 2008
Java Native Access
Every Java programmer knows about JNI, but what about JNA?
I'm pretty impressed with this. The documentation seems pretty comprehensive and the examples are well constructed. The software seems very stable. Obviously you can kill the VM if you dereference invalid pointers etc., but after a short period of familiarisation, I found that I could knock together a Java API for a subset of an existing C/C++ DLL (kernel32), using just the published API documentation to guide me, in very little time. I wrote a little test program that gets the compressed and uncompressed size of arbitrary files, and prints proper error messages for files it cannot open.
I make no claim that the example code is particularly elegant, but it shows what you can do. One great advantage is that you don't need to install a separate C/C++ development environment if a DLL already exists that gives you the needed native functionality.
Note that JNI is built for raw speed - some applications use it, for example, to perform graphics manipulations thousands of times a second. Because JNA works through reflection and all kinds of clever tricks, you need to budget about ten times as much execution time per transition from Java to native machine code and back again. For many applications that's irrelevant, because native code is only called infrequently (up to a few hundred times a second).
Tuesday, 14 October 2008
Myths and Realities about Virtualisation
I just came across an excellent article by Amrit Williams entitled Myths, Misconceptions, Half-Truths and Lies about Virtualization. Read it especially for the discussion thread that follows it.
New Test-Driven Development training course
I have just completed work on a training course in test-driven development, intended for delivery over two days or self-study, which uses the development of a dating service as a worked example. Along the way, students will learn to use data access objects (DAO), Object-Relational Mapping (ORM) using Hibernate, Spring, jMock and Canoo WebTest. These tools will, I hope, provide them with good insights into the way that test-driven approaches alter the way in which software architectures grow, leading to more robust yet flexible designs.
My thanks to Adam Shimali, who developed the material initially. He and I aim to present a five-hour "taster" of this course on the Sunday afternoon at SPA2009.
Improving team dynamics
Interested in making your team more effective? I found the following list of useful links just now after reconnecting to a former colleague on Plaxo Pulse.
And of course, my own company Zühlke Engineering offers tools, coaching and training to agile teams.
Tuesday, 5 August 2008
Importing free-format address lists into vCard (.vcf) and Excel
A colleague had the following problem: he had a Word document containing information about many different companies he dealt with. They were more or less in a standardised form, with the name of the company on the first line of each "record" and the address on the second (or on the same line in some cases), but after that it was pretty haphazard - sometimes telephone numbers were prefixed with TEL:, sometimes not, sometimes the URLs were listed beneath a heading of "Websites:", sometimes not, and so on.
His desire was to export this information in a form that it could be treated much more like a proper database - spreadsheet format would be a start. Exporting the Word document to text was easy, but where to go from there? I started toying around with a Perl script to try to clean up the data, and found that there were some convenient modules around for capturing input to a vCard standard format.
The attached script uses the Text::vCard module (which you'll have to install manually from CPAN as it isn't available using PPM) with its Addressbook and Node packages to build up a whole address book from the input file. Although I have made no attempt at elegance, it has a good stab at parsing UK-style and US-style company names and addresses into their constituent fields, provided they are separated by commas. NB there are no contact names, as the original data file didn't have any - this is left as an exercise for the reader!
#!/usr/bin/perl -w
#
# convert a file to address records in VCF (vCard) format
# reads standard input and writes to standard output
#
if ( $#ARGV + 1 != 0 ) {
    print STDERR "usage: parse_addresses <inputfile.txt >outputfile.vcf\n";
    exit;
}

use Text::vCard::Addressbook;

my $addressbook = new Text::vCard::Addressbook;

# For testing / debugging: load a pre-existing address book
#my $addressbook =
#  Text::vCard::Addressbook->new(
#    { 'source_file' => 'C:/temp/Text-vCard-2.03/rfc2426.vcf', } );

while ( !eof STDIN ) {
    parseEntry($addressbook);
}
print $addressbook->export();
sub parseEntry {
    my ($addressbook) = @_;
    my $line = <STDIN>;
    chomp($line);

    # First line should contain organisation name and optionally its address
    my ( $org, $addr );
    if ( $line =~ m'^\s*([0-9A-Za-z][^:]*)\:?\s*$' ) {
        $org = $1;
        chomp( $line = <STDIN> );
        $addr = $line;
    }
    elsif ( $line =~ m'^\s*([0-9A-Za-z][^:]*)\:\s*(\S.*)$' ) {
        $org  = $1;
        $addr = $2;
    }
    else {
        # Not recognised
        print STDERR "Not recognised start of entry: $line\n";
        while ( ( defined $line ) && ( $line !~ '^\s*$' ) ) {
            chomp( $line = <STDIN> );
            print STDERR "Discarding: $line\n";
        }
        return undef;
    }

    # print STDERR "Organisation: $org -- Address: $addr\n";
    my $vCard = $addressbook->add_vcard();
    $vCard->version('3.0');
    $vCard->add_node({ 'node_type' => 'ORG' })->name($org);

    my $adr = $vCard->add_node( { 'node_type' => 'ADR' } );
    my @unsorted = ();
    $addr =~ s/\W*$//;    # Remove trailing spaces and full-stops
    foreach my $adrField ( split( ',', $addr ) ) {
        if ( $adrField =~ m'^\s*([A-Z][A-Za-z ]+\s)?([-0-9]{4,11})\s*([A-Z][A-Za-z]+)?\s*$' ) {
            # US state and zip-code
            my $state = $1;
            chop($state) if defined $state;
            my $zip_code = $2;
            my $country  = $3;
            if ( defined $state ) { $adr->region($state); }
            $adr->post_code($zip_code);
            if ( defined $country ) { $adr->country($country); }
        } elsif ( $adrField =~ m'^\s*([A-Z][A-Za-z ]+\s)?([A-Z]{1,2}[0-9O]{1,2}[A-Z]? [0-9O]{1,2}[A-Z]{2})\s*([A-Z][A-Za-z ]+)?\s*$' ) {
            # UK county and post-code
            my $county = $1;
            chop($county) if defined $county;
            my $post_code = $2;
            my $country   = $3;
            if ( defined $county ) { $adr->region($county); }
            # Correct a letter O misread as the digit 0 (my best reading of
            # the intent of the last two substitutions)
            $post_code =~ s/O([0-9][A-Z]?) ([0-9])/0$1 $2/;
            $post_code =~ s/O([A-Z])? ([0-9])/0$1 $2/;
            $post_code =~ s/ O([0-9])/ 0$1/;
            $post_code =~ s/ ([0-9])O([A-Z]{2})/ ${1}0$2/;
            $adr->post_code($post_code);
            if ( defined $country ) { $adr->country($country); }
        } elsif ( $adrField =~ m'P.*BOX\s*(\d+)'i ) {
            # Post Office Box
            my $po_box = $1;
            $adr->po_box($po_box);
        } elsif ( $adrField =~ m'^\s*(\d+\s+[A-Za-z0-9 ]+$)' ) {
            # House number and street
            my $street = $1;
            $adr->street($street);
        } elsif ( $adrField =~ m'^\s*([A-Z][A-Za-z0-9 ]+$)' ) {
            push @unsorted, $1;
        }
    }
    # Retrieve unsorted items in reverse order
    if ( !defined $adr->country() && $#unsorted > 2 ) {
        my $country = pop @unsorted;
        if ( defined $country && $country =~ m'[A-Z][A-Za-z ]+' ) {
            $adr->country($country);
        } else {
            push @unsorted, $country;
        }
    }
    if ( !defined $adr->region() && $#unsorted > 1 ) {
        my $region = pop @unsorted;
        if ( defined $region && $region =~ m'[A-Z][A-Za-z ]+' ) {
            $adr->region($region);
        } else {
            push @unsorted, $region;
        }
    }
    if ( !defined $adr->city() ) {
        my $city = pop @unsorted;
        if ( defined $city && $city =~ m'[A-Z][A-Za-z ]+' ) {
            $adr->city($city);
        } else {
            push @unsorted, $city;
        }
    }
    if ( !defined $adr->street() ) {
        $adr->street( pop @unsorted );
    }
    if ( $#unsorted >= 0 ) {
        $adr->extended( join( ', ', @unsorted ) );
    }
    chomp( $line = <STDIN> );
    while ( ( defined $line ) && ( $line !~ '^\s*$' ) ) {
        if ( $line =~ m'^\s*(https?\://\S+)'i ) {
            $vCard->url($1);
        } elsif ( $line =~ m'(www\.\S+)'i ) {
            $vCard->url($1);
        } elsif ( $line =~ m'(\S+@\S+)\s*$' ) {
            my $email = $1;
            my $node  = $vCard->add_node( { 'node_type' => 'EMAIL' } );
            my @types = qw(work internet);
            $node->add_types( \@types );
            $node->value($email);
        } elsif ( $line =~ m'^\s*(\+?[0-9() ]+)$' ) {
            my $tel  = $1;
            my $node = $vCard->add_node( { 'node_type' => 'TEL' } );
            my @types = qw(work voice);
            $node->add_types( \@types );
            $node->value($tel);
        } elsif ( $line =~ m'^\s*tel.*\:\s*(\S.*)$'i ) {
            my $tel  = $1;
            my $node = $vCard->add_node( { 'node_type' => 'TEL' } );
            my @types = qw(work voice);
            $node->add_types( \@types );
            $node->value($tel);
        } elsif ( $line =~ m'^\s*fax.*\:\s*(\S.*)$'i ) {
            my $tel  = $1;
            my $node = $vCard->add_node( { 'node_type' => 'TEL' } );
            my @types = qw(work fax);
            $node->add_types( \@types );
            $node->value($tel);
        } elsif ( $line =~ m'^[^:]*name[^:]*\:\s*(\S.*)$'i ) {
            $vCard->fn($1);
        } elsif ( $line =~ m'^\s*([^:]+)\:\s*$' ) {
            my $noteHead = $1;
            chomp( $line = <STDIN> );
            if ( $line =~ m'^\s*(https?\://\S+)'i ) {
                $vCard->url($1);
            } elsif ( $line =~ m'(www\.\S+)'i ) {
                $vCard->url($1);
            } elsif ( $line =~ m'(\S+@\S+)\s*$' ) {
                my $email = $1;
                my $node  = $vCard->add_node( { 'node_type' => 'EMAIL' } );
                my @types = qw(work internet);
                $node->add_types( \@types );
                $node->value($email);
            } elsif ( $line =~ m'^\s*(\+?[0-9() ]+)$' ) {
                my $tel  = $1;
                my $node = $vCard->add_node( { 'node_type' => 'TEL' } );
                my @types = qw(work voice);
                $node->add_types( \@types );
                $node->value($tel);
            } else {
                $vCard->note("$noteHead: $line");
            }
        } elsif ( $line =~ m'^\s*([^:]+)\:\s*(\S.*)$' ) {
            my $noteHead = $1;
            my $note     = $2;
            $vCard->note("$noteHead: $note");
        } else {
            print STDERR "Cannot understand entry detail: $line\n";
        }
        chomp( $line = <STDIN> );
    }
}

1;    # End.
(Look out for cut-off long lines in the above - copy/paste works far better in Firefox than in Internet Explorer. If you have problems, ask me to e-mail you a copy of the script). The script spits out the resulting vCard file to the standard output. Anything it can't parse is echoed to standard error.
I found a really nifty converter from VCF to CSV at http://labs.brotherli.ch/vcfconvert/ - once you've got a pile of addresses in spreadsheet format, you can do anything with it.
Thursday, 24 July 2008
Dayfindr - a suitable case for viral marketing
I frequently recommend my colleague Ben Nortier's demonstration application Dayfindr. This is a simple calendar co-ordinator, allowing an arbitrarily large group of people to find a date or dates when everyone will be available to undertake some kind of joint activity, such as a meeting, theatre visit etc. Although it doesn't deal with subtleties like specific times of day, the comment facility is there to let you fine-tune your entries.
Briefly, Ben put this together as an exercise in Erlang / OTP programming and it's getting an ever-increasing amount of use simply through word-of-mouth recommendation. So far it has easily coped with everything that's been thrown at it. Every time I show it to someone, it knocks them sideways with its straightforward simplicity, robustness and speed, even though it is hosted on a virtual server with very frugal memory and processor allocations. Give it a try and see what you think!
Wednesday, 16 July 2008
Software Practice - The State of the Art
The British Computer Society's specialist group for Software Practice Advancement is staging a series of talks entitled The State of the Art. This features five talks from six of the leading experts in their respective fields, ranging from architecture to team dynamics.
It's excellent value at just £90 plus VAT - discounted to just £75 plus VAT for members of the specialist group or of the BCS generally. Numbers are limited, so book soon!
Tuesday, 8 July 2008
Maven multi-module builds and Eclipse PDE
I have not blogged about Maven and Eclipse for a while, but I have learned a few important new lessons recently.
1. Even though OSGi gives you the capability of supporting multiple versions of the same bundle simultaneously, the same isn't true of Eclipse features that contain those bundles (a.k.a. Eclipse plug-ins). So I discovered that when feature A depends on features B and C, where B depends on feature D version 1.1.0 and C depends on feature D version 1.1.2, you will get two versions of feature D in your Eclipse target platform, resulting in a cryptic error message from the PDE build of the form "unable to find feature: D". What it means is that B wants version 1.1.0 of D, but this has been masked by the presence of version 1.1.2. Our likely approach to this will be to avoid putting version dependencies (at least, versions of included features) into feature.xml files at all. There is no need, after all: the precise versions of everything needed should be specified in the accompanying POM, which assembles the RCP target platform for the build.
2. If you're using multi-module builds, there is no point in using goals such as "test" or "package". Unless you at least install the artifact created during the build of each module into your local Maven repository, the next module along will still get the previous version as a dependency, and the build will fail.
3. If you're using multi-module builds, don't run the install or deploy goal in the same invocation as the site or site-deploy goal. Maven tries to generate the site documentation for the top level before it has built the subordinate levels, so the dependencies required to create the reports are not present and the build will fail. We now routinely run "mvn clean install" or "mvn clean deploy" followed by "mvn site" or "mvn site-deploy" respectively.
Monday, 12 May 2008
Government security experts express concerns about ID Card scheme
In today’s Observer, it is revealed that the Government’s own security experts are worried about Labour’s plans to introduce mandatory ID cards. Here are some choice quotes:
"In a potentially damaging revelation, which undermines claims that the scheme will enhance national security, the group has concluded that [ID cards] will be prone to corruption...
"The Isap report goes on to warn that the scheme may not be embraced by government departments, suggesting the cards are not being well received in some Whitehall departments.
"The panel also warns the initiative is struggling to fulfil its remit. It states that the scheme lacks a ‘robust and transparent operational data governance regime and clear data architecture’, suggesting there is confusion over its roll-out."
"In a potentially damaging revelation, which undermines claims that the scheme will enhance national security, the group has concluded that [ID cards] will be prone to corruption...
"The Isap report goes on to warn that the scheme may not be embraced by government departments, suggesting the cards are not being well received in some Whitehall departments.
"The panel also warns the initiative is struggling to fulfil its remit. It states that the scheme lacks a ‘robust and transparent operational data governance regime and clear data architecture’, suggesting there is confusion over its roll-out."
Thursday, 24 April 2008
Eclipse and Maven - a quick update
I have not posted anything about the progress of my researches into Eclipse and Maven integration for a while. That's because for a long time we made no significant headway. Just recently, though, we've started making real progress again.
A very important lesson was that the pde-maven-plugin is really not worth persevering with - at least in our situation. We decided that we wanted to build every Eclipse plug-in and every Eclipse feature using its own POM, so that we could, for example, generate code coverage reports for each component individually and also configuration-manage them at that level of granularity. It turned out to be not all that difficult to invoke the PDE headless build from Maven under these conditions, using about two dozen lines of Ant script.
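For the record, the core of that Ant script looks something like the following sketch, which we run via the maven-antrun-plugin. Property names such as eclipse.home, pde.build.dir, equinox.launcher.jar and target.platform.dir, and the build-config directory, are placeholders for our own conventions, not anything the PDE defines:
<!-- Launch the PDE headless build via the Eclipse antRunner application -->
<java jar="${equinox.launcher.jar}" fork="true" failonerror="true">
  <arg value="-application"/>
  <arg value="org.eclipse.ant.core.antRunner"/>
  <arg value="-buildfile"/>
  <arg value="${eclipse.home}/plugins/${pde.build.dir}/scripts/build.xml"/>
  <!-- builder points at the directory holding this component's build.properties -->
  <arg value="-Dbuilder=${basedir}/build-config"/>
  <arg value="-DbuildDirectory=${basedir}/../.."/>
  <arg value="-DbaseLocation=${target.platform.dir}"/>
</java>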
Nine months after the release of Eclipse 3.3, the pde-maven-plugin has still not been officially updated to handle the changes since Eclipse 3.2 (we had to use the Alpha 2 snapshot version downloaded from Codehaus), so we're concerned that we might have similar upward-compatibility problems when Eclipse 3.4 comes out in a few months.
When it's all been finalised, I hope to publish a longer explanation of how our build system works here. Don't hold your breath though; at the moment we still have to solve a few issues - for example, we need to figure out how to produce the source plug-ins alongside the executable ones.
Thursday, 3 April 2008
Having problems with string matching / replacing in Java?
I find it almost impossible to write Java regular expressions correctly the first time. They are pretty hard to get your head around (unless your head is a peculiar shape :-)
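By way of illustration, here is the sort of throwaway scratch-pad class I end up writing; the pattern and test strings are invented for the example:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexScratchPad {
    public static void main(String[] args) {
        // The regex \d must be written "\\d" in Java source, because the
        // string literal consumes one level of backslash escaping.
        Pattern date = Pattern.compile("(\\d{2})/(\\d{2})/(\\d{4})");
        Matcher m = date.matcher("Posted on 03/04/2008");
        if (m.find()) {
            // Reassemble as an ISO date using the captured groups
            System.out.println(m.group(3) + "-" + m.group(2) + "-" + m.group(1));
        }
        // In replacements, group references are $n rather than \n - another trap
        System.out.println("Posted on 03/04/2008".replaceAll(
                "(\\d{2})/(\\d{2})/(\\d{4})", "$3-$2-$1"));
    }
}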
A trial-and-error approach is much eased by FileFormat's Regular Expression Test Page. At the bottom of the page you'll find a number of links to useful references and tutorials, which may help you eliminate some of the error from this approach.
The site is worth bookmarking anyway for its range of format conversion tools and information about file extensions.
Tuesday, 1 April 2008
Taking the pain out of Eclipse RCP builds and exports
It is well worth reading the manual, as the old saying goes - but sometimes the manual can be misleading, especially if the software being documented has evolved since it was written.
Eclipse Help
If you're using Eclipse 3.3, the advice in the 2006 edition of The Book about enabling Help in your Eclipse RCP applications is no longer quite accurate. Section 13.2 ("Getting the Help Plug-ins") tells you to install lots of things, while my experimentation has shown that all you need are the following plug-ins in your target platform and any launch configuration, plus their dependencies:
- org.eclipse.help.ui
- org.eclipse.help.webapp
The dependencies, all of which you can copy from the plugins folder of your Eclipse installation to your target platform, are as follows:
- org.eclipse.help.base
- org.apache.jasper
- org.apache.lucene
- org.apache.lucene.analysis
- org.eclipse.equinox.http.jetty
- org.eclipse.equinox.http.registry
- org.eclipse.equinox.http.servlet
- org.eclipse.equinox.jsp.jasper
- org.eclipse.equinox.jsp.jasper.registry
- org.eclipse.osgi.services
- javax.servlet
- javax.servlet.jsp
- org.mortbay.jetty
- org.apache.commons.logging
- org.apache.commons.el
Editing and Debugging
For some things, Eclipse needs source plug-ins to be present in the target platform location, not just in the installation location. I have not yet established all the circumstances under which this may be the case, but I have found that if you are trying to use the special-purpose editor to work on a plugin.xml file, unless the source plug-ins are present in the target platform location, when you right-click on an extension point in the Extensions tab you won't see the correct extension types in the context menu (just "Generic").
To support the most common types of UI extension, such as menu entries, copy the source plug-ins for org.eclipse.platform and org.eclipse.rcp from your Eclipse plugins folder into the target platform (hint: they are folders!). It is not even necessary to select them when specifying the target platform in Eclipse Preferences, but it does no harm.
Source organisation
It turns out that the line of least resistance is to separate your plug-ins and features into two groups:
- All the stuff you have access to in source form directly from its repository
- Everything else - basically the Eclipse and third-party stuff
For the first category, I recommend setting up your source folder structure to match the output in the build directory.
- product root
  - features
    - a.b.c.d.feature
    - etc.
  - plugins
    - a.b.c.d
    - a.b.c.d.product
    - a.b.c.d.update-site
    - etc.
Ignore the features supplied with the Eclipse SDK, RCP Runtime and Delta Pack. Instead, define your own features to include just the plug-ins on which your application depends. It helps to factorise this into a half-dozen features that collect Eclipse plug-ins for various capabilities - e.g. the feature com.example.myproject.help.feature could be used to list all the plug-ins shown above that are needed to provide application help. Then if you want to include help with an application, all it has to do is list this feature in its application feature.xml as an included feature (of course you still have to write the help texts and create a help plug-in to contain them!).
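As an illustration (the feature id is hypothetical), such a help feature might look like this, pulling in the plug-ins from the list above; the 0.0.0 version convention is explained in the next section:
<?xml version="1.0" encoding="UTF-8"?>
<feature id="com.example.myproject.help.feature" label="Application help support" version="1.0.0">
   <plugin id="org.eclipse.help.ui" download-size="0" install-size="0" version="0.0.0" unpack="false"/>
   <plugin id="org.eclipse.help.webapp" download-size="0" install-size="0" version="0.0.0" unpack="false"/>
   <plugin id="org.eclipse.help.base" download-size="0" install-size="0" version="0.0.0" unpack="false"/>
   <plugin id="org.eclipse.equinox.http.jetty" download-size="0" install-size="0" version="0.0.0" unpack="false"/>
   <!-- ... and so on for the remaining plug-ins and dependencies listed above -->
</feature>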
Version numbering
In product.xml and feature.xml files, it is a good idea to set the version numbers of included plug-ins and features to 0.0.0 (or a range, where applicable) to ensure that the latest version is always used during the build - the exported feature.xml files will end up containing the resolved version numbers.
Saturday, 15 March 2008
BCS survey underlines laxity of Government data protection
As if more evidence were needed for the undesirability of a Government-run national identity register, Channel 4 News reported last night that a British Computer Society survey had demonstrated a total absence of data accuracy audits or data correction budgets in 14 out of 14 UK Government departments.
Monday, 10 March 2008
Other models for identity registration
A story appeared on Slashdot today comparing the UK and US identity database schemes with the Jukinet system that's been quietly running in Japan since 1992. Some of the responses are quite insightful and informative. People describe systems that exist in Bulgaria, Japan, Sweden, Norway and other places, each of which has some things to recommend it.
For me, I think the main lesson is that yes, my personal data is already held in many different databases, both of government agencies such as the tax authorities and of private companies such as my credit card issuer. However, data protection legislation exists explicitly to prevent anyone, government or private, misusing this data by combining disparate databases to build a "profile" of me as an individual and to use that to my advantage or disadvantage. I fundamentally object to paying a huge amount of money so that this government can ride roughshod over those rights of the citizen. It isn't so much a question of privacy, more of protecting the individual against the might of the state.
I quite like the idea of an identity service as something that people can subscribe to if they wish - much like the trust providers used in a Public Key Infrastructure (PKI). There should be a free market in personal authentication, just as there is on the Internet. This would drive down prices and encourage the development of value-added services.
Friday, 7 March 2008
Latest ID Register scam
In a desperate bid to ram through its controversial plans for a national identity register and the associated ID cards, the British government has announced yet another defenceless section of society to be targeted: students.
Not content with making university students pay over the odds for what are frequently sub-standard educational opportunities, the government now plans to "encourage" them to supply personal details including biometric data "voluntarily" in order to "help" them access educational services. Home Secretary Jacqui Smith claims that "young people who register for an ID card will find it easier to enrol on a course, apply for a student loan or open a bank account". The implication is that if they do not agree to register, they will find it hard or impossible to enrol on a course, get a student loan or open a bank account.
If that doesn't amount to coercion, I don't know what does. As a parent of two students currently at university, I feel very strongly about this. It is very reminiscent of the kind of vindictive tactics employed by the morally bankrupt East German government in the communist era. Moreover, the government expects to add me (an EU national resident and working in the UK) to its database automatically in a few years' time "unless I opt out", which I certainly intend to do.
A series of high-profile security breaches recently proved that governments generally, and this one in particular, cannot be trusted to handle personal information securely or to refrain from using it as a means of coercion. The sausage-slicing approach it has adopted for introducing the scheme is evidence that it knows it faces a massive revolt if it were to apply the same rules to the whole population in one go. By bringing in the scheme in this sneaky, insinuating way, the government makes me more convinced than ever that its motivation for the identity register is anything but the publicly stated one of making us "confident that other people are who they say they are" - security experts have already shown that an ID card of the current design will be relatively easy to fake, so that pretext doesn't persuade me at all.
I don't normally subscribe to conspiracy theories but what else can you believe in the face of this government's announcements and actions?
Thursday, 6 March 2008
More on the Maven POM for PDE headless builds
It turns out that you can override almost any property in the build.properties file using an equivalent plugin configuration item within the buildProperties section.
Moreover, by observing certain conventions the job is made much easier. One of these is that the PDE build takes place within a "build directory" that contains subdirectories named "features" and "plugins", and that the product's project directory (where pom.xml resides) is one of the subdirectories of "plugins". Therefore, in the Maven POM, it would be logical to define buildDirectory as "../..". Here's a possible approach:
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>pde-maven-plugin</artifactId>
<version>1.0-alpha-2-SNAPSHOT</version>
<extensions>true</extensions>
<!-- Custom lifecycle configuration -->
<configuration>
<eclipseInstall>${env.M2_HOME}/../eclipse</eclipseInstall>
<pdeProductFilename>prototyp.product</pdeProductFilename>
<pdeBuildVersion>3.3.2.R331_v20071019</pdeBuildVersion>
<buildProperties>
<base>${maven.work.dir}</base>
<baseLocation>${maven.work.dir}/eclipse</baseLocation>
<buildDirectory>${basedir}/../..</buildDirectory>
</buildProperties>
</configuration>
<!-- Also bind to mvn clean -->
<executions>
<execution>
<id>clean-pde</id>
<phase>clean</phase>
<goals>
<goal>clean</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Please note: we use a convention that the Maven installation is alongside the Eclipse IDE installation, hence the definition of eclipseInstall above. We also define the maven.work.dir property in the settings.xml file - this is where all the stuff goes that Maven needs, e.g. the local repository and the Eclipse PDE target platform.
Currently I am working on a Maven plugin that will assemble the Eclipse PDE target platform automatically from project dependencies. I'll document this here once it's working.
Sunday, 2 March 2008
Business Process Transaction Monitoring (BPTM)
I went along to a meeting of the BCS Kingston & Croydon branch last Tuesday, at which a group of people from BT's Design group, who specialise in Systems and Application Monitoring and Management tools, revealed some astonishing achievements in a very low-key way, as if they had no idea how important they were.
These people have distilled their dozens of years of experience of managing increasingly complex distributed systems with few staff and fewer tools into a powerful yet spare vocabulary (or ontology, to use a fancy term) that efficiently describes the universe of discourse. It includes such concepts as server, virtual machine, date, time, business process, transaction, event-type and (very important) end-to-end correlation key, which precisely locates a reported event in a specific application component. All this, logically enough, is aligned with the ITIL standard for service delivery.
But not only that: they've defined binary, textual and graphical representations of log entries or event notifications that capture all this information, together with a very simple API implemented by a standard code library (I understand that a Java implementation is available, but there may be support for other languages too). Not least, there is a defined process for integrating an application into the service monitoring and management framework.
Most applications already generate copious log information, and most commercial monitoring tools work by scanning the log files for interesting events. You have to configure patterns that allow the monitoring software to recognise different events. As a result, all large-scale monitoring infrastructures are permanently out of date with respect to the log formats and events generated by the applications, which are continually evolving. Moreover, the sheer volumes of log information generated mean that monitoring products that take this approach tend to be overwhelmed by the deluge of data and can find it difficult to react in a timely manner to real problem situations when they arise.
BT's BPTM takes a different approach: the application is "instrumented" by wrapping existing calls to the system logging facility, at which point it's much easier to identify the meaning of the logged information in terms of the underlying data model and to add any missing properties (such as system identifier, timestamp and e2e correlation key). As a result, team boss Ian Johnston claims that an average application can be instrumented in one day (preceded by a one-day workshop to identify the requirements of managing that application, and followed by another day to roll out and test the instrumented version of the code).
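To give a flavour of the technique - and emphatically not BT's actual API, which I have not seen - a wrapper of this kind might look roughly like this in Java:
import java.util.UUID;
import java.util.logging.Logger;

/** Illustrative sketch only: enriches ordinary log calls with the properties
    needed for end-to-end correlation (system id, timestamp, e2e key). */
public class InstrumentedLogger {
    private final Logger delegate;
    private final String systemId;

    public InstrumentedLogger(Logger delegate, String systemId) {
        this.delegate = delegate;
        this.systemId = systemId;
    }

    // Start of a business transaction: mint a correlation key that the
    // application passes downstream with every related call.
    public String beginTransaction(String eventType, String detail) {
        String key = UUID.randomUUID().toString();
        event(eventType, key, detail);
        return key;
    }

    // Report a single business event in a uniform, machine-readable form.
    public void event(String eventType, String correlationKey, String detail) {
        delegate.info(String.format("system=%s ts=%d e2e=%s type=%s %s",
                systemId, System.currentTimeMillis(), correlationKey, eventType, detail));
    }
}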
The BPTM library takes a "reporting by exception" approach to cut down on the amount of communication required. For example, events that are expected and that duly occur are merely logged locally by the application. This measure alone reduces the management data traffic by a factor of 20:1 on average. Then there are event correlation rules that can recognise typical failure scenarios and offer scripted diagnostic and remediation advice to support staff, many of whom are offshore.
By using this combination of approaches, the design group has already equipped over 80 separate applications in the "BT Matrix" or Service Oriented Architecture to be centrally monitored and managed. Newly instrumented applications are auto-discovered by the BPTM infrastructure - they simply hook themselves into the reporting network and pop up on the monitoring console (which is of course a rich Internet application).
Operators are alerted to emergency situations, such as service bottlenecks, via a variety of mechanisms. The primary user interface is a mimic diagram, which shows the flow of messages that make up an end-to-end business transaction through a series of components. The user can drill in to see both more detail and historical trend information, so that e.g. new server capacity can be brought on-stream before a bottleneck becomes critical.
It's obviously in BT's interest to publicise the BPTM standard so that more suppliers will start using it and building it into their products from the outset. But I don't think that Ian and his team are going about this in the right way yet. To build up momentum, it is not enough to hold occasional talks to BCS branches, where you reach at most 20 interested individuals at a time. You need to convince the solution architects and other decision makers that this is the right way to go. The first thing to do is to publish the standard, and simultaneously or not long afterwards, make the libraries that implement it Open Source. This should create a community of interest across the industry. After all, large service-oriented architectures are becoming increasingly common, in all market sectors, not just in telecoms, so the management headache is shared by all projects. Then some judiciously targeted white papers and articles should appear in the appropriate journals, and the trade press needs to be made aware.
If publicised in the right way, I can't see how this technology can fail to make waves.
A British company providing first-class products
I've been very happy with my DNUK Linux server, which I purchased in September 2001 to provide filing services, web hosting etc. to the whole family. It has been doing duty in the cupboard under the stairs day in, day out without complaint. Apart from some disk errors that cropped up soon after I bought the machine - which DNUK sorted out quickly and very satisfactorily under the warranty - yesterday was the first time it has ever gone wrong.
The power supply failed, basically. I managed to find a repair man locally who was able to fit a replacement in about 30 minutes. The old one had some kind of loose bits rattling around in it - probably glass from a blown fuse. Now that a new PSU has been fitted, it's back to work as usual. I fully expect to get another four years or more of use out of it.
The repair man was very complimentary about the build quality of the machine and the tank-like solidity of the chassis. I too was impressed, but then I am not really a judge of these things. All I remember was the pleasure of dealing with a company that was small enough to treat its customers like real people, yet had all the snazzy web-based product selection and customisation capabilities you would expect of a major supplier. And it was excellent value for money, too.
Thursday, 28 February 2008
What is "droplifting"?
Read the highly entertaining story of how a late friend's unpublished novel got to be sold in a high-street chain of bookstores.
Wednesday, 27 February 2008
London Mayoral Candidates and the National Identity Register
In late January or early February, I canvassed the candidates of major parties for the May 2008 election of the London Mayor, asking them to come out publicly with a statement that they would not co-operate with any national identity register (ID Cards) scheme if elected. I wrote:
I would like to know where the [your party here] candidate in the 2008 London Mayoral election stands regarding the national identity register. Will (s)he resist pressure to withhold services from people who do not have an identity card? The government intends to make the "voluntary" scheme effectively compulsory by making it impossible to get a new passport, driving licence etc. without an ID card. If London refuses to play the game, it will be doing its citizens a great service.
The Green party candidate, Siân Berry, responded with alacrity, saying that she had already signed the No2ID pledge in 2004 and was well known for her stand against the scheme (see http://camden.greenparty.org.uk/news/newsidcards.html and other sites). She goes on:
"I would not allow any services for Londoners to depend on the scheme, and would urge Londoners to resist the ID system individually as well.
You may also be interested in our Census Alert campaign to stop Lockheed Martin being given the contract to run the UK Census in 2011. This is a campaign I set up last year after finding out they were on the final shortlist, and there are now politicians from a wide range of parties supporting the cause. See www.censusalert.org.uk for more details."
So that's a result then.
The Liberal Democrat party's candidate, Brian Paddick, sent me a personal letter by snail mail, which arrived today, in which he states that he is totally opposed to ID cards. From a former police officer, this comes as a nice surprise (except to those who have been following his campaign closely) and is in line with the national party's position as represented by Nick Clegg (see http://www.youtube.com/user/LibDem). Nick Clegg has publicly pledged to refuse to be registered, even if it means being taken to court.
Sadly, Brian Paddick did not go so far as to pledge that London would refuse to participate in any coercion by the Government to force people to join the scheme.
I didn't get any response from either the Conservative or (New?) Labour candidates or their representatives.
However, Conservative candidate Boris Johnson is on record as opposing the scheme - see http://www.youtube.com/watch?v=kZAAzSzleWk.
Only Labour candidate Ken Livingstone is really letting the side down. According to an Andrew Marr interview mentioned on the NO2ID site, he actually supports the ID card scheme - see http://boardreader.com/t/Articles_Publications_150279/Andrew_Marr_Show_Ken_Livingstone_Intervi_19360.html.
I hope this helps you when it comes to making the decision in May!
Friday, 22 February 2008
Further background information about Maven PDE builds
I was asked in a comment to publish the settings.xml file for my Maven installation, because the correspondent was not able to build the example project successfully.
Our project makes use of a local Maven proxy server to mirror the combined contents of the Maven Central repository, various snapshot repositories and the project's own built artifacts. In the Maven conf directory, we have therefore installed the following settings.xml file, which reflects this arrangement:
<settings>
<servers>
<server>
<id>project-repo</id>
<username>...</username>
<password>...</password>
</server>
</servers>
<profiles>
<profile>
<id>main-profile</id>
<properties>
<!-- settings.localRepository is set by the USER settings.xml -->
<!-- Maven working directory -->
<maven.work.dir>${settings.localRepository}/..</maven.work.dir>
<!-- Clover working directory: clover license is expected in ${clover.work.dir}/license -->
<clover.work.dir>${maven.work.dir}/clover-work</clover.work.dir>
<!-- Additional plugin provider path for Eclipse target platform -->
<plugin.target.dir>${maven.work.dir}/plugin_target_dir</plugin.target.dir>
<!-- project sites will be deployed to ${site.dir} with mvn site:deploy -->
<site.dir>${maven.work.dir}/maven_site</site.dir>
</properties>
<repositories>
<repository>
<id>central</id>
<name>Internal Mirror of Central Repositories</name>
<url>http://....:9999/repository/</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>central</id>
<name>Internal Mirror of Central Plugins Repository</name>
<url>http://....:9999/repository/</url>
</pluginRepository>
</pluginRepositories>
</profile>
</profiles>
<activeProfiles>
<activeProfile>main-profile</activeProfile>
</activeProfiles>
</settings>
In the .m2 directory under the user's home directory, we install a minimal settings.xml, which simply indicates where to find the local Maven repo on your hard disk. However, I have customised my copy so that I can do builds when not connected to the company network but with access to the Internet. This shows you where you can obtain things like the PDE Maven plugin snapshot.
<settings>
<localRepository>C:\data\...\maven_repo</localRepository>
<profiles>
<profile>
<id>publicRepos</id>
<activation>
<property>
<name>searchPublicRepos</name>
<value>true</value>
</property>
</activation>
<repositories>
<repository>
<id>codehausSnapshots</id>
<name>Codehaus Snapshots</name>
<releases>
<enabled>false</enabled>
<updatePolicy>always</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>fail</checksumPolicy>
</snapshots>
<url>http://snapshots.maven.codehaus.org/maven2</url>
<layout>default</layout>
</repository>
<repository>
<id>mavenCentral</id>
<name>Maven Central</name>
<releases>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</releases>
<snapshots>
<enabled>false</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</snapshots>
<url>http://repo1.maven.org/maven2/</url>
<layout>default</layout>
</repository>
<repository>
<id>mavenSnapshots</id>
<name>Maven Snapshots</name>
<releases>
<enabled>false</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</snapshots>
<url>http://people.apache.org/repo/m2-snapshot-repository/</url>
<layout>default</layout>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>codehausSnapshots</id>
<name>Codehaus Snapshots</name>
<releases>
<enabled>false</enabled>
<updatePolicy>always</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>fail</checksumPolicy>
</snapshots>
<url>http://snapshots.maven.codehaus.org/maven2</url>
<layout>default</layout>
</pluginRepository>
<pluginRepository>
<id>mavenCentral</id>
<name>Maven Central</name>
<releases>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</releases>
<snapshots>
<enabled>false</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</snapshots>
<url>http://repo1.maven.org/maven2/</url>
<layout>default</layout>
</pluginRepository>
<pluginRepository>
<id>mavenSnapshots</id>
<name>Maven Snapshots</name>
<releases>
<enabled>false</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</snapshots>
<url>http://people.apache.org/repo/m2-snapshot-repository/</url>
<layout>default</layout>
</pluginRepository>
</pluginRepositories>
</profile>
</profiles>
</settings>
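With this profile in place, a build that should look beyond the local repository can be run by setting the activation property on the command line, for example:
mvn clean install -DsearchPublicRepos=true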
As I mentioned in passing in my original post, you also have to make sure that you (a) correct the version number of two dependencies in the pde-maven-plugin's POM, and (b) upload all the Eclipse plugins that the pde-maven-plugin needs to your local repository or to your project repository, using the eclipse:to-maven goal.
I have actually set up a Maven project to upload Eclipse plugins repeatably and reliably, in case we need to clean up and rebuild the project repository (e.g. after an upgrade of Eclipse). Here's the POM:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>....</groupId>
<artifactId>devclipse-to-maven</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Eclipse plugins to Maven Artifacts converter</name>
<url>....</url>
<description>....</description>
<!-- ******************************************************************
Inherit from project grandparent pom
****************************************************************** -->
<parent>
<groupId>....</groupId>
<artifactId>configuration</artifactId>
<version>1.8</version>
</parent>
<!-- ******************************************************************
Environment Information
****************************************************************** -->
<scm>
<connection>
scm:svn:http://..../devclipse_to_maven/
</connection>
<developerConnection>
scm:svn:http://..../devclipse_to_maven/
</developerConnection>
<tag>HEAD</tag>
<url>
http://..../devclipse_to_maven/
</url>
</scm>
<!-- ******************************************************************
Build configuration
****************************************************************** -->
<properties>
<eclipse.source.dir>${env.M2_HOME}/../eclipse</eclipse.source.dir>
<eclipse.ext.dir>${env.M2_HOME}/../ext/eclipse</eclipse.ext.dir>
<eclipse.usr.dir>${env.M2_HOME}/../usr/eclipse</eclipse.usr.dir>
<eclipse.target.dir>${project.build.directory}/eclipse</eclipse.target.dir>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<id>copy-plugins-and-features</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks description="Grab the features and plugins from
the Eclipse SDK that have been found empirically to
be required as part of the target platform. We will
upload them to the Maven repository in the next phase.">
<mkdir dir="${eclipse.target.dir}/plugins"/>
<copy todir="${eclipse.target.dir}/plugins">
<fileset dir="${eclipse.source.dir}/plugins">
<include name="com.ibm.icu_*.jar"/>
<include name="org.eclipse.ant.core_*.jar"/>
<include name="org.eclipse.compare_*.jar"/>
<include name="org.eclipse.core.commands_*.jar"/>
<include name="org.eclipse.core.contenttype_*.jar"/>
<include name="org.eclipse.core.databinding_*.jar"/>
<include name="org.eclipse.core.expressions_*.jar"/>
<include name="org.eclipse.core.filebuffers_*.jar"/>
<include name="org.eclipse.core.filesystem_*.jar"/>
<include name="org.eclipse.core.jobs_*.jar"/>
<include name="org.eclipse.core.net_*.jar"/>
<include name="org.eclipse.core.resources_*.jar"/>
<include name="org.eclipse.core.runtime.compatibility.auth_*.jar"/>
<include name="org.eclipse.core.runtime_*.jar"/>
<include name="org.eclipse.core.variables_*.jar"/>
<include name="org.eclipse.debug.core_*.jar"/>
<include name="org.eclipse.equinox.app_*.jar"/>
<include name="org.eclipse.equinox.common_*.jar"/>
<include name="org.eclipse.equinox.preferences_*.jar"/>
<include name="org.eclipse.equinox.registry_*.jar"/>
<include name="org.eclipse.help_*.jar"/>
<include name="org.eclipse.jdt.core_*.jar"/>
<include name="org.eclipse.jdt.debug_*/**/*"/>
<include name="org.eclipse.jdt.launching_*.jar"/>
<include name="org.eclipse.jface.databinding_*.jar"/>
<include name="org.eclipse.jface.text_*.jar"/>
<include name="org.eclipse.jface_*.jar"/>
<include name="org.eclipse.osgi_*.jar"/>
<include name="org.eclipse.rcp.source.win32.win32.x86_*/**/*"/>
<include name="org.eclipse.swt.win32.win32.x86_*.jar"/>
<include name="org.eclipse.swt_*.jar"/>
<include name="org.eclipse.team.core_*.jar"/>
<include name="org.eclipse.team.ui_*.jar"/>
<include name="org.eclipse.text_*.jar"/>
<include name="org.eclipse.ui_*.jar"/>
<include name="org.eclipse.ui.console_*.jar"/>
<include name="org.eclipse.ui.editors_*.jar"/>
<include name="org.eclipse.ui.forms_*.jar"/>
<include name="org.eclipse.ui.ide_*.jar"/>
<include name="org.eclipse.ui.navigator_*.jar"/>
<include name="org.eclipse.ui.navigator.resources_*.jar"/>
<include name="org.eclipse.ui.views_*.jar"/>
<include name="org.eclipse.ui.views.properties.tabbed_*.jar"/>
<include name="org.eclipse.ui.workbench_*.jar"/>
<include name="org.eclipse.ui.workbench.texteditor_*.jar"/>
<include name="org.eclipse.update.configurator_*.jar"/>
<include name="org.eclipse.update.core_*.jar"/>
<include name="org.eclipse.update.ui_*.jar"/>
</fileset>
</copy>
<mkdir dir="${eclipse.target.dir}/features"/>
<copy todir="${eclipse.target.dir}/features">
<fileset dir="${eclipse.source.dir}/features">
<include name="none_*"/>
</fileset>
</copy>
<copy todir="${eclipse.target.dir}/plugins"
failonerror="false">
<fileset dir="${eclipse.ext.dir}/plugins">
<include name="org.polarion.team.svn.client.javahl.win32_*/**/*"/>
<include name="org.polarion.team.svn*.jar"/>
<include name="org.sf.easyexplore_*.jar"/>
</fileset>
</copy>
<copy todir="${eclipse.target.dir}/features"
failonerror="false">
<fileset dir="${eclipse.ext.dir}/features">
<include name="none_*"/>
</fileset>
</copy>
<copy todir="${eclipse.target.dir}/plugins"
failonerror="false">
<fileset dir="${eclipse.usr.dir}/plugins">
<include name="none_*"/>
</fileset>
</copy>
<copy todir="${eclipse.usr.dir}/features"
failonerror="false">
<fileset dir="${eclipse.ext.dir}/features">
<include name="none_*"/>
</fileset>
</copy>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-eclipse-plugin</artifactId>
<version>2.4</version>
<executions>
<execution>
<id>installation</id>
<phase>install</phase>
<configuration>
<eclipseDir>${eclipse.target.dir}</eclipseDir>
</configuration>
<goals>
<goal>to-maven</goal>
</goals>
</execution>
<execution>
<id>deployment</id>
<phase>deploy</phase>
<configuration>
<eclipseDir>${eclipse.target.dir}</eclipseDir>
<deployTo>${distributionManagement.repository.id}::default::${distributionManagement.repository.url}</deployTo>
</configuration>
<goals>
<goal>to-maven</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<!-- ******************************************************************
Project Information
****************************************************************** -->
<organization>
<name>....</name>
<url>http://..../</url>
</organization>
<developers>
<developer>
<id>ihu</id>
<name>Immo Hüneke</name>
<email>ihu@zuhlke.com</email>
<url>http://aspsp.blogspot.com/</url>
<organization>Zühlke Engineering</organization>
<organizationUrl>http://www.zuhlke.com/</organizationUrl>
<roles>
<role>architect</role>
<role>developer</role>
</roles>
<timezone>0</timezone>
<properties>
<picUrl>http://www.spaconference.org/cgi-bin/wiki.pl/?mugimmohuneke.jpg</picUrl>
</properties>
</developer>
</developers>
<inceptionYear>2008</inceptionYear>
</project>
Hope this helps. If you still have problems, please e-mail me your Maven output - I may be able to guess where it is going wrong.