Saturday, 15 March 2008
BCS survey underlines laxity of Government data protection
As if more evidence were needed for the undesirability of a Government-run national identity register, Channel 4 News reported last night that a British Computer Society survey had demonstrated a total absence of data accuracy audits or data correction budgets in 14 out of 14 UK Government departments.
Monday, 10 March 2008
Other models for identity registration
A story appeared on Slashdot today comparing the UK and US identity database schemes with the Jukinet system that's been quietly running in Japan since 1992. Some of the responses are quite insightful and informative. People describe systems that exist in Bulgaria, Japan, Sweden, Norway and other places, each of which has some things to recommend it.
For me, the main lesson is that yes, my personal data is already held in many different databases, both those of government agencies such as the tax authorities and those of private companies such as my credit card issuer. However, data protection legislation exists explicitly to prevent anyone, government or private, from misusing this data by combining disparate databases to build a "profile" of me as an individual and using it to my advantage or disadvantage. I fundamentally object to paying a huge amount of money so that this government can ride roughshod over those rights of the citizen. It isn't so much a question of privacy, more of protecting the individual against the might of the state.
I quite like the idea of an identity service as something that people can subscribe to if they wish - much like the trust providers used in a Public Key Infrastructure (PKI). There should be a free market in personal authentication, just as there is on the Internet. This would drive down prices and encourage the development of value-added services.
Friday, 7 March 2008
Latest ID Register scam
In a desperate bid to ram through its controversial plans for a national identity register and the associated ID cards, the British government has announced yet another defenceless section of society to be targeted: students.
Not content with making university students pay over the odds for what are frequently sub-standard educational opportunities, the government now plans to "encourage" them to supply personal details including biometric data "voluntarily" in order to "help" them access educational services. Home Secretary Jacqui Smith claims that "young people who register for an ID card will find it easier to enrol on a course, apply for a student loan or open a bank account". The implication is that if they do not agree to register, they will find it hard or impossible to enrol on a course, get a student loan or open a bank account.
If that doesn't amount to coercion, I don't know what does. As a parent of two students currently at university, I feel very strongly about this. It is very reminiscent of the kind of vindictive tactics employed by the morally bankrupt East German government in the communist era. Moreover, the government expects to add me (an EU national resident and working in the UK) to its database automatically in a few years' time "unless I opt out", which I certainly intend to do.
A series of recent high-profile security breaches has shown that governments generally, and this one in particular, cannot be trusted to handle personal information securely or to refrain from using it as a means of coercion. The sausage-slicing approach it has adopted for introducing the scheme is evidence that it knows it would face a massive revolt if it applied the same rules to the whole population in one go. By bringing in the scheme in this sneaky, insinuating way, the government makes me more convinced than ever that its motivation for the identity register is anything but the publicly stated one of making us "confident that other people are who they say they are" - security experts have already shown that an ID card of the current design will be relatively easy to fake, so that pretext doesn't persuade me at all.
I don't normally subscribe to conspiracy theories but what else can you believe in the face of this government's announcements and actions?
Thursday, 6 March 2008
More on the Maven POM for PDE headless builds
It turns out that you can override almost any property in the build.properties file using an equivalent plugin configuration item within the buildProperties section.
Moreover, observing certain conventions makes the job much easier. One of these is that the PDE build takes place within a "build directory" that contains subdirectories named "features" and "plugins", and that the product's project directory (where pom.xml resides) is one of the subdirectories of "plugins". Therefore, in the Maven POM, it would be logical to define buildDirectory as "../..". Here's a possible approach:
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>pde-maven-plugin</artifactId>
      <version>1.0-alpha-2-SNAPSHOT</version>
      <extensions>true</extensions>
      <!-- Custom lifecycle configuration -->
      <configuration>
        <eclipseInstall>${env.M2_HOME}/../eclipse</eclipseInstall>
        <pdeProductFilename>prototyp.product</pdeProductFilename>
        <pdeBuildVersion>3.3.2.R331_v20071019</pdeBuildVersion>
        <buildProperties>
          <base>${maven.work.dir}</base>
          <baseLocation>${maven.work.dir}/eclipse</baseLocation>
          <buildDirectory>${basedir}/../..</buildDirectory>
        </buildProperties>
      </configuration>
      <!-- Also bind to mvn clean -->
      <executions>
        <execution>
          <id>clean-pde</id>
          <phase>clean</phase>
          <goals>
            <goal>clean</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Please note: we use a convention that the Maven installation is alongside the Eclipse IDE installation, hence the definition of eclipseInstall above. We also define the maven.work.dir property in the settings.xml file - this is where all the stuff goes that Maven needs, e.g. the local repository and the Eclipse PDE target platform.
Currently I am working on a Maven plugin that will assemble the Eclipse PDE target platform automatically from project dependencies. I'll document this here once it's working.
Sunday, 2 March 2008
Business Process Transaction Monitoring (BPTM)
I went along to a meeting of the BCS Kingston & Croydon branch last Tuesday, at which a group of people from BT's Design group, who specialise in Systems and Application Monitoring and Management tools, revealed some astonishing achievements in a very low-key way, as if they had no idea how important they were.
These people have distilled their decades of combined experience of managing increasingly complex distributed systems with few staff and fewer tools into a powerful yet spare vocabulary (or ontology, to use a fancy term) that efficiently describes the universe of discourse. It includes such concepts as server, virtual machine, date, time, business process, transaction, event type and (very important) end-to-end correlation key, which precisely locates a reported event in a specific application component. All this, logically enough, is aligned with the ITIL standard for service delivery.
Not only that, they have defined binary, textual and graphical representations of log entries and event notifications that capture all this information, together with a very simple API for emitting them, implemented as a standard code library (I understand that a Java implementation is available, but there may be support for other languages too). Not least, there is a defined process for integrating an application into the service monitoring and management framework.
Most applications already generate copious log information, and most commercial monitoring tools work by scanning the log files for interesting events. You have to configure patterns that allow the monitoring software to recognise different events. As a result, all large-scale monitoring infrastructures are permanently out of date with respect to the log formats and events generated by the applications, which are continually evolving. Moreover, the sheer volumes of log information generated mean that monitoring products that take this approach tend to be overwhelmed by the deluge of data and can find it difficult to react in a timely manner to real problem situations when they arise.
BT's BPTM takes a different approach: the application is "instrumented" by wrapping its existing calls to the system logging facility, at which point it is much easier to identify the meaning of the logged information in terms of the underlying data model and to add any missing properties (such as system identifier, timestamp and e2e correlation key). As a result, team boss Ian Johnston claims that an average application can be instrumented in one day (preceded by a one-day workshop to identify the requirements of managing that application, and followed by another day to roll out and test the instrumented version of the code).
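The BPTM library itself isn't public, so here is only a rough Java sketch of that wrapping idea as I understand it; every class, method and field name below is invented for illustration and is not BT's actual API.
// Purely illustrative sketch: BT's BPTM library is not public, so every name
// here is invented; it only shows the idea of wrapping an existing logging
// call so the same event also carries the common data-model properties.
import java.time.Instant;
import java.util.logging.Logger;

public class InstrumentedOrderService {

    private static final Logger log = Logger.getLogger("orders");

    public void submitOrder(String orderId, String e2eCorrelationKey) {
        // The application's existing call to the system logging facility stays in place...
        log.info("Order submitted: " + orderId);

        // ...and the wrapper emits the same occurrence as a structured event naming
        // the system, business process, transaction, event type, end-to-end
        // correlation key and timestamp.
        log.info(structuredEvent("order-service", "order-fulfilment", "submitOrder",
                "ORDER_SUBMITTED", e2eCorrelationKey, Instant.now()));
    }

    private static String structuredEvent(String system, String process, String transaction,
                                          String eventType, String correlationKey, Instant when) {
        // A simple textual representation; the real scheme also defines binary
        // and graphical representations of the same information.
        return String.join("|", system, process, transaction, eventType,
                correlationKey, when.toString());
    }
}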
The BPTM library takes a "reporting by exception" approach to cut down on the amount of communication required. For example, events that are expected and that duly occur are merely logged locally by the application. This measure alone reduces the management data traffic by a factor of about 20 on average. Then there are event correlation rules that can recognise typical failure scenarios and offer scripted diagnostic and remediation advice to support staff, many of whom are offshore.
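Again purely as an illustration of the principle rather than the real library, a minimal sketch of "reporting by exception", assuming a hypothetical filter class that only forwards events falling outside the expected set:
// Hypothetical sketch: the names below are invented to illustrate
// "reporting by exception", not taken from BT's BPTM library.
import java.util.Set;
import java.util.logging.Logger;

public class ExceptionReportingFilter {

    private static final Logger localLog = Logger.getLogger("bptm.local");

    // Event types that the rules treat as "expected" for this transaction (assumed values).
    private final Set<String> expectedEventTypes = Set.of("ORDER_SUBMITTED", "ORDER_CONFIRMED");

    public void handle(String eventType, String e2eCorrelationKey, String detail) {
        if (expectedEventTypes.contains(eventType)) {
            // Expected events are merely logged locally by the application; nothing is
            // sent over the wire, which is where the large traffic saving comes from.
            localLog.fine(eventType + " " + e2eCorrelationKey + " " + detail);
        } else {
            // Unexpected events are forwarded to the central monitoring infrastructure,
            // where correlation rules can match known failure scenarios and offer
            // scripted diagnostic and remediation advice to support staff.
            forwardToCentralMonitoring(eventType, e2eCorrelationKey, detail);
        }
    }

    private void forwardToCentralMonitoring(String eventType, String key, String detail) {
        // Placeholder for whatever transport the real infrastructure uses (assumption).
        localLog.warning("FORWARDED: " + eventType + " " + key + " " + detail);
    }
}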
By using this combination of approaches, the design group has already equipped over 80 separate applications in the "BT Matrix" or Service Oriented Architecture to be centrally monitored and managed. Newly instrumented applications are auto-discovered by the BPTM infrastructure - they simply hook themselves into the reporting network and pop up on the monitoring console (which is of course a rich Internet application).
Operators are alerted to emergency situations, such as service bottlenecks, via a variety of mechanisms. The primary user interface is a mimic diagram, which shows the flow of messages that make up an end-to-end business transaction through a series of components. The user can drill in to see both more detail and historical trend information, so that e.g. new server capacity can be brought on-stream before a bottleneck becomes critical.
It's obviously in BT's interest to publicise the BPTM standard so that more suppliers will start using it and building it into their products from the outset. But I don't think that Ian and his team are going about this in the right way yet. To build up momentum, it is not enough to hold occasional talks to BCS branches, where you reach at most 20 interested individuals at a time. You need to convince the solution architects and other decision makers that this is the right way to go. The first thing to do is to publish the standard, and simultaneously or not long afterwards, make the libraries that implement it Open Source. This should create a community of interest across the industry. After all, large service-oriented architectures are becoming increasingly common, in all market sectors, not just in telecoms, so the management headache is shared by all projects. Then some judiciously targeted white papers and articles should appear in the appropriate journals, and the trade press needs to be made aware.
If publicised in the right way, I can't see how this technology can fail to make waves.
A British company providing first-class products
I've been very happy with my DNUK Linux server, which I purchased in September 2001 to provide filing services, web hosting etc. to the whole family. It has been doing duty in the cupboard under the stairs day in, day out without complaint. With the exception of some disk errors that cropped up soon after I bought the machine, which DNUK sorted out for me very quickly and satisfactorily under the warranty, yesterday was the first time it had ever gone wrong.
The power supply failed, basically. I managed to find a repair man locally who was able to fit a replacement in about 30 minutes. The old one had some kind of loose bits rattling around in it - probably glass from a blown fuse. Now that a new PSU has been fitted, it's back to work as usual. I fully expect to get another four years or more of use out of it.
The repair man was very complimentary about the build quality of the machine and the tank-like solidity of the chassis. I too was impressed, but then I am not really a judge of these things. All I remember was the pleasure of dealing with a company that was small enough to treat its customers like real people, yet had all the snazzy web-based product selection and customisation capabilities you would expect of a major supplier. And it was excellent value for money, too.