Dan met with Rob, Henry and Frank from the ONS data visualisation team thanks to an invite from Oli. All great stuff.
Dan, Robert, Graham and Michael met with Rachel Prosser from the New Zealand Department of Internal Affairs to talk about “machine readable legislation”. There was vague trepidation that this might turn into some kind of Skynet thing with legislation poking out, AR-style, into the world. Like a Matrix but powered by nannies.
But the conversation quickly split into two chunks:
that it’s neither possible nor desirable to make all legislation legible to machines (given interpretation and case law and common law and the preference for being judged by humans, please), but some basic regulations could probably be captured. Questions like: I would like to sell this type of thing; would I have to pay VAT? Also provenance between official regulations (like traffic speeds) and consumer endpoints (like sat navs), and what fuzziness exists in the current gaps. Rachel’s looking for the simplest thing in order to kick off some proof of concept work.
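The “simplest thing” here might look like a trivially small rule table. As a toy sketch only (the categories and rates below are illustrative placeholders, not real UK VAT law), a basic “would I have to pay VAT?” check could be:

```python
# Toy sketch of a machine-readable regulation: a VAT check for a
# handful of goods categories. Categories and rates are illustrative
# placeholders, not actual UK VAT rules.

VAT_RATES = {
    "standard": 0.20,  # most goods and services
    "reduced": 0.05,   # e.g. domestic fuel (illustrative)
    "zero": 0.00,      # e.g. most food, books (illustrative)
}

# Hypothetical mapping from a type of thing to a VAT band.
GOODS_CATEGORY = {
    "laptop": "standard",
    "childrens_clothes": "zero",
    "domestic_fuel": "reduced",
}

def vat_due(goods: str, price: float) -> float:
    """Return the VAT payable on selling this type of thing."""
    band = GOODS_CATEGORY.get(goods, "standard")
    return round(price * VAT_RATES[band], 2)
```

So `vat_due("laptop", 100)` gives `20.0`. The interesting (and hard) part is not the lookup but keeping the table provably in sync with the actual regulation, which is the provenance question above.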
that if all legislation can’t and shouldn’t be made machine readable, it should at least be made more legible to citizens. And indeed lawyers. At the moment you can be looking at one piece of legislation that’s been revised by a second piece of legislation, and possibly a third, and so on, and have no idea what changes have been made without also reading all the revising legislation. Talk turned to hypertext and transclusion, as is our wont, and the idea of revisions being included in revised documents via hypertext transclusion. Rachel mentioned that New Zealand has an Act that functions almost as a glossary of terms: when any piece of legislation says X, it means this. The UK does not have such a thing. We also chatted about registers as a kind of definition file that could be linked to and transcluded into legislation.
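The transclusion idea can be sketched very roughly: a consolidated document holds references to provisions (original and amending), and rendering resolves them inline so the reader never has to chase the revising Act. The section identifiers and wording below are entirely made up:

```python
# Toy sketch of hypertext transclusion for legislation. A document is a
# sequence of plain text and references to sections held elsewhere
# (imagine a register of provisions); rendering pulls the referenced
# text in so the reader sees consolidated legislation.

SECTIONS = {
    "act-1999/s2": "The speed limit is 70 miles per hour",
    "act-2005/s7": "the speed limit in built-up areas is 30 miles per hour",
}

# A consolidated document transcluding an original provision and its revision.
DOCUMENT = [
    ("transclude", "act-1999/s2"),
    ("text", ", except that "),
    ("transclude", "act-2005/s7"),
    ("text", "."),
]

def render(document):
    """Resolve transclusion references into a single readable text."""
    parts = []
    for kind, value in document:
        parts.append(SECTIONS[value] if kind == "transclude" else value)
    return "".join(parts)
```

The registers-as-definition-files idea is the same shape: the `SECTIONS` lookup becomes a link to an authoritative register rather than a local dictionary.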
It was probably the most interesting meeting of the week and we came up with a list of other people who would probably have good reckons: Richard Pope, Paul Downey, Libby Miller, Jeni Tennison and Rachel Coldicutt.
On a similar theme, Robert spoke with a clerk in the Journal Office about how we might move from document-first to information-first publishing.
Dan continued to attend assorted agile ceremonies across the department. The main one this week was the technical difficulty vs value to users session led by Colin. Collective thinking is definitely developing but we still need to do some work to make sure ‘work to be done’ is aligned with the high level user needs that have been identified.
Dan also went to Emma’s website roadmap delivery meeting. The main decisions were to open up the new website to search bots and, correspondingly, to commit to stable IDs in ‘weeks’ (said Jamie). Dan is going to set up a meeting to explain why we can’t really filter search results for our current web content.
Michael continued to investigate Oli’s candidate database via the medium of prodding it with a Rails app. There’s been some murmuring that this might come in the direction of the Data and Search team so we wanted to check our data models can cope. On Wednesday, Anya, Silver and Michael met with Oli to chat through the details of his relational database. Some changes were made to the Election Ontology to capture a few things that were missing:
Individual elections now have a declaration time.
A Boundary Set class was added.
Some existing predicates were reversed to make the model more legible.
Constituency Group was moved out from between Constituency Area and Electorate after Oli pointed out that population stats are recorded on a completely different cycle to elections.
Recorded date was added to Population.
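For a rough feel of the revised shapes (this is a sketch in plain dataclasses, not the actual Election Ontology, and the field names are our guesses), the changes above amount to something like:

```python
# Rough sketch of the revised Election Ontology shapes as plain Python
# dataclasses. Class and field names are illustrative guesses, not the
# published ontology.
from dataclasses import dataclass
from datetime import date, datetime

@dataclass
class BoundarySet:
    """Newly added class: a set of constituency boundaries."""
    name: str
    start_date: date

@dataclass
class Election:
    constituency: str
    polling_date: date
    declaration_time: datetime  # newly added: when the result was declared

@dataclass
class Population:
    count: int
    recorded_date: date  # newly added: stats are recorded on their own cycle
```

Keeping `recorded_date` on Population (rather than tying population to an election) reflects the point Oli made: population stats and elections run on completely different cycles.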
Michael and Samu met with Colin and Steve to chat through user needs and constraints for adding voting records to member pages on the new website. Plans were made to initially target professional users.
Michael met with Ed to chat about committee data and what we do next.
Anya, Silver and Michael met Angela to check our modelling of select committees against the data in existing systems. We think we’ve captured everything worth capturing in the Committee Office Database (used in the House of Lords to keep track of inquiries and evidence sessions). We still need to check against Red Book (used in the House of Commons for similar purposes).
Anya and Michael continued to revise the first draft of the government department model with help from Chris Watson from the House of Commons Library. Michael also gave a quick presentation to the website team on where we’re at with modelling government departments and positions and incumbencies and some of the data quality issues we’re seeing.
Anya, Silver and Michael also spent some time tidying our slides for the Euro IA conference next week.
There was a preliminary meeting with people from the Ordnance Survey on Thursday morning to talk through some of the problems we’ve been having around location data. Mainly the absence of Northern Ireland postcode data, some missing links between NUTS regions and constituencies, and postcodes that span multiple constituencies.
Samu implemented content encoding in the API that sits behind the new website. Data travelling from the data service to the new website (and other public consumers) is now compressed, resulting in faster queries and an improved user experience.
Chris set up a local instance of WebVOWL for use when the main site is down (which it frequently is).
Jianhan gave a presentation of his work on accessing on-premise databases from the data platform. He also created a spreadsheet of external website links for members and populated it with data from the web team. This will be ingested by the data platform and served to the new website (and beyond).
The search product team spent some time analysing and clustering the feedback gathered since search went live.
Robert helped with some new search ‘support’ issues. Most of his week was spent looking over the search feedback: being shown how to do that, and then doing it. There was a lot of comment and analysis.
Raphael (and a bit of Ben) went along to THINK AI for the Public Sector. Michael poured some scorn.
Liz showed a first draft workload report to one of the heads of section in the House of Commons library. The data is pulled out of the library enquiry service, automatically refreshing on a schedule via a gateway 8 times a day.
Sara published a report on Search API performance. She also met with Lopa to discuss performance analytics and to make sure the data and search team can access beta and current website datasets.
Tags have now been added to the website to track specific events (for example people choosing to revert to the old search). With these in place we can now follow individual user journeys.
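Following a journey from those tags is essentially grouping tagged events by an (anonymous) user and putting them in time order. As a toy sketch (the tag names are illustrative, not our actual tagging scheme):

```python
# Toy sketch of reconstructing user journeys from tagged events.
# Each event is a (timestamp, user_id, tag) tuple; tag names here
# are illustrative.
from collections import defaultdict

def journeys(events):
    """Group events by user, in time order, to give one journey per user."""
    by_user = defaultdict(list)
    for ts, user, tag in sorted(events):
        by_user[user].append(tag)
    return dict(by_user)
```

So a user who searched and then chose to revert to the old search shows up as one ordered journey, which is much more useful than the raw event counts.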
After spending a part of last week helping a political science PhD student from the Federal University of Minas Gerais get data on gender representation in the House of Commons, Mike received a note to say, “You have no idea how much time and money you’ve saved me - I really can’t thank you enough.” Thanks also to the usual expert help from Joe Foster.
Robert went along to Julie’s capability reboot session which was much easier to understand than the search analysis. Especially thanks to the new improved Trello setup.
He also attended an Inclusive Recruitment Masterclass all day on Wednesday.
Dan met with Rupert, king of the architects, to talk about getting a new architect for our internal, ‘corporate’ data work - what we do over the coming years and things of that nature.
One person said blockchain ¯\_(ツ)_/¯
No strolls were reported. Stroll harder. Stroll smarter. Be cold, if not bold.