Saturday, September 16, 2017

Taxpayers should pay authors for educational uses of works, not intermediaries

Replying to a Letter to the Editor in The Varsity.


It is taxpayers and authors that are paying the costs of this ongoing dispute, one way or the other.

What we are effectively discussing is a government funding program masquerading as copyright. Because of the misdirection that this is a copyright issue, we are allowing intermediaries like educational institutions, collective societies, foreign publishers, and all their lawyers to extract the bulk of the money.

If Mr. Degen were focused on Canadian authors getting paid, he would agree with me that we need to redirect the taxpayer money misspent under the current regime towards a program similar to the Public Lending Right. The existing Public Lending Right funds authors based on their works being loaned by libraries, and a "Public Education Right" could directly fund authors based on specific uses of their works in publicly funded educational institutions. This would apply only to the very narrow area of dispute between what educational institutions (i.e. taxpayers) are already paying for, and the clear and indisputable limitations of copyright.

Nearly all of what educational institutions use is already paid for, through payments via modern databases and other established systems. This includes the ongoing growth of Open Access. It is Access Copyright that has refused to allow the payment of transactional fees for the narrow area under dispute.

While Access Copyright had a victory with this specific lower court case, they will lose on appeal as they have lost other related cases. This area of law is quite clear, and contrary to Mr. Degen's misdirection the courts have not sided with Access Copyright's interpretation of the law. This specific case is the outlier.

While the majority of the blame for this costly dispute lies with Access Copyright, that doesn't mean taxpayers or governments should be siding with educational institutions. We should be removing all of these unnecessary intermediaries from the debate entirely.

By fighting for Access Copyright's conflicting interests rather than for authors, Mr. Degen is pushing for policies which continue to reduce the revenues of authors. My hope is that he will eventually side with authors.

Friday, June 9, 2017

IIIF.io: the hardest part will be saying "no".

Back in April I noted Canadiana is working on adopting APIs from IIIF, the International Image Interoperability Framework. We did a small demo in May as part of our participation at Code4Lib North.  Today is the final day of the 2017 IIIF Conference hosted at The Vatican, and this is an update on our progress.

What have we done so far?

We have a Cantaloupe Docker configuration on GitHub that we used for the demo.  This includes the Ruby delegates script, which locates the requested image within the AIP stored on the TDR Repository node that Cantaloupe is running on.
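As an illustration of what that lookup involves, here is a minimal Python sketch of the kind of identifier-to-path resolution the delegates script performs. The actual delegate is a Ruby script, and the mount point, identifier scheme, and AIP layout below are assumptions for the sketch, not our real configuration.

```python
# Minimal sketch of delegate-style path resolution (assumed layout, not the
# actual Ruby delegates logic used by our Cantaloupe configuration).
import os

TDR_ROOT = "/tdr/pool"  # hypothetical mount point on the TDR Repository node

def resolve_image_path(identifier):
    """Map a IIIF identifier like 'some-aip/0042.jp2' to a file within the AIP."""
    aip_id, _, relative = identifier.partition("/")
    if not aip_id or not relative or ".." in identifier:
        raise ValueError("malformed identifier: %r" % identifier)
    path = os.path.join(TDR_ROOT, aip_id, "data", "files", relative)
    if not os.path.isfile(path):
        raise FileNotFoundError(identifier)
    return path
```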

We have created a pull request for OpenJPEG to resolve an incompatibility between OpenJPEG and Cantaloupe. The fix allows Cantaloupe to offer access to our JPEG2000 images.

We will be integrating the OpenJPEG fix and some cleaner configuration into our Cantaloupe Docker configuration soon, bringing this Docker image closer to being ready to install on production servers.

Our lead Application Developer, Sascha, created an application (with associated Docker configuration) that offers the IIIF Presentation API.  This reads data from the CouchDB presentation database used by our existing platform.  It is expected that we will be adopting the IIIF structures for data within CouchDB at a later date, but this is a good intermediate step.
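For readers unfamiliar with the Presentation API, here is roughly the kind of document such a service emits: a manifest describing a digitized item, with each page's canvas pointing at the Image API service. This is a minimal IIIF Presentation 2.x example with hypothetical identifiers, labels, and dimensions, not output from our actual application.

```python
# A minimal IIIF Presentation 2.x manifest (hypothetical identifiers).
import json

manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/some-aip/manifest",
    "@type": "sc:Manifest",
    "label": "Example digitized item",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@id": "https://example.org/iiif/some-aip/canvas/p1",
            "@type": "sc:Canvas",
            "label": "p. 1",
            "height": 2000,
            "width": 1500,
            "images": [{
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "on": "https://example.org/iiif/some-aip/canvas/p1",
                "resource": {
                    "@id": "https://example.org/iiif/2/some-aip%2F0001.jp2/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                    # The Image API service a viewer uses to fetch tiles
                    "service": {
                        "@context": "http://iiif.io/api/image/2/context.json",
                        "@id": "https://example.org/iiif/2/some-aip%2F0001.jp2",
                        "profile": "http://iiif.io/api/image/2/level2.json",
                    },
                },
            }],
        }],
    }],
}

print(json.dumps(manifest, indent=2))
```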

With these two Docker images running, accessing data from TDR repository pools and CouchDB, we are able to use any existing IIIF viewer to access Canadiana hosted content.

What is the largest stumbling block we've discovered?

We had already discovered the problem on our own, but the recent IIIF Adopters Survey made it clear.

Of the 70 institutions that completed the survey, 51 are currently using the IIIF Image API and 42 have adopted the IIIF Presentation API, but the British Library and the Wellcome Trust are the only institutions known to be using the IIIF Authentication API.

Canadiana has both sponsored collections (where the depositor or another entity sponsored the collection, which is then freely available to access) and subscription collections (where the funders have required that we restrict access to those who are financially contributing).  Making the sponsored collections available via IIIF will be much easier than authoring the additional software (possibly including contributing to existing projects offering IIIF access tools) needed to deny access to subscription collections.

Said another way: denying access will take far more of Canadiana's resources (staff and computing) than granting access.  Ideally all our collections would be sponsored, but that is not the environment we currently operate in.  At the moment a large portion of this charity's funding comes in the form of subscriptions, and this is already a topic of discussion within our board and membership.

This was not a total surprise.

We knew the move to a more modern distributed platform, which we were already planning before we decided to adopt IIIF, would involve a change in how we did authentication and authorization.  Implementing authorization rules is already a significant part of our technology platform.

Currently the CAP platform is based on a "deny unless permit" model, and there are only two public-facing software components: CAP, which internally handles its own authorization, and COS, which receives a signed access token from CAP for each content request (specific source file, specific rotation, specific zoom, limited amount of time, etc.).  Only a few specific zoom levels are allowed, and there is no built-in image cropping.
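As a sketch of what such a signed token involves: the issuing service signs the parameters of one specific request plus an expiry time, and the content server verifies the signature before serving anything. The field list and HMAC construction below are assumptions for illustration, not the actual token format shared by CAP and COS.

```python
# Sketch of a signed, time-limited, per-request access token (assumed format).
import hashlib, hmac, time

SECRET = b"shared-secret-between-issuer-and-content-server"  # hypothetical

def make_token(source_file, rotation, zoom, ttl=300):
    expires = int(time.time()) + ttl
    payload = "%s|%d|%d|%d" % (source_file, rotation, zoom, expires)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + sig

def check_token(token):
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    expires = int(payload.rsplit("|", 1)[1])
    # Valid only if the signature matches and the token has not expired
    return hmac.compare_digest(sig, expected) and time.time() < expires
```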


Implementing the same model for IIIF would have been extremely inefficient, even if it were possible to go through the multi-request Authentication API for each individual content request.

IIIF image access isn't done as one request for a single completed image but as multiple requests for tiles representing parts of the image (at a variety of zoom levels).  For efficiency we needed to move to a more liberal "grant unless denied" model, where the access tokens are far more generic in the type of requests they facilitate.
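To see why a per-request token is impractical here, consider how many requests a deep-zoom viewer makes for a single image. A hypothetical 6000x8000 pixel image served at a 1024-pixel tile size:

```python
# Sketch: the tile requests one image can generate via the IIIF Image API
# (identifier and dimensions are hypothetical).
base = "https://example.org/iiif/2/some-aip%2F0001.jp2"
width, height, tile = 6000, 8000, 1024

urls = [
    "%s/%d,%d,%d,%d/full/0/default.jpg"
    % (base, x, y, min(tile, width - x), min(tile, height - y))
    for y in range(0, height, tile)
    for x in range(0, width, tile)
]
print(len(urls), "tile requests at the deepest zoom level alone")  # 6 x 8 = 48
```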

There are also several APIs that can (and should) be offered as different distributed web services. A service offering Presentation API data is likely to be deployed into multiple server rooms across the country, just as the Image API will be offered from multiple server rooms.   We may have fewer servers offering authentication, but that won't create a bottleneck as once a user has authenticated they won't need to go back to that service often (only when access has expired, or they need to pass access tokens to a new service).


We will be separating authorization from authentication, only checking the authentication token when required.  A new CouchDB authorization database would be needed, with records for every AIP (indicating whether it is sponsored or which subscription is required, and what level of access is granted), every user (which subscriptions they have purchased, or other types of access -- such as administrators) and every institution (subscriptions, other access).  Each content server request would involve consulting that database to determine whether we had to deny access, with the data replicated so it is local to each application that needs it.
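A minimal sketch of what that "grant unless denied" check could look like, assuming hypothetical shapes for the AIP, user, and institution documents (this is not a finalized schema):

```python
# Sketch of a "grant unless denied" authorization check (assumed document shapes).
def allowed(aip, user=None, institution=None):
    """aip/user/institution are documents replicated from the authorization database."""
    if aip.get("sponsored"):
        return True                      # sponsored collections are open to everyone
    needed = aip.get("subscription")     # e.g. "heritage"
    for party in (user, institution):
        if party is None:
            continue
        if party.get("admin") or needed in party.get("subscriptions", ()):
            return True
    return False                         # deny only when no grant matched

# e.g. allowed({"subscription": "heritage"}, user={"subscriptions": ["heritage"]}) -> True
```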

Where are we in our plan?

The plan was to migrate away from our existing Content Server first (See: The Canadiana preservation network and access platform for details on the current platform).  This would involve:

  • Adopting Cantaloupe for the IIIF Image API, including authorization.
  • Implementing the Authentication API, to set the access cookie from the access token offered by the main platform website.
  • Implementing an IIIF Presentation API demonstration sufficient to test our implementation of the Authentication API with existing IIIF client applications.
  • Offering direct access to TDR files using the same access cookie/token (needed for PDF downloads as a minimum, and also used by various internal microservices to access METS and other metadata records).
  • Retrofitting our existing CAP portal service to use the Authentication API, as well as using Cantaloupe for all image serving.
  • Decommissioning the older ContentServer software on each repository node.

With the Authentication API not as established as we thought, we may go a different route.


One possibility might be for Cantaloupe to grant full access to sponsored collections, and to use a separate token, similar to our existing one, for subscription collections.  This would effectively disable most of the utility of IIIF for subscription content, but would at least allow us to use the same ContentServer software for both types of content.

We haven't made decisions, only come to the realization that there is much more work to be done.  My hope is that we can push forward with making sponsored collections accessible via IIIF, even if we simply deny IIIF access to subscription collections in the interim (i.e. CAP portal access only) while we figure out how to grant access to subscribers via IIIF.

IIIF isn't the only place we have this consideration

This consideration isn't unique to our IIIF implementation, and we come up against it regularly.

With the Heritage project the funding institutions sponsored public access to all the images from those LAC reels, but more advanced search capability was required to be a subscription service.  We implemented this in the shorter term by disabling page-level search (for non-subscribers) on the Heritage portal that hosts this content.

Some researchers and other external projects (some funded by Canadiana as part of the Heritage project, but without involvement from Canadiana technical staff) have been collecting additional metadata for these LAC reels in the form of tags, categorization, and in some cases transcriptions of specific pages.  This data is being offered to us in project-specific data designs that don't conform to any of the standards we plan on adopting within the primary platform (see: IIIF annotations, with us likely extending our TDR preservation documentation to support encoding of specific open annotations).

Our platform doesn't yet have the capability to accept, preserve, and provide search on this data. When we start a project to accept some of this data we will also have to figure out how to implement a mixture of funding models.  It is expected that most researchers will want the data they have funded to be open access, and would be unhappy if we restricted search on their data to subscribers.  This means we'll need to separate the subscription-required data funded by some groups from the open access data provided by other groups.

It is likely we will end up with multiple search engines housing different types of data (search fields different from the common ones used within our primary platform), searchable by different groups of people, with a search front-end needing to collate results and display them in a useful way.

Moving more of Canadiana's software projects to GitHub

As some of the links in recent articles suggest, we have started moving more of our software from an internal source-control and issue-tracking system to public GitHub projects.  While this has value as additional transparency for our membership, I also hope it will enable better collaboration with members, researchers, and others who have an interest in Canadiana's work.

For the longest time the Archive::BagIt Perl module was the only GitHub project associated with Canadiana.  Robert Schmidt became the primary maintainer of this module while he was still at Canadiana, and it remains critical to our infrastructure.


In addition to the two IIIF-related Docker images discussed earlier, there are two Perl modules:

  • CIHM::METS::App is a tool to convert metadata from a variety of formats (CSV, DB/Text, MARC) to the three XML formats we use as descriptive metadata within our METS records (MARCXML, Dublin Core, Issueinfo).  It is used in the production process that generates or updates AIPs within our TDR.
  • CIHM::METS::parse is the library used to read the METS records within the AIPs in the TDR and present normalized data to other parts of our access platform.  For more technical people this provides an example of how to read our METS records, as well as documenting exactly which fields we use within our access platform (for search and presentation); a rough illustration follows below.
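The module itself is Perl, but as a rough illustration of the same idea, here is how one might pull Dublin Core titles out of a METS record in Python. The XPath and fields here are assumptions for the sketch; CIHM::METS::parse is the authoritative example.

```python
# Sketch: reading Dublin Core descriptive metadata from a METS record
# (illustrative only; the real library is the Perl module CIHM::METS::parse).
from lxml import etree

NS = {
    "mets": "http://www.loc.gov/METS/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def dc_titles(mets_path):
    tree = etree.parse(mets_path)
    # dmdSec sections hold the embedded descriptive records (DC, MARCXML, issueinfo)
    return [el.text for el in tree.findall(".//mets:dmdSec//dc:title", NS)]

print(dc_titles("example.mets.xml"))
```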

My hope is that by the end of the summer all the software we use for a TDR Repository node will have moved to GitHub.  This provides additional transparency to the partner institutions who are hosting repository servers, clarifying exactly what software is running on that hardware.

We are a small team (currently 3 people) working within a Canadian charity, and would be very interested in having more collaborations.  We know we can't do all of this alone, which is a big part of why we are joining others in the GLAM community with IIIF. Even for the parts which are not IIIF, collaboration will be possible.

If you work at or attend one of our member institutions, or otherwise want to know more about what our technical team is doing, consider going to our GitHub organization page and clicking "watch" on the sub-projects that interest you. Feel free to submit issues, whether to report a bug, suggest a new feature (maybe someone with funding will agree and launch a collaboration), suggest we take a closer look at some existing technology, or just ask questions (of the technical team -- we have other people who answer questions for subscribers, etc.).

If not on GitHub, please feel free to open a conversation in the comments section of this blog.

Thursday, May 11, 2017

Canadiana JHOVE report

This article is based on a document written for Code4Lib North on May 11th, and discusses what we've learned so far with our use of JHOVE.

What is JHOVE?

The original project was a collaboration between JSTOR and Harvard University Library, with JHOVE being an acronym for JSTOR/Harvard Object Validation Environment.  It provides functions to perform format-specific identification, validation, and characterization of digital objects.


JHOVE is currently maintained by the non-profit Open Preservation Foundation, operating out of the UK (associated with the British Library in West Yorkshire).

Standard JHOVE modules exist for AIFF, ASCII, BYTESTREAM, GIF, HTML, JPEG, JPEG2000, PDF, TIFF, UTF8, WAVE, XML, MP3, and ZIP.

What is Canadiana doing with JHOVE?

As of the last week of April we generate XML reports from JHOVE and include them within AIP revisions in our TDR.  At this stage we are not rejecting or flagging files based on the reports, only including the reports as additional data.  We will integrate JHOVE further into our production process in the future.
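A minimal sketch of generating and reading one such report (the jhove command-line flags are standard; the surrounding workflow is simplified):

```python
# Sketch: run JHOVE on one file, then read the STATUS out of the XML report.
import subprocess
from xml.etree import ElementTree as ET

def jhove_status(path, report_path):
    subprocess.run(["jhove", "-h", "XML", "-o", report_path, path], check=True)
    ns = {"j": "http://hul.harvard.edu/ois/xml/ns/jhove"}
    return ET.parse(report_path).findtext(".//j:repInfo/j:status", namespaces=ns)

print(jhove_status("0001.tif", "0001.tif.jhove.xml"))  # e.g. "Well-Formed and valid"
```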

Some terminology

  • TDR: Trusted Digital Repository, the repository where our preservation collections are stored.
  • AIP: Archival Information Package (a term from the OAIS reference model), the unit of storage within the TDR; each AIP can have multiple revisions.
  • METS: Metadata Encoding and Transmission Standard, the XML record within each AIP carrying structural and descriptive metadata.
  • BagIt: the archive structure we use to store AIPs within the TDR.

What did Canadiana do before using JHOVE?

Prior to the TDR Certification process we made assumptions about files based on their file extensions: a .pdf was presumed to be a PDF file, a .tif a TIFF file, .jpg a JPEG file, and .jp2 a JPEG 2000 file.  We only allowed those 4 types of files into our repository.


As a first step we used ImageMagick's 'identify' feature to identify files and confirm that they matched their claimed file types.  This meant that any files added since 2015 had data that matched the file type.
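A sketch of that kind of extension-versus-content check ('identify -format %m' is standard ImageMagick; the expected-format mapping below is an assumption for the sketch):

```python
# Sketch: confirm a file's content matches what its extension claims.
import os, subprocess

EXPECTED = {".pdf": "PDF", ".tif": "TIFF", ".jpg": "JPEG", ".jp2": "JP2"}

def matches_extension(path):
    ext = os.path.splitext(path)[1].lower()
    result = subprocess.run(["identify", "-format", "%m", path],
                            capture_output=True, text=True)
    # Multi-page files print one format code per page, so match the prefix
    return (result.returncode == 0
            and result.stdout.strip().startswith(EXPECTED.get(ext, "?")))
```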


At that time we did not go back and check previously ingested files, as we knew we would eventually be adopting something like JHOVE.


Generating a report for all existing files
As of May 9, 2017 we have 61,829,569 files in the most recent revisions of the AIPs in our repository.  This does not include METS records, past revisions, or files related to the BagIt archive structure we use within the TDR.


I quickly wrote some scripts that loop through all of our AIPs and generate reports for all the files in the files/ directory of the most recent revision of each AIP.  We dedicated one of our TDR Repository nodes to generating reports for a full month to get the bulk of them done, with some PDF files still being processed.
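A simplified sketch of such a loop, assuming a hypothetical directory layout (the real scripts walk AIP revisions inside the TDR, and need to be resumable given the full scan took about a month):

```python
# Sketch: loop over every AIP and generate a JHOVE XML report per file.
import os, subprocess

TDR_ROOT = "/tdr/pool"  # hypothetical mount point

for aip in sorted(os.listdir(TDR_ROOT)):
    files_dir = os.path.join(TDR_ROOT, aip, "data", "files")  # assumed layout
    if not os.path.isdir(files_dir):
        continue
    for name in sorted(os.listdir(files_dir)):
        src = os.path.join(files_dir, name)
        report = src + ".jhove.xml"
        if os.path.exists(report):
            continue  # resumable: skip files that already have a report
        subprocess.run(["jhove", "-h", "XML", "-o", report, src], check=False)
```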

Top level report from scan
  Total files                     61,829,569
  Well-Formed and valid           60,828,836  (98.4%)
  Not well-formed                    941,875  (1.5%)
  Well-Formed, but not valid          58,605  (0.09%)
  Not yet scanned                        253


JHOVE offers a STATUS for files which is one of:


  • “Not well-formed” - the file fails the purely syntactic requirements of the format
  • “Well-Formed, but not valid” - the file is syntactically correct, but fails the higher-level semantic requirements for format validity
  • “Well-Formed and valid” - the file passes both the well-formedness and validity tests

Issues with .jpg files
  Not well-formed                         10
  Well-Formed and valid           44,743,051
  Well-Formed and valid TIFF              14


We had 10+14=24 .jpg files, ingested prior to adopting the ‘identify’ functionality, that turned out to be broken (truncated files, 0-length files) or to have the wrong file extension.  9 of the “Not well-formed” files were from LAC reels, where we were ingesting 1,000 to 2,000 images per reel.

Issues with .jp2 files
  Well-Formed and valid           11,286,315


JHOVE didn’t report any issues with our JPEG 2000 files.

Issues with .tif files
  Not well-formed, Tag 296 out of sequence                                      1
  Not well-formed, Value offset not word-aligned                          503,575
  Not well-formed, IFD offset not word-aligned                            435,197
  Well-Formed and valid                                                 4,608,048
  Well-Formed, but not valid, Invalid DateTime separator: 28/09/2016 16:53:17   1
  Well-Formed, but not valid, Invalid DateTime digit                       21,004
  Well-Formed, but not valid, Invalid DateTime length                       3,483
  Well-Formed, but not valid, PhotometricInterpretation not defined           202


  • Word alignment (offsets falling on an even byte boundary) is the largest structural issue, but it is something that will be easy to fix; we are able to view these images, so the data inside isn’t corrupted (see the sketch after this list).
  • Validity of DateTime values is the next largest issue.  The format should be "YYYY:MM:DD HH:MM:SS", so a value like “2004: 6:24 08:10:11” is invalid (the blank is an Invalid DateTime digit), and “Mon Nov 06 22:00:08 2000” or “2000:10:31 07:37:08%09” are invalid (Invalid DateTime length).
  • PhotometricInterpretation indicates the colour space of the image data (WhiteIsZero/BlackIsZero for grayscale, RGB, CMYK, YCbCr, etc.).  The specification has no default, but we’ll be able to fix the files by making and checking some assumptions.
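As a sketch of what the alignment check involves, here is Python that walks the first IFD of a TIFF and flags the offsets JHOVE complains about. A repair tool would additionally rewrite the offending offsets; this only detects them.

```python
# Sketch: detect IFD and value offsets that are not word-aligned in a TIFF.
import struct

# Byte widths of TIFF field types 1-12 (BYTE..DOUBLE)
TYPE_SIZE = {1: 1, 2: 1, 3: 2, 4: 4, 5: 8, 6: 1, 7: 1, 8: 2, 9: 4, 10: 8, 11: 4, 12: 8}

def alignment_problems(path):
    problems = []
    with open(path, "rb") as f:
        head = f.read(8)
        endian = "<" if head[:2] == b"II" else ">"
        (ifd_offset,) = struct.unpack(endian + "I", head[4:8])
        if ifd_offset % 2:
            problems.append("IFD offset not word-aligned")
        f.seek(ifd_offset)
        (count,) = struct.unpack(endian + "H", f.read(2))
        for _ in range(count):
            tag, typ, n, value = struct.unpack(endian + "HHII", f.read(12))
            # Values wider than 4 bytes are stored elsewhere; 'value' is then an offset
            if n * TYPE_SIZE.get(typ, 1) > 4 and value % 2:
                problems.append("tag %d: value offset not word-aligned" % tag)
    return problems

print(alignment_problems("example.tif"))
```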

Issues with .pdf files
  Not well-formed, No document catalog dictionary                                  3,081
  Not well-formed, Invalid cross-reference table, No document catalog dictionary       2
  Not well-formed, Missing startxref keyword or value                                  8
  Not well-formed, Invalid ID in trailer, No document catalog dictionary               1
  Not yet scanned                                                                    253
  Well-Formed and valid                                                          191,408
  Well-Formed, but not valid, Missing expected element in page number dictionary  33,881
  Well-Formed, but not valid, Improperly formed date                                  33
  Well-Formed, but not valid, Invalid destination object                               1



One of the board members of the Open Preservation Foundation, the organization currently maintaining JHOVE, wrote a longer article on the JHOVE PDF module titled “Testing JHOVE PDF Module: the good, the bad, and the not well-formed”, which might be of interest.  Generally, PDF is a hard format to deal with, and there is more work that can be done on the module to ensure that the errors it reports are problems in the PDF file and not in the module.


  • “No document catalog dictionary” -- The root tree node of a PDF is the ‘Document Catalog’, and it has a dictionary object.  This exposed a problem with an update to our production processes, where we switched from ‘pdftk’ to ‘poppler’ from the FreeDesktop project for joining multiple single-page PDF files into a single multi-page PDF file.  While ‘pdftk’ generated Well-Formed and valid PDFs, poppler did not.

    When I asked on the Poppler forum they pointed to JHOVE as the problem, so at this point I don’t know where the problem lies.

    I documented this issue at: https://github.com/openpreserve/jhove/issues/248
  • “Missing startxref keyword or value” -- PDF files should have a header, document body, cross-reference (xref) table, and a trailer which includes a startxref.  I haven’t dissected the files yet, but these may be truncated.
  • “Missing expected element in page number dictionary” -- I’ll need to do more investigation.
  • “Not yet scanned” -- We have a series of multi-page PDF files generated by ABBYY Recognition Server which take a very long time to validate.  Eventually JHOVE indicates the files are recognized with a PDF/A-1 profile.  I documented this issue at: https://github.com/openpreserve/jhove/issues/161


Our longer-term strategy is to no longer modify files as part of the ingest process.  If single-page PDF files are generated from OCR (as is normally the case) we will ingest those single-page PDF files.  If we wish to provide a multi-page PDF for download, that will be done as part of our access platform, where long-term preservation requirements aren't an issue. In the experiments we have done so far, the single-page PDF output of ABBYY Recognition Server and PrimeOCR validates without errors, and it is the transformations we have done over the years that were the source of the errors.

Sunday, May 7, 2017

Some of the earliest community groups on FLORA.org

Some of the earliest groups hosted on FLORA.org were:


  • Ask the Doctors, which was a really cool site managed by Rosaleen Dickson, whom I met through the Freenet. Involved in the publishing industry, she co-authored a book on HTML back in the early 1990's when it was such a new thing.  She hosted pages on Canadian Books, and following the work she did with the doctors she ran an "Ask Great Granny" site.
  • I believe Auto-Free Ottawa was the first community group I hosted, a group of people in the early 1990's envisioning an Ottawa that wasn't as dependent on the automobile.
  • Canadian Homeschool is the last of the original groups to be hosted on FLORA.org
  • Community Democratic Action
  • KC. Maclure Centre
  • MAI-not
  • Ottawa District Committee of Ontario Special Olympics
  • Peace and Environment Resource Centre -- still around, but has had their own domain name for quite some time.
  • Pednet was a mailing list hosted by Majordomo, and then moved to Mailman.
  • Visually Impaired was, I believe, information hosted by Charles Lapierre
  • Westend Family Cinema also obtained their own domain name quite some time ago.


As web access became easier for organizations it was far more common for groups to get their own domain names so that they could move their sites between hosts without anyone having to remember a new URL.  There are many redirects still in the config files for such groups that were previously hosted on FLORA.org.


There were other groups over the years, not all of which still exist, such as:
  • Car Free Living
  • Communities Before Cars Coalition
  • Coop Area Network (Networking between Coop Voisins and the Conservation Coop)
  • Cycle Challenge / Commuter Challenge
  • Economic Good
  • Famous 5
  • FTAA Ottawa
  • FVC (Fair Vote Canada) Ottawa
  • Food Action Ottawa (FAO)
  • Global Education Network
  • Global Issues Forum
  • Green Party (Ottawa region, back when the party was smaller)
  • International Association for Near Death Studies (Ottawa)
  • Maclure Center
  • National Capital Runners Association
  • OPIRG Forestry group
  • Ottawa LETS
  • Ottawa River Bioregion Group
  • Ottawa Transit Riders Association
  • Ottawa Vegetarian Society
  • The Doorstep Alliance

After doing a bit of spring cleaning of sites whose managers I can no longer reach, and which haven't been updated in years, there were only 4 sites remaining: two from close personal friends, and two from community groups.

For the sites whose managers I couldn't reach, I set them to redirect to the most recent archived version of their sites on Archive.org.  I made a mistake with the robots.txt file and they are temporarily unavailable, but I have sent a message to Archive.org in the hope they can fix my mistake and restore the archive.

If there are any groups I've missed, please let me know in the comments.  It has been a few years, and I've been looking at old Apache config files to be reminded of some of the organizations.  I've not listed the individuals (volunteers' personal sites, as well as candidate websites from back when I hosted candidates during elections).

There have been many mailing lists over the years, but since this isn't something I'm planning on closing I won't get all nostalgic about them.  I'm keeping the domain name and will be keeping the redirects active for any sites that have moved so bookmarks can be updated.

Saturday, May 6, 2017

Winding down FLORA.org after more than 22 years.

FLORA Community Web was started in December 1994 (see Ottawa alternative community minded networking), and the first domain name it used was flora.ocunix.on.ca. Later the name flora.ottawa.on.ca was adopted (date unknown), and then FLORA.org (13-Oct-1996).

It offered free websites and mailing lists for community groups from before these things were as easily available as they are today.  I haven't had the time to spend on the server that I would like, and believe it is best to admit that my interests have moved on.  I'm in the process of helping the remaining groups hosted on FLORA.org migrate to other hosting.

Thanks go out to the many volunteers who participated over the years, and the many friends I made through these connections.



If you want to take a look at what the site looked like at various points in the past, Archive.org's Wayback Machine has many snapshots.  The earliest list of flora.org organizations they have is a snapshot from 1998.  That was back when I listed some of the clients as well as volunteer sites hosted on the same computer.  A larger list of domains I was hosting on that computer can be seen in a 1999 list.


Some time in 2000/2001 most of those clients were moved to OpenConcept, where I was managing the growing number of computers while Mike Gifford of OpenConcept handled the billing, customer relations, and all the business side.  In 2003 those servers became part of CooperIX, which Mike Richardson and I founded. CooperIX was a small co-location provider for more technical clients, with OpenConcept and its growing client list being the biggest single user.

Wherever my self-employed company went, the volunteer FLORA.org services came along with me.

In 2011 I moved from being a self-employed consultant to being staff at Canadiana.org.  This was the first time I had been an employee since I became self-employed in early 1995, but it has been a great transition.  However, with the work I'm doing for Canadiana I don't feel I have the time to dedicate to the volunteer services, which currently run as a couple of virtual machines on a server in the basement of my home.

I finally decided this year that I should start the process of decommissioning those VMs.  I will start with the www.flora.org website, which is still managed via 1990's technology (content providers use FTP to log in to update their sites).  I will likely spend the time to migrate the mailman services to another VM and keep them running, as there are fewer security and other concerns with the mailing lists. I'll then decide what to do with my personal sites (my old business site, and so on).


Saturday, April 29, 2017

Will the Conservative Party choose to fail like the US Democrats?

As a current Conservative party member I have received fundraising calls from the party.  The worst so far was from someone who started by talking about the leadership race; I mentioned I had made a donation to my candidate.  She then spoke about how this increases the donation limit for the year: that I could donate to both the candidate and the party.

She then proceeded to list the party talking points, one of the first being how the Liberal carbon tax was a "tax on everything".  To be honest, I didn't hear anything else she said, as it was clear there was no possibility I would be donating to both my candidate -- Michael Chong -- and a party that was apparently campaigning against the only candidate who has resonated with me so far.  Michael Chong's revenue-neutral green tax shift (from income to carbon) was the policy that made me take notice of the leadership race, and to learn about other policies we agreed on.

As politely as I could, I hung up on her.

I also started to get emails from Maxime Bernier's campaign.  I never signed up for anything from his campaign, nor from any of the withdrawn candidates who endorsed him.  So I have to guess that the Conservative party itself added me to Bernier's list, as I only opted in to the party's list and to Michael Chong's list.

This reminded me of the US Democrats, and how the party (staffers, fundraisers, etc.) treated Clinton as "their candidate" even though Bernie Sanders was resonating with a whole new group of people who could have won the Democrats the White House.  Instead the party pushed "their candidate" onto the ballot (in ways some of us thought of as corruption), and managed to lose an election against a reality television star with no concrete policy ideas (just run-of-the-mill angry political noise, what many characterize as "populism").

Has the Conservative party decided to give up the next general election, and the possibility of their leader becoming the Prime Minister?  Maxime Bernier and I may both have libertarian views, but the smaller you make government, the more important it becomes what you believe must remain.  I do not recognize myself in what I hear from Maxime Bernier.

That isn't the same thing as saying that I won't be ranking him on my ballot.

The Conservatives will be using a ranked ballot system.  Unlike with a minimal-information single-X ballot, leadership voters need to think beyond who they most want to win the contest.  If we want to maximize the effectiveness of the ballot, we need to rank all the way down to who we would least like to win the contest.

While not focused on a single-winner contest, an article by an Australian titled "How To Best Use Your Vote In The New Senate System", specifically the question "I've numbered, say, 23 boxes and I don't like any of the other parties/candidates.  Should I stop now?", helps explain how to maximize a high-information ranked ballot.

A lot of voters - especially a lot of idealistic left-wing voters - are a bit silly about this and worry that if they preference a party they dislike they may help it win.  Well yes, but your preference can only ever reach that party if the only other parties left in the contest are the ones you have preferenced behind it or not at all! If that's the case then someone from that list is going to win a seat, whether you decide to help the lesser evils beat the greater evils or not. 

If Michael Chong becomes leader, I will be giving the Conservative candidate in my district a much closer look than I have in recent elections.  Under our current electoral system it is the local candidate who needs to represent me, and their ability to do so is tied to the environment the party offers.  It is critical whether progressive conservative values (such as a green tax shift) will be embraced, or ignored by people who put blind ideology and slogans ahead of logic and evidence-based decision making.

If Maxime Bernier becomes leader, I know that no matter how good my local candidate is they won't be able to represent me, as a party that makes Mr. Bernier its leader will spend too much time chanting simplistic slogans and not enough time making evidence-based decisions.

There is a line past which I won't be interested in voting for the locally nominated candidate, as I will believe the party won't allow them to represent me.  As a progressive conservative (lower case, as that party is gone) I can't support any of the social conservative candidates.  I think the party would lose many seats if one of the social conservative candidates won, and I believe most Conservatives recognize that, so one of them winning is unlikely.

If you exclude those who want to talk about their highly subjective idea of what constitutes "Canadian values" or "barbaric cultural practices", who do we have left?
I haven't made up my mind on all the candidates.  We still have 13 candidates, and only 4 more weeks to go.  I'm just very frustrated to learn that party staffers and volunteers are apparently campaigning against the best chance the Conservative party has of its leader becoming the next Canadian Prime Minister.