Tuesday, 18 June 2013

Acc. Chem. Fail.

Last week there was an online campaign to create pressure to publish negative results. The potential benefits are obvious, and are nicely summed up in the cartoon by Nik Papageorgiou: if we had a database (like Reaxys) of "stuff that doesn't work", we could all save time, effort, and money by not going down futile routes. Much of the push has come from biological quarters, but it's something that chemists have proposed too.

(Quick disclaimer: I've not really been part of this conversation, so if my comments have been rebutted elsewhere please do correct me.)

I'm a little skeptical of whether it's really as simple as "publish your negative results!", at least when it comes to chemistry*. It's not enough to say "we tried these conditions and it didn't work"; that isn't much help, because there are a million reasons why a particular reaction might not work in your hands (look at BlogSyn for a detailed example of this). For such a resource to be useful it has to be thorough: you have to try to pin down why your reaction doesn't work, and that's not a trivial matter.

The journal behind last week's campaign even says as much:
"For negative and null results, it is especially important to ensure that the outcome is a genuine finding generated by a well executed experiment, and not simply the result of poorly conducted work.  We have been talking to our Editorial Board about how to try to avoid the publication of the latter type of result and will be addressing this topic and asking for your input in a further post in the next few days."
I don't know how it is in the biomedical sciences, but in chemistry I'm not sure it's going to be clear up front why a reaction doesn't work. Ensuring that it is a genuinely negative result will take time, and is likely to be of limited interest to the wider community; understanding why the reaction doesn't work is yet more work, but will be much more useful.

Who is going to take the time and effort to really thoroughly study a failed reaction and figure out why it fails, except a methodology group that is already studying that chemistry in depth?

To illustrate my point I'll use an example from my own area, self-replicating molecules. In 2008, Vidonne and Philp reported an attempt to make a self-replicating rotaxane. I'm going to stop here to express my sincere admiration at the scale of a project like this. I'm not aware of any other attempts to achieve something like this; it blows my mind a little bit.


[Figure from Tetrahedron 2008, 64, 8464–8475.]
The paper is a thorough study of the system and runs to 12 pages. They report careful planning and a detailed kinetic model; the synthesis and analysis of the system; and experiments to figure out exactly what is occurring in this complex system and why it didn't behave as expected. Judging by the abstract, this represents a huge portion of Vidonne's thesis.

This is the kind of detailed work needed to make negative results worthwhile - both to publishers and to other researchers. Anyone can break a reaction, but it takes time and attention to detail to turn that into useful knowledge.

That said, there are lesser steps that can be taken to get useful negative results into the literature without expending so much effort. For example, methodology papers could include tables of substrates or conditions that didn't work in their SI; synthesis papers could (and often do) discuss methods that failed for them.

An interesting alternative is the robustness screen recently proposed by Collins and Glorius. They describe a standardised 'kit' that may allow chemists to quickly get an idea of whether a particular set of reaction conditions is likely to tolerate functional groups and so on. One strength of this idea is that it would require chemists to report negative results: "our conditions tolerate A, B, and C, but are shut down by X, Y, and Z".

To sum up: it's easy to say "publish your negative results!", but in chemistry at least it's not clear that it's that straightforward. To be worth publishing, or worth anything, negative results need to come with some idea of why they are negative, or they need to be routinely reported alongside positive results.

What do you think negative results could contribute to chemistry? What information would you need for a negative result to be useful to you?


* to clarify: none of my comments are meant to generalise to all of science, or beyond organic chemistry, really. In other fields this may well be more straightforward.
** thanks a big huggy bunch to @PeONor and @craigdc1983 for having a look over this post before it went up.

6 comments:

  1. Negative data is not a big problem (or at least there are bigger problems than that)
    People are focusing on negative data while there are millions upon millions of "positive" data points buried in old lab books that will never see the light of day.

    When a project's goal doesn't work out or can't be proved, all the data collected, all the reactions, all the intermediates go unpublished. This doesn't mean that the chemistry doesn't work; it's just that the whole story isn't good enough to publish.
    And data, reactions and so on that might be useful to other people are just dying in ancient, crappy lab books around the globe.

    Don't push for publishing negative data; push for open lab books, push for publishing every single page of your lab book, push for a great repository of reactions.

    I'm working on and off on an open lab book project, and I hope to get it started sooner or later (still looking for grants/sponsors and programmers :)

  2. I can agree with Vittorio's comment on open lab books, but there are still unresolved problems there with jeopardising possible future publications, which I hope will be solved soon. In the meantime, I want to support the idea of publishing the data that would not get into regular publications, both positive and negative.

    I once did a project at a pharmaceutical company where I got access to the internal electronic lab books. The intention was to use the negative data to figure out the substrate scope of some commonly used reactions, and maybe get new mechanistic insight into when they would fail. The project was not a success, mostly because the quality of the lab books was not sufficient: there were multiple errors, like mixing up g and kg in the input (resulting in calculated yields close to 100000% ...) and reports of crude instead of purified yields (trials were not always worked up properly). However, I still believe there would be value in such a study, provided the failures were reported properly. There would be a need for some kind of standardized report (yes, an open notebook would be good), a notation that gave computer-readable structures for database mining, and preferably also a unified identification of reaction type, with lead references.
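
    (As an illustration of what such a standardised, machine-readable reaction record might look like, here is a minimal sketch in Python. All field names, thresholds, and checks are hypothetical; they are not taken from the comment above or from any existing system.)

```python
# Hypothetical sketch only: a minimal, machine-readable reaction record with
# sanity checks that would catch the kinds of errors described above
# (unit mix-ups giving impossible yields, crude vs. purified yields).
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class ReactionRecord:
    reactant_smiles: list[str]    # computer-readable structures (SMILES)
    product_smiles: str
    reaction_type: str            # unified reaction-type label, e.g. "Suzuki coupling"
    limiting_reagent_mmol: float
    product_mmol: float
    yield_is_purified: bool       # crude vs. purified must be stated explicitly
    reference: str | None = None  # lead reference, thesis, or lab-book page

    def percent_yield(self) -> float:
        return 100.0 * self.product_mmol / self.limiting_reagent_mmol

    def sanity_check(self) -> list[str]:
        """Return warnings; an empty list means the record looks plausible."""
        warnings = []
        y = self.percent_yield()
        if y > 100.0:
            warnings.append(f"Yield of {y:.0f}% is impossible - check units (g vs kg?).")
        if not self.yield_is_purified:
            warnings.append("Crude yield reported - flag or exclude from scope mining.")
        if not self.reaction_type:
            warnings.append("No reaction-type label - record cannot be grouped for analysis.")
        return warnings


if __name__ == "__main__":
    # A g/kg mix-up: 1 mmol of limiting reagent apparently giving 1000 mmol of product.
    record = ReactionRecord(
        reactant_smiles=["Brc1ccccc1", "OB(O)c1ccccc1"],
        product_smiles="c1ccc(-c2ccccc2)cc1",
        reaction_type="Suzuki coupling",
        limiting_reagent_mmol=1.0,
        product_mmol=1000.0,
        yield_is_purified=True,
    )
    for warning in record.sanity_check():
        print(warning)  # flags the ~100000% yield
```

    In a sketch like this, a record claiming a ~100000% yield would be flagged immediately rather than silently skewing a substrate-scope analysis.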

  3. That robustness screen is awesome. I've been wondering for years why there wasn't a standardized set of starting materials for new reactions.

    There's little more annoying than seeing a promising reaction that has only been tested on benzylic positions (with halogen, alkyl and protected oxygen substituents).

  4. Agreeing with the others, I wonder if the most useful thing would be a system for reporting syntheses of compounds that, for whatever reason, were made but never published in journal articles. Theses are full of these - fully characterised and with properly written-up synthetic descriptions. I suppose what we would need for that is a different kind of publication - one that could be searched, referenced, and give the author some form of credit. Not a new concept by any means - I suppose it needs something big behind it before it would work....

    I often wondered about a database of compounds that people have made and are languishing in vials looking for a user, but I suspect that that would/could never work.

  5. Hi all, thanks for your comments.

    A common theme in all of these proposals is the need for cultural change amongst chemists. If we were to adopt 'open lab book' formats, as Per-Ola points out, there's the risk that the work might be rendered unpublishable (and certainly it'd complicate getting things patented!). I'm not sure how to even approach this problem: it requires a critical mass of researchers publishing in the open book format, and significant support from publishers (whether in helping disseminate open book data, or simply allowing it to be published).

    I like the idea of publishing individual protocols that work but are maybe not very well-developed, rather than letting them languish in lab books. Again, though, it requires a significant number of people to be on board, and a decent means of publication.

    The robustness screen is a cool idea but has a couple of possible limitations. If the reaction being tested has specialised or difficult-to-make starting materials, then it's not as trivial as Collins and Glorius portray it - though if the starting materials are that specialised it's questionable how general the reaction will be anyway. There's also the question of how meaningful the results are, and whether the result of adding a second functional molecule is going to translate to a molecule containing both functions - a problem they have clearly anticipated (Fig. 2 in the paper). The reason I like this approach over other methods of enriching our data is that it requires much less of a cultural change amongst chemists than, say, open lab books, or the kind of database proposed by ajsp.

  6. Andrew,

    The compound database is a little too idealistic, perhaps, but I wonder if it would require THAT much of a change (cf. open lab books) to start to get thesis-fodder into the mainstream.

    In a lot of senses (including the literal) theses are published already. From most institutions they are available publicly in some (often rather fiddly) way, and from many they are freely available online. All we need is for people to start making the information available in a searchable format (ChemSpider was kind of set up for this kind of thing, I suppose), and to start taking citations of theses seriously.

    The main barriers to open lab books are, as you say, problems with patents and limitations on publication. In both cases there are mechanisms in most universities to "embargo" theses for fixed periods (i.e. make them unavailable to the public) until such problems are solved. As long as the reaction submission respected these embargoes, these problems should be avoided. The only limitations I can see are motivation, and a suitably geared-up repository (although ChemSpider is almost there for a lot of fields).

    One thing I don't know is how often references to theses are included in bibliographic stats (e.g. the Thomson h-index). Once people start finding the references trickling in, they would soon realise that it is worth their while participating. The feeling that you are writing an enormous document that NOBODY WILL READ is pretty awful, too. Perhaps getting people to use, or be interested in, this material would also be a motivation for the students who produce it.

    I wonder whether, if a few key institutions started requiring new reactions reported in a thesis to be posted on ChemSpider, or something of that ilk, it would provide sufficient impetus to get things moving? Perhaps they could be submitted to a responsible person at the Uni at the time of final thesis submission.

    Actually, I wonder if even a small but influential group like #chemclub could start things moving.......
