Britney in 5 minutes

About 26-28 hours ago in #debian-release on IRC:

<nthykier> damn, a britney run in 5 minutes
<adsb> they happen
<adsb> you've been spoilt by never seeing b1 at her "finest" *cough*
<aba> you mean, running for more than a day?
<Ganneff> adsb wants night-long runs?
<aba> I can remember runs where we had to block certain packages to
      make sure the run could actually end *sometimes*
<adsb> I'm quite happy with just the memory of that sort of run,
       thanks :P
<nthykier> I don't mind being spoiled if it stays at 5 minutes :P

I took the liberty of collecting the resulting data for the Britney test suite. In its reduced state[1] it runs in 30 seconds on my machine.  It is already my favourite live data sample in the test suite.  😀

[1] Only i386 and amd64 are considered, manual hints are ignored etc.

This entry was posted in Debian, Release-Team.

4 Responses to Britney in 5 minutes

  1. I kinda stopped working on SAT-Britney, because I kept failing to speed it up. But you made me curious, so here are the timings with SAT-Britney (using the reduced set):

    Running live-2011-12-13… done (576.703s)
    Running live-2011-12-20… done (355.397s)
    Running live-2012-01-04… done (146.579s)

    As expected it is slower than britney2 (but hey, that’s what you get for completeness), but at least it seems to do the right thing. For comparison, here are the current britney2 timings on my machine:

    Running live-2011-12-13… done (264.755s)
    Running live-2011-12-20… done (128.561s)
    Running live-2012-01-04… done (44.766s)

    Extrapolating from this, SAT-Britney takes about twice as long on long runs, plus a higher constant cost on shorter runs (which is not surprising, as SAT-Britney does not treat “easy” cases individually).
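    As a rough sanity check of that extrapolation, one can fit a line through the longest and shortest runs quoted above (the fit itself is my own back-of-the-envelope sketch, not something from the comment):

    ```python
    # Timings quoted above, in seconds, newest test last.
    britney2 = [264.755, 128.561, 44.766]
    sat      = [576.703, 355.397, 146.579]

    # Fit sat ≈ slope * britney2 + overhead through the first and last runs.
    slope = (sat[0] - sat[2]) / (britney2[0] - britney2[2])
    overhead = sat[0] - slope * britney2[0]
    print(f"slope ≈ {slope:.2f}, constant overhead ≈ {overhead:.0f}s")
    # → slope ≈ 1.96, constant overhead ≈ 59s
    ```

    So "about twice as long plus a constant cost" matches the endpoints reasonably well, though the middle run is noticeably above the fitted line.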

    • I must admit I thought SAT-Britney was faster than Britney2 as well, so I was a bit surprised. 🙂 Then again, I thought Britney2 was slower than she is. 😛

      Unfortunately (as you suggest) Britney2 is probably not as smart as SAT-Britney. One of the reasons why I collected the 2011-12-20 sample was the Haskell hint you sent us. I figured it would be a good start for a real test-case for the auto-hinter. 😉

      Speaking of which, could I convince you to commit your resulting HeidiResult files (e.g. as “live-data/$TEST/sat-result”)? Eventually I need to improve the smarts of Britney2 and I suspect the SAT-Britney results are closer to the “real” answer than the current Britney2 results.

      • You got me wrong, I indeed expected SAT-Britney to be slower than britney2, but I was happy that it was not too far off…

        Hmm, only now I notice that the “expected” file for these tests is empty. And I was already glad that SAT-Britney gives the same results as britney2… Anyways, I’m running the tests right now and will commit them afterwards.

      • Right, make that s/as well//. What I wanted to say was that I had the misunderstanding on this based on a comment you made (on IRC months ago).

        In relation to the tests: I use them mostly as crash tests and regression tests (with the “test-branch” script). I should probably also fix the “Dates” files in these tests soon (they need to be updated, or the result may differ 10 days after the test was collected).

        If you want to compare results, you can run Britney2 on the tests, copy the result files to live-data/$TEST/expected and set “Ignore-Expected” to “no” in live-data/$TEST/test-data. For added fun, you can throw in a couple of extra architectures as well. 🙂
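        Sketched out, the comparison workflow above looks roughly like this (the live-data/$TEST layout and the “Ignore-Expected” field come from the comment; the fixture contents and exact file names are assumptions for illustration, as a real test checkout contains full archive snapshots):

        ```python
        import os
        import shutil
        import tempfile

        # Demo fixture standing in for one live-data test checkout
        # (placeholder contents; file names are assumptions).
        root = tempfile.mkdtemp()
        test = os.path.join(root, "live-data", "live-2012-01-04")
        os.makedirs(test)
        with open(os.path.join(test, "HeidiResult"), "w") as f:
            f.write("some-package 1.0 source\n")   # britney2's result file
        with open(os.path.join(test, "test-data"), "w") as f:
            f.write("Ignore-Expected: yes\n")

        # 1. Copy the run's result into the test's expected/ directory.
        os.makedirs(os.path.join(test, "expected"))
        shutil.copy(os.path.join(test, "HeidiResult"),
                    os.path.join(test, "expected", "HeidiResult"))

        # 2. Flip Ignore-Expected to "no" so future runs are compared
        #    against the committed expected result.
        td = os.path.join(test, "test-data")
        with open(td) as f:
            text = f.read()
        with open(td, "w") as f:
            f.write(text.replace("Ignore-Expected: yes",
                                 "Ignore-Expected: no"))
        ```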
