You know that thing that happens when you love writing unit tests so much that you completely forget the argument order of all the assert functions? Yeah, it does happen; I'm sure it is not just me.
You write everything as <code>$this->assertEquals( function_call(), 'expected string' );</code> and then get really confused when PHPUnit tells you it expected the wrong result from the function you haven't implemented yet and received the expected result.
Good: I managed to write tests the wrong way round for two days before I realised, and I wasn't looking forward to going through and correcting them all. But then I remembered the power of <code>sed</code> and started cooking up a recipe.
It turned out pretty simple:
<code>sed -i "s/\(.*assert[^(]*\)( \([^,]*\), \([^)]*\) );/\1( \3, \2 );/" file_of_tests.php</code>
What it does:
- Capture the start of the line up to the call to the assert function in <code>\1</code>
- Capture the two function arguments in <code>\2</code> and <code>\3</code>
- Rewrite the matched line with the arguments switched
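To sanity-check the substitution before pointing it at real test files, here is a dry run over a couple of made-up assertion lines (the function names are invented for illustration); reading from a here-document instead of using <code>-i</code> means nothing on disk is modified:

```shell
# Dry run of the argument swap against sample lines (hypothetical
# assertions, not real tests); no -i, so no files are touched.
sed "s/\(.*assert[^(]*\)( \([^,]*\), \([^)]*\) );/\1( \3, \2 );/" <<'EOF'
$this->assertEquals( function_call(), 'expected string' );
$this->assertSame( get_option( 'foo' ), 'bar' );
EOF
# prints:
# $this->assertEquals( 'expected string', function_call() );
# $this->assertSame( 'bar', get_option( 'foo' ) );
```

Note that the recipe relies on the arguments themselves containing no commas; assertions with multi-argument nested calls would need a smarter pattern.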
That was fun!
Every now and then I find myself needing to quickly analyse a set of access_log files to see who the most common visitors are, so that I can decide whether there are any abusers I should be blocking, or poorly configured services running somewhere that I can try to get fixed. I can never remember the quickest way to do this, so I decided to write down the “one liner” that I cobbled together this time, so that hopefully I can find it next time and not have to reinvent the wheel again.
Here is the one-liner I used to find the top IPs this time:
<code>sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d access.log | sort | uniq -c | egrep -v "\s([0-9]|[0-9][0-9]|[0-9][0-9][0-9]) "</code>
Splitting this out we have:
- A call to <code>sed</code> to extract the IP Address from each line of the access_log file (on a successful substitution the <code>t</code> branches past the following <code>d</code>, so lines without an IP are deleted)
- A call to <code>sort</code> to sort the list of IPs
- A call to <code>uniq</code> to create a list of unique IPs with counts
- A call to <code>egrep</code> to filter the unique list down to IPs with at least 1000 appearances; this will need tuning depending on the volume of requests and the time period the file covers.
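As a quick sanity check, the same pipeline can be pointed at a fabricated log (the IPs and request lines below are invented), with the final <code>egrep</code> tuned down to drop only single-digit counts, since the sample is tiny:

```shell
# Toy run of the pipeline against a fabricated access log (made-up IPs).
# The egrep threshold is tuned down to drop single-digit counts; the
# original version drops anything under 1000.
for i in $(seq 12); do
  echo '10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] "GET / HTTP/1.1" 200 123'
done > /tmp/access.log
echo '192.168.1.9 - - [01/Jan/2024:00:00:01 +0000] "GET /feed HTTP/1.1" 200 456' >> /tmp/access.log

sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d /tmp/access.log \
  | sort | uniq -c | egrep -v "\s[0-9] "
```

Only <code>10.0.0.1</code> (12 hits) survives the filter; the single visit from <code>192.168.1.9</code> is dropped.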
As I sit here blaming code to find the original source of a line, I'm beginning to think that <code>svn</code> needs a new, improved version of <code>blame</code>: <code>spelunk</code>. It would work something like this:
$ svn help spelunk
spelunk (curse, showup, show): Output the content of specified files or URLs with the original revision and author information in-line, ignoring white-space changes and following the movement of code between files.
Yes, I know I am dreaming.
I’ve been wondering for a while if there was a good way of reverse-engineering the meaning/function of a complex regular expression pattern, such as the one used in the <code>make_clickable</code> function in WordPress. This morning, while debugging an issue with this function causing occasional segfaults in PHP, I started searching around for a suitable tool and found <code>YAPE::Regex::Explain</code> to be the only reasonable solution.