# conorjh

## Pain

John Wood (University College London) Pain, mechanosensation and sodium channels (Brambell Translational Neuroscience Seminar Series, 26/4/2012).

A talk about pain and analgesia: by identifying signalling mechanisms for pain we can find new drug targets; hopefully the neurons involved will be outside the blood-brain barrier, so the drugs will have fewer side-effects. As always with talks about metabolic systems and pathways I didn’t get much out of it. I did learn that the somatosensory pathway has neurons which are pseudo-unipolar, meaning that the axon splits and goes in two directions.

I learned that the origin of pain has been controversial, with a dispute as to whether pain corresponds to the signalling of a neuron which always signals passing beyond a threshold, or to the onset of signalling in a neuron whose function it is to detect pain; the latter is now preferred. Finally, the interaction between the somatosensory system and the sympathetic nervous system is, in the view of the speaker, something that is becoming interesting again; classic work from the fifties and sixties should be revisited, he suggests.

He introduced his talk, about chronic pain, with Frida Kahlo’s powerful painting The Broken Column.

## Movement of crowds and of urban traffic.

Anders Johansson (University of Bristol) Multi-scale human mobility (BCCS seminar Tuesday, 24/4/2012).

A talk about how people move, with sections on people walking in crowds, crowd safety (for example at the Hajj), and traffic and road usage. It was an overview talk, so hard to summarize. One of the most interesting pieces was a video of a large crowd at the Hajj walking to Ramy al-Jamarat for the stoning of the Devil. In the video the density had been color-coded, so that backward-moving stop-go waves were clearly visible. This is apparently the source of danger: you should avoid large unsegregated crowds where complicated internal crowd dynamics occur, so several narrower passages would be safer.

The speaker and his team do crowd modelling by taking video, removing an individual and replacing them with a modelled individual, while leaving the other members of the crowd as before; the error is the displacement of the modelled individual from the position of the individual they replace, and this is then optimized by changing the rules governing the movement of the model. The model is basically a speed adjustment coming from a function of the displacement of nearby individuals; it depends on $r$ and $\theta$, but they assume the function is separable into an $r$ function and a $\theta$ function. It is hard not to worry that the error function is poor, since it doesn’t take into account the adjustments the other individuals would have made in response to the movement of the modelled individual, but I suppose that doesn’t matter for small displacements. Anyway, their model accounts for lane formation in crowds moving in two directions along a corridor. This is some interesting function of the densities, with some sort of lane-formation transition; there was a question about this, but it wasn’t really discussed.

There was also mention of a crowd movement experiment: people were told they had to keep moving and keep within a meter of at least one other person; five people were secretly told to move in a specific way, and this was enough to entrain a crowd of 100. Conversely, in fire alarm tests, if the alarm is a simple bell people often pause and wait for someone else to move first and determine the route; if the alarm gives simple spoken instructions, evacuation happens much quicker. There was also lots about traffic, with courier data for London, but not many details given.

## Direction cells.

Paul Dudchenko (Stirling) The neural encoding of destination and direction. (Brambell Translational Neuroscience Seminar Series, 19/4/2012).

Unfortunately I had to leave this early, so I missed most of the speaker’s own recent work; what I did hear was partly review and partly older work, which was still all new to me.

Rats are known to have direction cells: a given direction cell has a tuning curve and fires when the rat’s head is pointed in a specific direction. If the experimental arena has a visual cue, that seems to be what the rat uses to orient itself: if the arena is circular with only one cue, and the rat is removed from the arena and the cue moved, the direction cell tuning curve is moved by the same amount. If there are no visual cues, the rat will develop a tuning curve anyway, but if a visual cue is introduced, the rat will incorporate this into its sense of orientation: if, more than eight minutes after the cue is introduced, the cue is moved, the direction tuning curve moves by the same amount.

Now, rats also use path integration to determine direction: if a rat is allowed to move from one box to another through a door, the tuning curve in the second box will be close in orientation to the one in the first; in fact, there is a precession of about 20 degrees. One interesting experiment compared visual cues and path integration. The rat spent time in one box (a); it was then moved to another (c), but only after being carried around the lab a bit in a box. The direction tuning curve was different in each box. Now, four boxes were linked by doors: the two boxes the rat had been in before, a and c, and two novel boxes, b and d, with b between a and c and d after c. The rat was allowed to spend time in each box before a door opened, letting it move into the next box; the door then closed. Path integration meant that the tuning curve in b was almost the same as in a. The question was: would c be like b, as dictated by path integration, or as it was before, as dictated by visual cues? The answer was the latter, and when the rat moved into d, its direction tuning curve there was similar to the one for c. Hence, memory of visual cues beats path integration!

## Some emacs commands I don’t use but want to start.

Here are some emacs commands I want to get in the habit of using:

Move forward or back by one page: ^v and Mv

Uppercase and lowercase a word: Mu and Ml

Capitalize a word: Mc

Uppercase and lowercase a selection: ^x^u and ^x^l, it makes a fuss the first time you use ^x^u since new users apparently find it confusing.

Repeat a command [#] times: ^u [#] followed by the command

This is the hard one: skip forward to the matching bracket is M^f, backwards is M^b, but for { . . . }x your cursor has to be on the x.

Also, did you know there is a mark ring: ^x^x brings you to the mark, ^u^spc moves you back through the mark ring, popping the values as you go.

## Relational functions, classes and sort in C++

If you are defining your own relational function for use in sort, why does it have
to be a global function rather than a class method?

Why is it that this works:

#include <cstdlib>
#include <vector>
#include <algorithm>
#include <iostream>

using namespace std;

bool comp(const int & x,const int & y){return x>y;}
class Foo
{
public:

Foo(vector<int> a){this->a=a;sort(this->a.begin(),this->a.end(),comp);}
void print(){for(unsigned int i=0;i<a.size() ;++i){cout<<a[i]<<" ";}cout<<endl;}
private:
vector<int> a;
};

int main()
{
vector<int> a(3,3);
a[1]=1;
a[2]=2;

Foo foo(a);
foo.print();
}

when what's below doesn't.

#include <cstdlib>
#include <vector>
#include <algorithm>
#include <iostream>

using namespace std;

class Foo
{
public:
bool comp(const int & x,const int & y){return x>y;}
Foo(vector<int> a){this->a=a;sort(this->a.begin(),this->a.end(),comp);}
void print(){for(unsigned int i=0;i<a.size() ;++i){cout<<a[i]<<" ";}cout<<endl;}
private:
vector<int> a;
};

int main()
{
vector<int> a(3,3);
a[1]=1;
a[2]=2;

Foo foo(a);
foo.print();
}

Another question is, if the objects being compared, the objects you have a vector of, are themselves a class, why can't you just overload the "<" operator in the class?

## A perceptron for finding a hyper-exponential distribution.

Recently I have been looking at some data, jitter data for spike trains, which may have a hyper-exponential distribution:

$p(t)=\sum p_i s_ie^{-s_i t}$

The idea is that there is a probability $p_i$ of event type $i$, which is in turn exponentially distributed with rate $s_i$; the $p_i$s sum to one. It is hard, even with a ton of data, to fit the parameters, so I thought I might try using a perceptron. I started by changing the sum to an integral, so

$p(t)=\int_0^\infty f(s) s e^{-st} ds$

which looks a lot like a Laplace transform, though it is hard to know what to do with that. Now, this means

$P(t_1\le t \le t_2)=\int_0^\infty f(s) \left[e^{-st_1}-e^{-st_2}\right] ds$

I imagine a situation where $f(s)$ is compactly supported and can be sensibly discretized, $f_i=f(s_i)$. Thinking of the quantities in square brackets as the inputs at the input nodes and the corresponding $f_i$ as the weights, the predicted $P(t_1\le t\le t_2)$ is the output. The corresponding data values were found by interpolation with the $(a-1)$th and $(b+1)$th points, the $a$th to $b$th of the $n$ sorted data points being those between $t_1$ and $t_2$, and

$p=P(t_1\le t\le t_2) = \sum f_i \left[e^{-s_it_1}-e^{-s_it_2}\right] \delta s$

was calculated. The error is now

$E=p-\frac{b-a}{n}$

with $n$ the number of points. The learning rule is then applied:

$f_i\leftarrow f_i-\eta E \left[e^{-s_it_1}-e^{-s_it_2}\right] \delta s$

Evolve until happy.

It didn't really work: starting with some known $f(s)$, it evolves until the error is small and the predicted $p(t)$ looks a lot like the real one, but the recovered $f(s)$ doesn't look much like the original. The lesson seems to be that there are lots of ways to produce more-or-less the same distribution.

The code is at

https://sourceforge.net/p/percepthypexp/

## The Q-Q plot

So a Q-Q plot is a way of comparing two probability distributions:

http://en.wikipedia.org/wiki/Q-Q_plot

For different quantile values you plot the value of one distribution against the other. For sampled data this means that if the two samples have the same size, you sort them and then plot the $i$th entry of one against the $i$th entry of the other. If one sample is a different size to the other, you need to do some sort of interpolation; in the simple C++ code below, the quantiles are defined by the smaller sample and interpolation is used to find the corresponding values in the larger sample. Obviously, if the two samples come from the same distribution, the points should more or less lie on the $x=y$ line. If one is drawn from a distribution linearly related to the other, the points will still lie on a line, but not $x=y$.

I don’t really get it. I can see that it is a good visual aid, and someone experienced with these plots might find them very informative, but if you want to make something quantitative out of the Q-Q plot, I assume you might as well do the Mann-Whitney-Wilcoxon U-test.

http://en.wikipedia.org/wiki/Mann-Whitney_U

I have some code called Q-Q_plot.cpp at:

https://sourceforge.net/p/mnlgeneralcode/

## Declaring two pointers.

It’s

Foo * bar1, * bar2;

note the extra star. Makes sense when you think about it.