Saturday, July 21, 2007
try to keep your mind to yourself until you see
what it's like to want to destroy.
And if you have an anger, don't give much for men,
try to keep your hand behind your mouth unless
a greater love behind it grows.
Friday, July 13, 2007
It would appear I am completely mad. Or at least, I'm committing myself more and more every day to a course of action with ill-defined goals, fuzzy methods and little chance of success...but it's so damn sexy!
Still, people should be more easily classifiable. I can't see why everyone has to meld and unbind, non-conformist to the neat typological boxes I have prepared for them. Damn them all.
It could be that my supervisor is correct, and Pacman simply won't serve to distinguish players. Or it could be that everything will be fine, once the data is processed, and any current worries are just a matter of zoom perspective. Can't see the wood for the trees - but when I focus on the wood, I forget the nature of the trees.
Monday, July 09, 2007
Nevertheless, since neither can know what the other will do, the rational move is always to betray the other in the hope of going free, and both end up with middling punishments instead of the lightest. This is only altered when the game is iterated, so that punishment for betrayal (i.e. future betrayal on the part of the betrayed) becomes a decisive factor.
It is interesting both as a mathematical problem and a philosophical/ethical one. But the really interesting thing is that such a simple choice begets such complexity when presented at the same time to more than one decision maker (or agent, in the terminology). The reciprocal effects drive the potential complexity through the roof - in other words, trying to calculate, or take account of, the actions of the other party, knowing they are doing the same for you, and that they know you are trying to predict their actions when deciding your own, and so on ad infinitum...but where does that break down when the agents are fallible humans? How much calculation can one human do?
This is what makes player modelling a hard problem - you can't reduce it to a case of agents.
*Although actually, the prisoner's dilemma, a new formulation by moi: wank or work out!
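The iterated game described above is simple enough to sketch in a few lines. The following is a minimal illustration, not anything from the original discussion: the strategy names, payoff values, and round count are all my own illustrative choices (the payoffs follow the usual convention that temptation > reward > punishment > sucker).

```python
# Illustrative payoff matrix: (my payoff, their payoff) for each pair of
# moves, where "C" = cooperate (stay silent) and "D" = defect (betray).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the middling punishment
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move -
    this is where 'punishment for betrayal' enters the iterated game."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """The single-shot 'rational' move, played unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return the two cumulative scores."""
    hist_a, hist_b = [], []  # each list records the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Over ten rounds, two tit-for-tat players cooperate throughout and each score 30, while two unconditional defectors score only 10 each - which is the point made above: once the game is iterated, betrayal stops being the obviously rational move.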
Sunday, July 08, 2007
Despite the corny sponsor's advertising and the elitism inherent in the opening blurb (which I'm sure most of the participants would reject) - these TED talks are really worth checking out. Not only are interesting things explained, but they're explained by experienced public speakers, so it's good entertainment.
Anyway, I really liked this talk because Mr. Dennett (or Danny D, as I like to call him) clarifies some thinking that I've had for a long time, but it never crystallised for me until now. So check it out...
Thursday, July 05, 2007
"Are you righteous? Kind? Does your confidence lie in this? Are you loved by all? Know that I was, too. Do you imagine your suffering will be any less because you loved goodness and truth?"
Why does a person exist? What reason can there be to be conscious, feeling, capable of reason and emotion? Do these things, the trappings of the self, lend any value to our lives?
There is something to be said for wisdom, for the slow evolution of knowledge into greater understanding, compassion, and even altruism. For the individual, this can be a lifelong aim. But what are its larger social implications? For if living a better and better life is the goal of life, this is no more intrinsically valuable on the very large scale than living a horrible, miserable life. Because really, if the overall aim of higher learning becomes greater altruism, we've simply recursed back to propagation of the species. And if one seeks a goal beyond the self to motivate the betterment of the self, what need is there to look as far as wisdom? Build a family, a community to house it and leave some happy DNA behind. Who is to say there is greater worth in good works on a large scale than in a good life on a small scale?
Can we look beyond our selves and our DNA propagation for motivation to betterment? Progress alone seems to be a moonwalk, desperately racing to stay still. We cure our diseases, improve our nutrition, live longer and slowly kill the environment that is the backbone of our existence. Will we end up in UV-shielded floating biodomes, eating goop produced by the bacteria we trail beneath us in giant permeable microhorticulture farms, far enough below the surface of the ocean to protect their unicellular structures from the sunlight that kills everything else with now-unreflected radiation? [I wonder, would that work?]
So, what are we doing it all for? What is there beyond our own survival?
We theorise the self to be an evolutionary advantage, a cognitive capacity that allows non-instinctive behaviours and thus progressive goal-making beyond simple personal survival. But as noted, if you look closely enough, it's hard to prevent these higher goals from looking recursive. If we had outlived any other species whose extinction we weren't directly responsible for, we might be able to say that consciousness is a mutation that was a true advantage in the evolutionary stakes. But on the evidence, all we can say is that in our class of lifeform, it will possibly be responsible for our being the last of the extant lifeforms.
So what is your self-hood there for? Why has this collaborative collection of cellular robots evolved to embody a higher order mind?
Who are you?