
Slight Indexer Change

I think I’m going to have the Indexer generate an index for prefix lengths 1, 2, and 3.  That way, it’ll be much easier to switch between them at generation time, which will let me play around more.  I don’t think it’s necessary to go to lengths of 4 or more, as those lengths are unlikely to produce the level of crazy that I’m looking for here.

Furthermore, I realized that selecting the suffix based on probability vs. even distribution is something that can be handled strictly at generation time.  Just pull the list of possible suffixes into a set, and choose from there.  It’s not as efficient as pre-calculating the two types at index time, but it’s easier to do.
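Since these posts don’t pin down the implementation details, here’s a rough Python sketch of what I mean (the MarkovIndex name and everything inside it are just illustration, not the real code):

    import random
    from collections import defaultdict

    class MarkovIndex:
        """Index suffixes for prefix lengths 1, 2, and 3 in a single pass."""

        def __init__(self, max_prefix_len=3):
            self.max_prefix_len = max_prefix_len
            # Maps a prefix tuple (length 1 to 3) to every suffix that followed it.
            self.table = defaultdict(list)

        def add_text(self, text):
            words = text.split()
            for i in range(len(words) - 1):
                for n in range(1, self.max_prefix_len + 1):
                    if i + 1 >= n:
                        prefix = tuple(words[i + 1 - n:i + 1])
                        self.table[prefix].append(words[i + 1])

        def next_word(self, prefix, weighted=True):
            suffixes = self.table.get(tuple(prefix), [])
            if not suffixes:
                return None
            # Weighted: pick straight from the list, so common pairs win more often.
            # Even: collapse to a set first, so every suffix is equally likely.
            pool = suffixes if weighted else list(set(suffixes))
            return random.choice(pool)

Switching prefix length at generation time is then just a question of how many trailing words get handed to next_word(), and the probability-vs-even choice is the weighted flag.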

Remember, it’s not about getting it done right, it’s about getting it done right now.

September 4, 2010   No Comments

Start of Day 2

At the end of the first day, I completed the engine.  You’re able to feed it text and get constructed gibberish back out.

There are several immediate goals:

  1. Set up persistence.  The indexer is useless if it can’t save and reload the collected strings.  (See the sketch after this list.)
  2. Allow different sources to be stored in different objects.  This will allow me to separate the words of Michele Bachmann from those of Ron Paul when I want to.  Of course, I don’t always want to.  Sometimes, I’m going to want some of that “You got Birther in my End the Fed” brand of crazy, so it has to support that, too.
  3. Have EccoChamber follow Michele Bachmann and Ron Paul.
  4. Preload the engine with required background knowledge.  The collected speeches of Ronald Reagan, the Declaration of Independence, the Constitution and Bill of Rights (redacting those UNAMERICAN SOCIALIST parts like the 1st and 14th Amendments), various writings of Ayn Rand, and, just for fun, random lyrics from showtunes.
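For the persistence piece, serializing the word table to disk would probably be enough.  Here’s a rough sketch, assuming a Python index shaped like the one sketched in the post above (the file format and function names are guesses for illustration only):

    import json

    def save_index(index, path):
        # Tuples can't be JSON keys, so join the prefix words with a separator
        # that should never show up inside an actual word.
        flat = {"\x1f".join(prefix): suffixes for prefix, suffixes in index.table.items()}
        with open(path, "w", encoding="utf-8") as f:
            json.dump(flat, f)

    def load_index(index, path):
        with open(path, "r", encoding="utf-8") as f:
            flat = json.load(f)
        for key, suffixes in flat.items():
            index.table[tuple(key.split("\x1f"))] = suffixes

    def merge_sources(target, *sources):
        # Keeping one index object per source covers goal #2; merging them
        # on demand covers the mixed "Birther in my End the Fed" mode.
        for source in sources:
            for prefix, suffixes in source.table.items():
                target.table[prefix].extend(suffixes)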

September 4, 2010   No Comments

The Thought Processor

>That’s even worse than mine. Ah, he speaks Arabic and I was going to finish any of them, though…
>Sure, why not? ;) At 3:30 am, the whole cross gender thing again… and it’s like from the camera.
>It is. But I’m not even sure how I left yesterday. I hope threers’ not like some people think I’m in eternal debt to my headphones…
>I really need to gnaw on something.
>Ah, well, in that way. It’s not like this which make me hate him or anything (In fact, he doesn’t have individual fingers, just a hazard of the car during an accident. Seatbelts are good at getting 0 all the time when I have something and not walking around, you won’t be able to count at that picture from the left side of the future…
>Nope, that doesn’t work, blame sunspots.
>It means something?
>You’ve never played Castlevania 3??!?!?!?!?!!!!??????????

Back in college, one of the most entertaining class assignments I had was the one for Dr. Matthews’ 341 course, where we learned about hash tables.  The assignment involved reading in a text file from Project Gutenberg, running it through a processor, then spitting out a paragraph or two of English-like sentences.  They weren’t completely random rearrangements of words.  On the contrary, they usually had some form of coherent sentence structure, even though the sentences themselves were often incoherent ((And generally quite entertaining)).

The section at the top of this post is a sample of what this application produces.  The input was some old chat logs.  The output is blazing insanity.

This application, and its ability to produce English-like streams of utter nonsense, will be at the core of the Echo Chamber.  Here’s how it works:

  1. Read in a source of text.
  2. Split the text into words.
  3. For i = 0 to words.length - 2: wordTable[words[i]].Add(words[i + 1])
  4. For x = 0 to outputTerms: outputString += currentWord + " "; currentWord = wordTable[currentWord][rand() % wordTable[currentWord].Count];
  5. Print outputString.

Or something like that.
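In runnable form (Python as a stand-in here; the class assignment and this rewrite could be in just about anything), those five steps come out to something like this:

    import random
    from collections import defaultdict

    def build_table(text):
        # Steps 1-3: split the text and record, for each word, every word that followed it.
        words = text.split()
        table = defaultdict(list)
        for first, second in zip(words, words[1:]):
            table[first].append(second)
        return table

    def generate(table, start, output_terms=20):
        # Steps 4-5: walk the table, randomly picking one of the recorded followers each time.
        current = start
        output = []
        for _ in range(output_terms):
            output.append(current)
            followers = table.get(current)
            if not followers:          # dead end: the last word in the source
                break
            current = random.choice(followers)
        return " ".join(output)

    print(generate(build_table("A B C B D E B A F"), "A"))

The toy input on that last line is the same one walked through below.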

In other words, when processing the file, any time you come across a word, add the word that follows it to a list of words that are associated with the first word.  Then, when producing the output, select a word to get started, then print that word, and randomly select one of the subsequent words to use as the next word, then repeat the print/select process.

For example, if you had the input “A B C B D E B A F”, your table would end up something like this:

A: { B F }; B: { A C D }; C: { B }; D: { E }; E: { B }; F: null

And you’d be able to produce wacky sentences like “A F” or “A B A B A B C B D E B A F”.

Oh, forget it.  That’s too complicated.  Here’s a picture instead:

The key is that it’s not completely random gibberish.  It only contains words that were used in the original source, and more importantly, every pair of words was used in the original source.  So, if you see a string like “America the Hedgehog”, it’s because it started with “America”, randomly chose the word “the”, which was in the source due to the line “America the Beautiful”, then looked up words that followed “the” and found “Hedgehog”, because “Sonic the Hedgehog” was somewhere else in the source.  That’s what gives it a semi-English sentence structure, without any parts of speech analysis and without any knowledge about rules of grammar.  It simply strings together word pairs that a human once used, so verbs follow nouns, articles follow prepositions, and infinitives get split.

In other words, the whole thing is one big iterative “Before and After” puzzle from Wheel of Fortune.

A bunch of lazy people who want you to feel that lightheaded, anxious feeling again. It makes it uncomfortable. ;)

September 3, 2010   No Comments

Unintelligence Design

Making an automated right-wing lunatic will obviously involve several interworking pieces.  I will attempt to replicate the thought process of a right-wing lunatic in a computer program.  ((Arguably, the processing in this program may, in fact, be more complicated than the thought process of an actual right-wing lunatic.))  There are three distinct steps to this process.  First, the lunatic must acquire information from other right-wing lunatics and various other sources.  Second, there is the blender-like garbling of the input data, twisting and distorting it to fit their world view, without regard to relationships between the data points or pesky concepts like “Reality” or “The Truth”.  Third, this garbled datastream is piped somewhere, where it can then become an input source for someone else.

The first stage will be fairly simple.  The wonderful world of technology provides easy access to all sorts of right-wing lunatics for use as input.  There’s Twitter, Facebook, RSS Feeds, etc.

The third stage will also be fairly simple.  It’s called Twitter.  Follow @eccochamber.

The second stage is where most of the work will be.  In its simplest form, this stage is merely a passthrough between steps 1 and 3.  No critical thinking, no verification, just “RT @Nonsense” and done.  That’s decidedly not what I’m going for here.  I want something that will regurgitate words and concepts, but not simply be an echo.  I need something that will produce semi-English sentences that are based on right-wing talking points, yet are not direct copies of anyone in particular.  Moreover, these sentences have to seem real enough to be indistinguishable from things that actual right-wing lunatics say, in order to make people believe that they were written by a human.  A slightly unhinged, reality-challenged human, but a human nonetheless.

That, of course, is the key to this project.  In order to prove that many right-wing lunatics are no better than mindless robots, I intend to create a mindless robot that can pass for a right-wing lunatic.

September 3, 2010   No Comments

I Have A Plan

Let’s see if this works…

March 1, 2010   No Comments

Learning From Mistakes

It made it through Round 1 and got a score of 56 points.  Time to take a step back and analyze what I’ve observed.

  • The biggest problem is losing track of the bomb.  The robot will be on track to catch the lowest bomb when, suddenly, the detection will skip a beat, it’ll take off across the screen after something else, and it can’t get back fast enough to catch the bomb.  I have to fix this first.  I’m pretty sure I have outlier detection from the Pong game that I can reuse here.
  • The segmented calibration that I described back on the first day seems to be working fairly well.  That’s where the robot moves the paddle knob a known number of degrees, then the program sees how far that moved the buckets on screen, and calculates how many pixels a degree is.  This lets the program know that to catch a bomb that’s 70 pixels away, it will have to rotate the paddle 30 degrees.  (See the sketch after this list.)  I suspect that I will have to tweak the algorithm a bit, though.  Mainly, I think it will need to do several calibration turns to refine the numbers.
  • The response time is going to be a killer.  I have little doubt that the motor itself can move the paddle into position fast enough.  When it moves, it zips across the screen.  The problem is that it doesn’t get moving as fast as it needs to and there’s an unacceptable lag between commands.  I think I might still be seeing an echo of the waggle.  It’s not as visible, due to the weight on the spinner assembly now, but I think that delay is still there.  If I can fix some of the other issues, I suspect that the response time is going to prevent the robot from getting past around level 3.
  • The NXT has an inactivity timer.  The robot turned itself off in the middle of a game.  Gotta take care of that…
  • I don’t want to keep hitting the button to start a round.  I’m going to have to implement the button press.  Unfortunately, implementing the button press means I’ll have to change around how the PaddleController class works.  It’s a dirty hack right now, I gotta clean that up.
  • I like that Kaboom! is a much faster game cycle than Pong.  With Pong, I couldn’t always tell what was wrong right away.  Sometimes it took a minute or two to get the ball in a situation where something went wrong.  I’d implement a fix and try again, and again, it would take a minute or two.  Kaboom! doesn’t last that long.  The games at this point are less than a minute long, sometimes much less.  If the robot is going to lose, it’s going to lose fast.  It’s not going to end up in a cycle where the ball keeps bouncing around between the same two points over and over and over for five minutes.  If a game in Kaboom! lasts five minutes, then it’s an amazing success.
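As a concrete sketch of that calibration arithmetic (Python with made-up numbers; the real thing has to talk to the NXT and watch the screen):

    def pixels_per_degree(turn_degrees, pixels_before, pixels_after):
        # Turn the paddle knob a known amount, measure how far the buckets
        # moved on screen, and work out the ratio.  Averaging several of these
        # turns is the "refine the numbers" tweak mentioned above.
        return abs(pixels_after - pixels_before) / turn_degrees

    def degrees_to_target(pixel_distance, px_per_deg):
        return pixel_distance / px_per_deg

    ratio = pixels_per_degree(turn_degrees=30, pixels_before=100, pixels_after=170)
    print(degrees_to_target(70, ratio))   # a bomb 70 pixels away -> 30.0 degrees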

February 28, 2010   No Comments

Finite State Machine

I think I’m going to have to put together a finite state machine for the robot.  The robot for Pong had essentially two states:  “Playing” and “Totally Flipping Out”.  I need to do a better job than that for this one.

Here are the basic states for Kaboom!:

  • Bombs on screen:  Track the lowest bomb.
  • No Bombs on Screen: Press button.
  • No Buckets on Screen:  Hit Game Reset switch.

At this time, it won’t be able to hit the Game Reset switch itself, so it should probably put a notice on the screen for human intervention.  ((And I don’t want to give it the ability to hit the reset switch itself.  You teach it how to do that and you never know what it’s capable of.))  For the “No Bombs on Screen” state, it’s going to have to hit the button, then wait before trying to hit the button again, otherwise it’ll hit the button a second time, before the bombs start to fall, and it’ll lose precious reaction time.  When the bombs are on the screen, it’s going to have to be careful and make sure that it doesn’t get fooled by blips where the lowest bomb disappears for a frame.  However, it can’t be too careful, since it’s going to have to move to the next spot as soon as it catches a bomb.
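Here’s a rough sketch of how those states might hang together (Python, and the class and method names are placeholders, not the actual PaddleController):

    import time

    class KaboomStateMachine:
        BUTTON_COOLDOWN = 1.0   # seconds to wait after a press before pressing again
        BLIP_FRAMES = 3         # frames the bombs must be gone before we believe it

        def __init__(self, paddle):
            self.paddle = paddle
            self.last_button_press = 0.0
            self.missing_bomb_frames = 0

        def update(self, bombs, buckets):
            # Called once per processed frame with whatever the vision code found.
            if not buckets:
                # Game over screen: flag it for a human instead of touching Game Reset.
                print("No buckets on screen -- human, hit the reset switch")
                return
            if bombs:
                self.missing_bomb_frames = 0
                lowest = max(bombs, key=lambda b: b.y)   # largest y = closest to the buckets
                self.paddle.move_to(lowest.x)
                return
            # No bombs: ignore single-frame blips, then press the button once and wait.
            self.missing_bomb_frames += 1
            if (self.missing_bomb_frames >= self.BLIP_FRAMES
                    and time.time() - self.last_button_press > self.BUTTON_COOLDOWN):
                self.paddle.press_button()
                self.last_button_press = time.time()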

For now, I’ll get the “Bombs on Screen” state going, since that’s the one that’s important.  I can hit the button myself for now, as long as I’m careful to keep my fingers clear of the spinning bricks.

February 28, 2010   No Comments

The Game Part 1: Visualization

That’s the game.  Those are the elements I need to recognize and react to.  I’m going to need to find the bombs, find the buckets, and find the bomber.

I could try to do this in pretty much the same way that I did for Pong, but that was a bit hacky and prone to failure.  I’d like to learn a bit more about some of the object detection and recognition features in OpenCV, and see if there’s some way for it to identify the components directly, instead of just assuming that different boxes are different parts of the playing field.  I played with some of that on the Wesley Crusher thing, but that was using ready-made classifiers.  This time I’ll have to do it from scratch and see what happens.

At any rate, the game playing logic should be easier this time around.  I don’t have to deal with bouncing balls and linear regression and all of that.  The bomb tracking should be fairly straightforward.

Anyway, time to break out the OpenCV.
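As a starting point, even plain color thresholding might be enough to pick the bombs and buckets out of the frame before getting into proper object recognition.  A rough sketch using OpenCV’s Python bindings (the HSV ranges are pure guesses and would need tuning against the real capture):

    import cv2
    import numpy as np

    def find_blobs(frame_bgr, hsv_low, hsv_high, min_area=20):
        # Return bounding boxes of regions whose color falls inside the HSV range.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
        # OpenCV 4 returns (contours, hierarchy); [-2] also works on OpenCV 3.
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

    # Hypothetical ranges -- bombs, buckets, and the bomber would each get their own.
    BOMB_RANGE = ((0, 0, 180), (180, 80, 255))
    bombs = find_blobs(cv2.imread("frame.png"), *BOMB_RANGE)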

February 27, 2010   No Comments

PaddleBots