The military is funding the development of binoculars that will activate an automated threat detection and tracking system by measuring brain wave response in soldiers. Theoretically, soldiers in dangerous situations would cue the HUD (Heads Up Display) to scan the entire area for potential enemy threats, highlighting them for the soldier’s attention.
Here is an artist’s wild interpretation of what one of these HUDs might look like if the soldier happened to find himself floating in space…
Solar power has seen a resurgence in research, and since it’s my favorite power generation method, I’ve been following it closely. A basic (photovoltaic) solar power array is made up of several silicon solar panels that convert photons directly into electricity. The quality of a solar cell is usually judged by its efficiency, which measures how much of the incident sunlight is converted into electricity. Traditional solar cells have an efficiency of 12% to 18% and are fairly expensive: a residential solar installation might cost about 6 dollars per watt. The advantage is that after the installation is complete, every drop of energy from the system is free, possibly eliminating power bills forever. In some states, power companies even pay customers for excess energy fed back into the grid. One caveat: at night, power must come either from batteries that stored excess energy during the day, or from the local power grid. In order to make solar a more viable alternative form of energy, efficiencies need to go up and cost needs to come down.
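To put that cost figure in perspective, here’s a rough back-of-the-envelope payback estimate using the ~$6-per-watt number above. The array size, sun hours, and electricity price are my own illustrative assumptions, not figures from any particular installation.

```python
# Rough payback estimate for a residential solar install.
# Only the $6/watt figure comes from the text above; the rest are
# illustrative assumptions.

COST_PER_WATT = 6.00       # installed cost, $/watt (from the text)
ARRAY_WATTS = 5_000        # assumed 5 kW residential array
SUN_HOURS_PER_DAY = 5      # assumed average full-sun hours per day
PRICE_PER_KWH = 0.12       # assumed utility rate, $/kWh

install_cost = COST_PER_WATT * ARRAY_WATTS                    # $30,000
kwh_per_year = ARRAY_WATTS / 1000 * SUN_HOURS_PER_DAY * 365   # ~9,125 kWh
savings_per_year = kwh_per_year * PRICE_PER_KWH               # ~$1,095/yr
payback_years = install_cost / savings_per_year

print(round(payback_years, 1))  # → 27.4
```

Decades to break even under these assumptions, which is exactly why the efficiency and cost improvements below matter.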
Thankfully, there have been a plethora of recent developments in solar technology:
Solar cells built from popcorn-ball-shaped clusters of dye-coated zinc oxide grains show a 6.2% efficiency; continued progress could render traditional solar cells obsolete. Link
Extremely cheap nanowires may soon match traditional solar cell efficiencies, combining power with much more affordable production costs. Link
Printable solar panels dramatically reduce costs using the technology in your inkjet printer. They also allow solar cells to be produced outside a clean room environment and on virtually any surface. Link
Extensive solar arrays (280 megawatts) are going up in Arizona by 2011. Link
And, my very favorite concept, the space-based solar array. Various groups, including the Pentagon, have considered solving the world’s energy needs using a truly massive solar array orbiting the Earth. Power could be continually beamed down from the array in the form of microwaves or lasers. Advantages would include 24/7 solar input (no night), access to power in remote regions of the world, complete energy independence, and zero pollution/carbon emissions. Also, provided I get my hands on the controls, a giant ion/beam/laser cannon. And we all know how awesome that would be, right?
I recently read an article on modern A.I. development and integration. This comes at an opportune time, as I continue trying to develop my own A.I. for various systems (i.e. games). One of the hardest pieces of code to implement in any software is teachable A.I. Normally, A.I. is static and responds to any given situation using predefined, hard-coded rules. While it is possible to create a very believable and intricate service or opponent (in a game), in the end a user can identify the patterns and rules the A.I. follows in response to given stimuli. With a static A.I., the more time you spend hard-coding possible situations, the more believable the response, because it has been more finely tuned to the stimuli it might receive. There is nothing wrong with this approach, but it obviously has its limitations.
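As a toy illustration of that static approach (the action names and thresholds here are hypothetical, invented just for this sketch):

```python
# A toy static game A.I.: every response is a predefined, hard-coded rule.
# Action names and thresholds are hypothetical, for illustration only.

def static_ai(player_distance, player_health):
    """Pick an action from fixed if/else rules -- no learning involved."""
    if player_distance < 5:
        return "melee_attack"
    elif player_health < 30:
        return "chase"           # press the advantage on a weak player
    elif player_distance < 20:
        return "ranged_attack"
    else:
        return "patrol"

# A player can probe these rules and quickly spot the pattern:
print(static_ai(3, 100))    # → melee_attack, every single time
print(static_ai(50, 100))   # → patrol, every single time
```

More branches make it more believable, but the same inputs always produce the same output, which is exactly the pattern players eventually exploit.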
Maybe the ultimate Artificial Intelligence will be built upon familiar Human Intelligence.
Modern A.I. is moving toward a heuristic approach. Heuristics (defined by Merriam-Webster as “of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance”) take a learning approach. Instead of hard-coding possible inputs and outputs, the A.I. is given a few ground rules to follow, along with the ability to form new sub-rules based on input. This allows the A.I. to ‘learn’ on a basic level. I find it interesting that our most advanced A.I. functions much like a human infant or toddler. Humans have found that in order to create true artificial intelligence, we must start treating our advances as human intelligences.
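A minimal sketch of that “evaluation of feedback” idea, assuming a simple weight-per-action scheme of my own (not any particular published technique): the list of allowed actions is the fixed ground rule, and feedback adjusts how strongly each action is preferred.

```python
# A minimal 'self-educating' A.I. sketch: the ground rules (allowed actions)
# are fixed, but the weight on each action is adjusted from feedback, so
# behavior changes with experience. Action names are illustrative.

ACTIONS = ["melee_attack", "ranged_attack", "retreat"]
weights = {a: 1.0 for a in ACTIONS}   # all actions start equally preferred

def choose():
    """Greedily pick the action with the highest learned weight."""
    return max(weights, key=weights.get)

def learn(action, reward):
    """Evaluation of feedback: reinforce what worked, decay what didn't."""
    weights[action] = max(0.1, weights[action] + 0.5 * reward)

# Simulated feedback: pretend only ranged attacks succeed against this player.
for _ in range(20):
    a = choose()
    learn(a, reward=1 if a == "ranged_attack" else -1)

print(choose())  # → ranged_attack
```

The hard-coded part has shifted from the responses themselves to the rules for updating them, which is the “hard-coded learning” trade-off described below.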
There are still limitations in modern heuristics: a few basic ground rules must be hard-coded for the system to work, and the range of learning is often determined by how much time a human has spent on the scope of the “learning code.” One step back has taken us from hard-coded responses to hard-coded learning, and as a result A.I. has grown significantly in capability, but what will it take for the next leap? Another step back, maybe? What is it about humans that allows us to learn and grow in intelligence? I like to think it has a little something to do with the way our brains are wired. It’s all about connections, synapses, in our own brains. How do you allow an A.I. to make its own connections in this way?
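On the synapse question: one direction researchers have long explored is the artificial neural network, where each “connection” carries a numeric strength that feedback adjusts. A minimal single-neuron sketch (a classic perceptron learning the logical AND function; everything here is illustrative, not from the article I read):

```python
# Synapse-like connections in miniature: one artificial neuron with two
# input connections plus a bias, each a strength adjusted by feedback.
# Here it learns the logical AND function. Illustrative sketch only.

def step(x):
    """Fire (1) if the combined input reaches the threshold, else stay quiet."""
    return 1 if x >= 0 else 0

w = [0.0, 0.0]        # connection strengths, starting at zero
bias = 0.0
LEARNING_RATE = 0.1

training = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                       # repeated exposure, like practice
    for (x1, x2), target in training:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out
        # 'Synapse' strengths shift toward whatever reduces the error.
        w[0] += LEARNING_RATE * err * x1
        w[1] += LEARNING_RATE * err * x2
        bias += LEARNING_RATE * err

print(step(w[0] * 1 + w[1] * 1 + bias))   # → 1 (AND of 1 and 1)
print(step(w[0] * 1 + w[1] * 0 + bias))   # → 0 (AND of 1 and 0)
```

Nobody hard-coded what AND means; the connection strengths settled there on their own, which feels a lot closer to the wiring idea than rule tables do.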
While I think about that, share your ideas, and maybe we can put together Skynet or some Cylons.