Cognitive Algorithms — What can computers learn from humans?

Ricky Ma
4 min read · Oct 4, 2020



Humans have limited cognitive resources, yet this very constraint forces us to solve a variety of complex problems far more efficiently than computers. The efficient cognitive strategies found in humans can be reverse-engineered using resource-rational analysis to find the optimal algorithms that model our behaviour (Griffiths, 2015). Cognitive scientists can use this technique to discover algorithms that may be fundamental to who we are, while analyzing and modeling these algorithms could yield promising solutions to problems in computer science.

This report proposes how cognitive algorithms like selective attention, meta-planning, and iterative spontaneous-deliberate thinking can further research in computer vision, automated planning and scheduling, and artificial creativity.

Attention-optimized computer vision


In humans, our field of vision is limited, as is our capacity to process complex visual scenes. To cope with this limitation, selective attention lets us filter the world so that we attend to what is important and ignore what is not. Many visual search algorithms used in computer vision take a brute-force approach, scanning pixel by pixel until an object of interest is found. This is computationally inefficient and unintelligent. One of the five factors that guide our selective attention is scene structuring and reasoning (Wolfe, 2017). This cognitive algorithm focuses our attention on areas likely to contain targets, since not every area in our field of vision will contain an object we are looking for. Using this concept, visual search algorithms in CV can be optimized by creating a low-resolution representation of a scene and increasing the resolution only in areas likely to contain a target (e.g. when looking for actors on a stage, dim areas should be ignored, while bright areas should be attended to). A visual search algorithm that incorporates selective attention can thus be far more efficient than brute-force scanning.
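To make this concrete, here is a minimal, hypothetical sketch of attention-guided search (the function names, grid representation, and brightness threshold are all illustrative assumptions, not an established CV algorithm): the scene is first averaged into a coarse "gist", and full-resolution inspection happens only inside coarse cells bright enough to plausibly contain a target.

```python
# Hypothetical sketch of attention-guided visual search: build a
# low-resolution gist of the scene first, then inspect at full resolution
# only the regions likely to contain a target (the bright areas).

def downsample(scene, block):
    """Average each block x block tile into one low-resolution cell."""
    h, w = len(scene), len(scene[0])
    coarse = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            tile = [scene[y][x]
                    for y in range(i, min(i + block, h))
                    for x in range(j, min(j + block, w))]
            row.append(sum(tile) / len(tile))
        coarse.append(row)
    return coarse

def attentive_search(scene, target, block=2, threshold=0.3):
    """Search at full resolution only inside promising coarse cells."""
    coarse = downsample(scene, block)
    inspected = 0
    found = []
    for ci, row in enumerate(coarse):
        for cj, brightness in enumerate(row):
            if brightness < threshold:   # dim region: skip, like a dark stage
                continue
            for y in range(ci * block, min((ci + 1) * block, len(scene))):
                for x in range(cj * block, min((cj + 1) * block, len(scene[0]))):
                    inspected += 1
                    if scene[y][x] == target:
                        found.append((y, x))
    return found, inspected

# A 4x4 scene: the target value 1.0 sits in the bright upper-left quadrant.
scene = [
    [0.9, 1.0, 0.0, 0.0],
    [0.8, 0.7, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.1],
]
hits, inspected = attentive_search(scene, target=1.0)
print(hits, inspected)  # finds the target after inspecting only 4 of 16 pixels
```

Here only one of four coarse cells passes the brightness threshold, so just a quarter of the scene is ever examined at full resolution; a brute-force scan would touch all 16 pixels.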

Meta-planning strategies for A.I.


High-level planning and executive function algorithms are currently a uniquely human trait. They play a large role in prediction, imagination, and decision-making (Miller, 2001). Activities like entrepreneurship, with its detailed envisioning of and planning for future scenarios, are beyond what computers can currently do. Planning is hard — it requires thinking about future consequences while consuming limited computational and cognitive resources. Hence, people “plan their plans” (Ho, 2020). For example, suppose you have an app idea and want to reach 1 million users. You might reason, “I will first hire a CTO. Then, the CTO can help hire junior developers to make the app.” Although this plan seems straightforward, it leaves out future details: What do the developers do? How do we get funding? Rather than mapping out a step-by-step path from startup to billion-dollar company, you might simply plan to think about funding later, once you have developers. Human meta-planning provides insight into how computer scientists can optimize their planning algorithms. Plans are constantly changing and contain different details at different times, and detailed plans are computationally costly to maintain. Thus, efficient planning algorithms should plan their plans: decide which details to include in a plan, and when.
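The idea of deferring detail can be sketched in a few lines. This is a hypothetical illustration, not an established planner: the `LazyPlan` class and its expander functions are invented names, and the point is only that an abstract step ("make the app") is expanded into sub-steps the moment it is reached, never earlier.

```python
# Hypothetical sketch of "planning your plans": a plan stays abstract, and a
# step is expanded into concrete sub-steps only when it is about to be
# executed, so detail is never computed (or maintained) earlier than needed.

class LazyPlan:
    def __init__(self, steps, expanders):
        self.steps = list(steps)    # high-level steps, some still abstract
        self.expanders = expanders  # abstract step -> function yielding sub-steps
        self.expansions = 0         # how much detailed planning we actually did

    def next_action(self):
        """Pop the next concrete action, expanding abstract steps on demand."""
        while self.steps:
            step = self.steps.pop(0)
            if step in self.expanders:       # abstract: flesh it out only now
                self.expansions += 1
                self.steps = self.expanders[step]() + self.steps
            else:                            # concrete: ready to execute
                return step
        return None

# Startup example from the text: "make the app" is only detailed when reached,
# and "get funding" stays a single vague step until much later.
plan = LazyPlan(
    ["hire CTO", "make the app", "get funding"],
    {"make the app": lambda: ["hire junior devs", "build MVP"]},
)
actions = []
while (a := plan.next_action()) is not None:
    actions.append(a)
print(actions)          # ['hire CTO', 'hire junior devs', 'build MVP', 'get funding']
print(plan.expansions)  # 1 -- only one step ever needed detailed planning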

Artificial creativity and artificial intelligence


Creativity is another human trait in which the brain still dominates. Current machine learning algorithms can create music in the style of Bach (Gaëtan, 2017) and even generate stories and news articles nearly indistinguishable from those written by human authors (Brown, 2020). These tasks, however, are considered “small-c” creativity: the models are trained on data that already exists, and the new material they produce is really just a recombination of their existing knowledge. Computers lack “Big-C” creativity — the ability to produce new ideas, creations, or theories beyond the current paradigms. Recent studies have revealed human creativity to be a complex interplay between spontaneous and deliberate thinking — a process involving the cooperation of the default and executive control brain networks (Beaty, 2016). One must first spontaneously generate ideas, and then deliberately evaluate them to decide which ideas are good. In terms of computational models, this means a cognitive algorithm for creativity must mix probabilistic and deterministic components: iterations of random idea generation interlaced with systematic selection of “good” ideas. What results from such models may bring us closer to the “Big-C” creativity found in humans.
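A toy version of that generate-and-evaluate loop might look like the following. Everything here is an assumption for illustration: the fragment vocabulary, the `evaluate` scoring function (a trivial stand-in for any real quality model), and the loop structure simply demonstrate a stochastic "spontaneous" phase alternating with a deterministic "deliberate" phase.

```python
import random

# Hypothetical sketch of the spontaneous/deliberate loop described above:
# a stochastic generator proposes candidate ideas (default-network analogue),
# and a deterministic evaluator keeps only the best (control-network
# analogue). Iterating the two phases refines the idea pool.

def generate(pool, rng, n=10):
    """Spontaneous phase: randomly recombine fragments into candidate ideas."""
    fragments = ["neural", "quantum", "social", "music", "planning", "vision"]
    return pool + [tuple(rng.sample(fragments, 2)) for _ in range(n)]

def evaluate(idea):
    """Deliberate phase: a deterministic (here, toy) quality score."""
    return len(set("".join(idea)))  # stand-in for a real evaluation model

def creative_search(iterations=5, keep=3, seed=0):
    rng = random.Random(seed)
    pool = []
    for _ in range(iterations):
        pool = generate(pool, rng)                              # diverge
        pool = sorted(pool, key=evaluate, reverse=True)[:keep]  # converge
    return pool

best = creative_search()
print(best)  # the highest-scoring idea pairs after 5 generate/select rounds
```

Separating the random generator from the deterministic selector makes each half swappable: a language model could replace `generate`, and a learned critic could replace `evaluate`, without changing the loop itself.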

Looking ahead

The three cognitive computer science topics discussed here are promising areas in which computers may come to emulate humans. Computers, however, still fall far short of humans in traits like emotion, empathy, and consciousness. While these may be impossible to imitate perfectly, computer scientists can benefit greatly from discovering, studying, and modeling the cognitive algorithms we find in ourselves.

