New Memory Research Teases 100x Density Jump, Merged Compute and Memory

A 10 to 100 times storage density jump? We'll take that as soon as possible, please.


New research along the frontiers of materials engineering holds promise for a truly astounding performance improvement in computing devices. A research team led by Markus Hellenbrand at the University of Cambridge believes its new material, based on hafnium oxide layers bridged by voltage-tunable barium spikes, fuses the properties of memory and processing materials. That means devices built from it could work for data storage, offering anywhere from 10 to 100 times the density of existing storage media, or could serve as processing units.

Published in the journal Science Advances, the research charts a path toward far greater density, performance, and energy efficiency in our computing devices. So much so, in fact, that a typical USB stick based on the technology (called continuous range) could hold between 10 and 100 times more information than the ones we currently use.

With RAM density doubling every four years, as pointed out by JEDEC, it would take RAM makers decades to reach the level of density this technology has shown today.
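As a back-of-the-envelope check, steady doubling every four years closes a 10x to 100x gap only after one to several decades. A minimal sketch (the four-year doubling period is the JEDEC trend cited above; the rest is simple arithmetic):

```python
import math

DOUBLING_PERIOD_YEARS = 4  # density doubles every ~4 years (JEDEC trend)

def years_to_reach(density_factor, doubling_period=DOUBLING_PERIOD_YEARS):
    """Years of steady doubling needed to multiply density by `density_factor`."""
    return math.log2(density_factor) * doubling_period

print(f"10x:  {years_to_reach(10):.1f} years")   # ~13.3 years
print(f"100x: {years_to_reach(100):.1f} years")  # ~26.6 years
```

So even the low end of the claimed range represents more than a decade of conventional scaling.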

The device is also a light at the end of the tunnel for neuromorphic computing. Like the neurons in our brain, the material (known as a resistive switching memory) holds the promise of working as both a storage and a processing medium. That's something that simply doesn't happen in current semiconductor technology: the transistor and materials design requirements are so different between a memory cell and a processing one (mainly in terms of endurance, meaning the ability to avoid performance degradation) that there's currently no way to merge them.

This inability to merge them means that information must flow continuously between the processing system and its various caches (in a modern CPU), as well as its external memory pool (looking at you, best DDR5 kits on the market). In computing, this is known as the von Neumann bottleneck: a system with separate memory and processing capabilities will be fundamentally limited by the bandwidth between the two (usually known as the bus). This is why semiconductor design companies (from Intel through AMD, Nvidia, and many others) design dedicated hardware to accelerate this exchange of information, such as Infinity Fabric and NVLink.

The problem is that this exchange of information has an energy cost, and this energy cost is currently limiting the upper bounds of achievable performance. Remember that when energy circulates, there are also inherent losses, which result in increased power consumption (a current hard limit on our hardware designs and a growing priority in semiconductor design) as well as heat — yet another hard limit that's led to the development of increasingly exotic cooling solutions to try and allow Moore's law to limp ahead for a while yet. Of course, there's also the sustainability factor: it's expected that computing will consume as much as 30% of the worldwide energy needs in the not-so-distant future.
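To get a feel for the scale of that data-movement cost, here is an illustrative comparison; the energy figures are rough, commonly quoted order-of-magnitude values for ~45 nm silicon, not numbers from this research:

```python
# Illustrative energy costs, as commonly quoted for ~45 nm silicon
# (order-of-magnitude figures, not measurements from this research):
DRAM_ACCESS_PJ = 640   # fetching one 32-bit word from off-chip DRAM
FP_ADD_PJ = 0.9        # performing one 32-bit floating-point addition

ratio = DRAM_ACCESS_PJ / FP_ADD_PJ
print(f"Moving a word from DRAM costs ~{ratio:.0f}x a floating-point add")
```

In other words, fetching an operand from external memory can cost hundreds of times more energy than computing with it, which is exactly why collapsing memory and compute into one device is so attractive.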

“To a large extent, this explosion in energy demands is due to shortcomings of current computer memory technologies,” said first author Dr. Markus Hellenbrand, from Cambridge’s Department of Materials Science and Metallurgy. “In conventional computing, there’s memory on one side and processing on the other, and data is shuffled back and forth between the two, which takes both energy and time.”


The benefits of merging memory and processing are quite spectacular, as you might imagine. While a conventional memory cell is capable of just two states (one or zero, hence the "binary" nomenclature), a resistive switching memory device can change its resistance across a range of states. This lets it hold a wider variety of voltage levels, which in turn allows more information to be encoded. At a high level, this is much the same process at work in the NAND realm, where increases in bits per cell correspond to a higher number of possible voltage states in the memory cell's design.
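The relationship between distinguishable levels and encoded bits is simply a base-2 logarithm; a short sketch (the 1024-level figure at the end is purely illustrative, not a claim from the paper):

```python
import math

def bits_per_cell(states):
    """Number of bits a cell can encode given `states` distinguishable levels."""
    return math.log2(states)

# SLC/MLC/TLC/QLC NAND: 2, 4, 8, 16 levels -> 1, 2, 3, 4 bits per cell.
for name, levels in [("SLC", 2), ("MLC", 4), ("TLC", 8), ("QLC", 16)]:
    print(f"{name}: {levels} levels -> {bits_per_cell(levels):.0f} bits/cell")

# A hypothetical continuous-range cell resolving, say, 1024 distinct levels
# would encode 10 bits -- the level count is made up for illustration.
print(f"1024 levels -> {bits_per_cell(1024):.0f} bits/cell")
```

Each doubling of the number of resolvable voltage states buys exactly one more bit per cell, which is why a "continuous range" of states is such a big deal for density.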

One way to differentiate processing from storing is to say that processing means information undergoes writes and rewrites (additions or subtractions, transformations or reorganizations) as fast as switching cycles are requested. Storing means the information needs to remain static for a longer period of time, perhaps because it's part of the Windows or Linux kernel, for instance.

To build these synapse devices, as the paper refers to them, the research team had to overcome a materials engineering hurdle known as the uniformity problem. Because hafnium oxide (HfO2) possesses no structure at the atomic level, the hafnium and oxygen atoms that can make or break its insulating properties are deposited haphazardly. This limits its usefulness for conducting electrons: the more ordered the atomic structure, the lower the resistance, and thus the higher the speed and efficiency. But the team found that depositing barium (Ba) within thin films of unstructured hafnium oxide resulted in highly ordered barium bridges (or spikes). And because their atoms are more structured, these bridges can better allow the flow of electrons.

Electron imaging

But the fun began when the research team found they could dynamically change the height of the barium spikes, allowing fine-grained control of their electrical conductivity. The spikes offered switching speeds of ~20 ns, meaning they could change their voltage state (and thus hold different information) within that window, with switching endurance of >10^4 cycles and a memory window of >10. This means that while the material is fast, the maximum number of voltage state changes it can currently withstand is around 10,000 cycles: not a terrible result, but not an amazing one.

That's roughly the endurance of MLC (Multi-Level Cell) NAND, which naturally limits one application in particular: using the material as a processing medium, where voltage states are rapidly changed to keep a store of calculations and their intermediate results.

Doing some rough napkin math, the ~20 ns switching time corresponds to an operating frequency of 50 MHz (one switch every 20 ns). With the system processing different states at full speed (working as a GPU or CPU, for instance), the barium bridges would hit their endurance limit after about 0.0002 seconds: 10,000 cycles at 50 MHz. That doesn't seem like it could be performant enough for a processing unit.
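The napkin math can be sanity-checked in a few lines, using only the paper's reported ~20 ns switching time and >10^4-cycle endurance:

```python
SWITCH_TIME_NS = 20        # reported ~20 ns switching time
ENDURANCE_CYCLES = 10_000  # reported endurance of >10^4 cycles

freq_hz = 1 / (SWITCH_TIME_NS * 1e-9)                 # 1 / 20 ns = 50 MHz
lifetime_s = ENDURANCE_CYCLES * SWITCH_TIME_NS * 1e-9  # worst case: 200 us

print(f"Switching frequency: {freq_hz / 1e6:.0f} MHz")
print(f"Full-speed lifetime: {lifetime_s * 1e6:.0f} microseconds")
```

Two hundred microseconds of continuous full-speed switching is the pessimistic extreme, of course; real workloads would not hammer every cell on every cycle, but the gap to processor-grade endurance is clear.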

But for storage? Well, that's where the USB stick that's "10 to 100 times denser" in terms of memory capacity comes in. These synapse devices can access far more intermediate voltage states than even the densest NAND technology in today's roomiest USB sticks, by a factor of 10 to 100.

Who wouldn't love to have a 10 terabyte or even 100 terabyte "USB 7" stick in their hands?

There's some work to be done in terms of endurance and switching speed of the barium bridges, but it seems like the design is already an enticing proof of concept. Better yet, the semiconductor industry already works with hafnium oxide, so there are fewer tooling and logistics nightmares to fight through.

But here's a particularly ingenious product possibility: imagine the technology improves to the point that it can be fabricated and used to build an AMD or Nvidia GPU (which these days operate at around the 2 GHz mark). There's a world where that graphics card ships in a factory reset state where it operates entirely as memory (now imagine a graphics card with 10 TB of it, the same as our hypothetical USB stick).

Imagine a world where what AMD and Nvidia offered were essentially programmable GPUs, with continuous-range-based GPU dies product-stacked by maximum storage capability (remember: 10 to 100 times denser than current USB sticks). If you're an AI aficionado attempting to build your own Large Language Model (LLM), you could program your GPU so that just the right number of these synapse devices, these neuromorphic transistors, run processing functions. There's no telling how many trillions of parameters models will eventually reach as their complexity increases, so memory will grow increasingly important.

Being able to dictate whether the transistors in your graphics card are used as memory or as eye-candy amplifiers to turn graphics settings up to eleven would be entirely up to the end user, from casual gamer to High Performance Computing (HPC) installer. Even if that meant a measured decay in the longevity of parts of our chips.

We're always upgrading them anyway, aren't we?

But let's not get ahead of ourselves. Even though this isn't as dangerous an issue as AI development and its regulation, there's little to be gained in dreaming so far ahead. Like all technology, it'll come when it's ready. If it ever is.

Francisco Pires

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.


  • jeremyj_83: I'd take a cheap way to have 64GB or more RAM in my computer. It would be even more amazing to have cheap 1TB DIMMs in the server world.
  • Kamen Rider Blade: Given its limited switching endurance of >10^4 cycles = 10,000 cycles, it's best used as a replacement for NAND flash. Can it get to the cheap costs that NAND flash is currently at? How long can it hold data in an off-line state? Many people would be happy with 10,000 cycles of endurance given how abysmal QLC is right now.
  • jeremyj_83: Even with 1k cycles that is still plenty for even data center drives. With the increase in storage amount you get a big increase in endurance. The Solidigm 7.68TB QLC drive has a 5.9PB endurance. https://www.servethehome.com/solidigm-has-a-61-44tb-ssd-coming-this-quarter/
  • gg83: It's the compute on memory that I'm most excited about. Merging the two is for sure the future. How much cache is being slapped on top of AMD chips now? Might as well build a tech that combines the two, right? But this seems to be an "either-or" process/memory tech, huh?
  • Kamen Rider Blade: Imagine how much nicer it would be at 10k cycles, it'd be like the old days of SLC/MLC NAND flash, but with much better bit density.
  • jeremyj_83: Tell me, on the desktop will you notice any difference from 10PB of write endurance vs 1PB? No. These new QLC drives probably have more write endurance than the old 80GB SLC drives from 2010, even with fewer write cycles.


A Study on Modeling and Optimization of Memory Systems

Jason Liu, Pedro Espina & Xian-He Sun

  • Regular Paper
  • Published: 30 January 2021
  • Volume 36, pages 71–89 (2021)

Accesses Per Cycle (APC), Concurrent Average Memory Access Time (C-AMAT), and Layered Performance Matching (LPM) are three memory performance models that consider both data locality and memory access concurrency. The APC model measures the throughput of a memory architecture and therefore reflects the quality of service (QoS) of a memory system. The C-AMAT model provides a recursive expression for the memory access delay and therefore can be used for identifying the potential bottlenecks in a memory hierarchy. The LPM method transforms a global memory system optimization into localized optimizations at each memory layer by matching the data access demands of the applications with the underlying memory system design. These three models have been proposed separately through prior efforts. This paper reexamines the three models under one coherent mathematical framework. More specifically, we present a new memory-centric view of data accesses. We divide the memory cycles at each memory layer into four distinct categories and use them to recursively define the memory access latency and concurrency along the memory hierarchy. This new perspective offers new insights with a clear formulation of the memory performance considering both locality and concurrency. Consequently, the performance model can be easily understood and applied in engineering practices. As such, the memory-centric approach helps establish a unified mathematical foundation for model-driven performance analysis and optimization of contemporary and future memory systems.
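For readers unfamiliar with the models named in the abstract, the classic AMAT formula and its concurrency-aware C-AMAT extension can be sketched in a few lines. The cycle counts in the example are invented for illustration; the C-AMAT expression follows Sun and Wang's published definition, with H the hit time, C_H the hit concurrency, pMR and pAMP the pure miss rate and penalty, and C_M the pure miss concurrency:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Classic average memory access time: AMAT = H + MR * AMP."""
    return hit_time + miss_rate * miss_penalty

def c_amat(hit_time, hit_conc, pure_miss_rate, pure_miss_penalty, miss_conc):
    """Concurrent AMAT: C-AMAT = H/C_H + pMR * pAMP/C_M.
    Overlapped accesses divide each latency term by its measured concurrency."""
    return hit_time / hit_conc + pure_miss_rate * pure_miss_penalty / miss_conc

# Illustrative numbers (in cycles), not measurements from the paper:
print(amat(1, 0.05, 100))          # 6.0 cycles with no concurrency credited
print(c_amat(1, 4, 0.02, 100, 2))  # 1.25 cycles once overlap is credited
```

The point of the comparison is that a hierarchy which looks slow under plain AMAT can be effectively fast once concurrent, overlapped accesses are accounted for, which is exactly the gap these models formalize.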



Acknowledgments

The authors would like to thank the reviewers for their constructive comments and suggestions.

Author information

Authors and affiliations.

School of Computing and Information Sciences, Florida International University, Miami, FL, 33199, USA

Jason Liu & Pedro Espina

Department of Computer Science, Illinois Institute of Technology, Chicago, IL, 60616, USA

Xian-He Sun


Corresponding author

Correspondence to Jason Liu .



About this article

Liu, J., Espina, P. & Sun, XH. A Study on Modeling and Optimization of Memory Systems. J. Comput. Sci. Technol. 36 , 71–89 (2021). https://doi.org/10.1007/s11390-021-0771-8


Received : 02 July 2020

Accepted : 19 November 2020

Published : 30 January 2021

Issue Date : January 2021

DOI : https://doi.org/10.1007/s11390-021-0771-8


  • performance modeling
  • performance optimization
  • memory architecture
  • memory hierarchy
  • concurrent average memory access time

Published: 24 September 2019

Focus on learning and memory

Nature Neuroscience volume  22 ,  page 1535 ( 2019 ) Cite this article


In this special issue of Nature Neuroscience , we feature an assortment of reviews and perspectives that explore the topic of learning and memory.

Learning new information and skills, storing this knowledge, and retrieving, modifying or forgetting these memories over time are critical for flexibly responding to a changing environment. How these processes occur has fascinated philosophers, psychologists, and neuroscientists for generations, and the question continues to inspire research encompassing diverse approaches. In this special issue, Nature Neuroscience presents a collection of reviews and perspectives that reflects the breadth and vibrancy of this field. Many of these pieces touch on topics that have animated decades of investigation, including the roles of synaptic plasticity, adult neurogenesis, neuromodulation, and sleep in learning and memory. Yet recently developed technologies continue to provide novel insights in these areas, leading to the updated views presented here.

Synaptic plasticity, such as long-term potentiation and depression, remains the prevailing cellular model for learning and memory. While many presume that these processes are engaged by learning and mediate lasting changes in behavior, this link has yet to be conclusively demonstrated in vivo. Humeau and Choquet ( https://doi.org/10.1038/s41593-019-0480-6 ) outline the latest tools that can be used to visualize and manipulate synaptic activity and signaling in behaving animals, and they discuss further advances that are needed to help bridge this gap in our understanding.

Neuroscientists have also long been intrigued by the role that the formation of new neurons could play in memory formation and maintenance of new memories. Miller and Sahay ( https://doi.org/10.1038/s41593-019-0484-2 ) integrate recent research on adult hippocampal neurogenesis to present a model of how the maturation of adult-born dentate granule cells contributes to memory indexing and interference.

While the neural mechanisms underlying memory acquisition and consolidation are relatively well-described, less is known about how memories are retrieved. Frankland, Josselyn, and Köhler ( https://doi.org/10.1038/s41593-019-0493-1 ) discuss how recent approaches that enable the manipulation of memory-encoding neural ensembles (termed ‘engrams’) have informed our current understanding of retrieval. They highlight the ways in which retrieval success is influenced by retrieval cues and the congruence between encoding and retrieval states. They also discuss important open questions in the field.

External stimuli and internal states can affect various aspects of learning and memory, which is mediated in part by neuromodulatory systems. Likhtik and Johansen ( https://doi.org/10.1038/s41593-019-0503-3 ) detail how acetylcholine, noradrenaline, and dopamine systems participate in fear encoding and extinction. They discuss emergent themes, including how neuromodulation can act throughout the brain or in specifically targeted regions, how it can boost selected neural signals, and how it can tune oscillatory relationships between neural circuits.

The efficacy of memory storage is also influenced by sleep. Klinzing, Niethard, and Born ( https://doi.org/10.1038/s41593-019-0467-3 ) review evidence from rodent and human studies that implicates reactivation of memory ensembles (or ‘replay’), synaptic scaling, and oscillations during sleep in memory consolidation. They also discuss recent findings that suggest that the thalamus coordinates these processes.

Effective learning requires us to identify critical information and ignore extraneous details, all of which varies depending on the task at hand. Yael Niv ( https://doi.org/10.1038/s41593-019-0470-8 ) discusses computational and neural processes involved in the formation of such task representations, how factors such as attention and context affect these representations, and how we use task representations to make decisions.

The ability to issue appropriate outputs in response to neural activity is a critical brain function, and is often disrupted in injury and disease. Maryam Shanechi ( https://doi.org/10.1038/s41593-019-0488-y ) discusses how ‘closed-loop’ brain–machine interfaces (BMIs) have been used to monitor motor impulses and in turn control prosthetic or paralyzed limbs in order to restore function. Furthermore, she discusses how manipulation of BMI parameters can aid the study of learning. Finally, she explores how BMIs could be used in a similar vein to monitor and correct aberrant mood processes in psychiatric disorders.

By highlighting the topic of learning and memory, we honor its importance and centrality in neuroscience, while also celebrating the ways that other disciplines, including psychology, cellular and molecular biology, computer science, and engineering fuel insights in this area. We hope to continue to publish outstanding research in this area, particularly studies that resolve long-standing questions, that develop or leverage new methodologies, and that integrate multiple approaches.


About this article

Cite this article.

Focus on learning and memory. Nat Neurosci 22 , 1535 (2019). https://doi.org/10.1038/s41593-019-0509-x


Published : 24 September 2019

Issue Date : October 2019

DOI : https://doi.org/10.1038/s41593-019-0509-x



Frontiers for Young Minds

How Scientists Use Webcams to Track Human Gaze

Eye tracking is a technology that can record people’s eye movements and tell scientists what people look at on screens or out in the world. Scientists use eye tracking to understand what people notice or remember; marketing researchers who create ads use eye tracking to see what type of ads or products capture people’s attention; and video game designers use eye tracking to see what parts of a game are confusing to players, so designers can fix the game. Eye-tracking equipment can be expensive and time consuming for researchers to use, so is there another way to record eye movements without buying an eye tracker? There is! Computer scientists can use a computer-based method called machine learning to turn an everyday webcam into an eye tracker. They can even do this with mobile phones! In this article, you will learn about how eye trackers work and the advantages and disadvantages of using webcams to track eyes.

Eyes are Windows to the Mind

Have you ever had a conversation with a friend and noticed your friend’s eyes were no longer looking at you but were suddenly looking behind you? What did you do? You probably turned around to see what your friend was looking at. This illustrates that eye movements tell us where people are paying attention. Scientists measure eye movements to understand what people remember and pay attention to, how people read, and even to screen for certain disorders. An eye tracker is a camera that takes pictures of a person’s eyes [ 1 ]. Eye trackers study information from these pictures (like the shape of the pupils) to pinpoint where a person is looking. These cameras take hundreds or even thousands of pictures each second! The large number of eye pictures allows eye trackers to be very exact in pinpointing where and when a person looks at something.

If an eye tracker was recording your eye movements while you watched a video, a scientist could use your eye movements to understand what you were paying attention to on the screen and for how long. For example, an eye tracker could detect your fixations : when your eyes seem like they have stopped moving to look at something. Longer fixations (like when you stare at something) might mean that you are really focused on a character in the video, while shorter and frequent fixations may mean you are either distracted by some other characters or objects, or that you are having trouble understanding what is happening on the screen. The tracker may also detect that your eyes follow the movement of the characters without you even noticing ( Figure 1 ). The large, sweeping movements that your eyes make between fixations are called saccades (for more information about eye movements, see this Frontiers for Young Minds article ).
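Fixation detection of the kind described above is often done with a dispersion threshold: if the gaze stays within a small region for long enough, count it as one fixation. Here is a toy sketch in that spirit (the thresholds and the I-DT-style windowing are simplified assumptions, not a production algorithm):

```python
def detect_fixations(samples, dispersion_px=35, min_duration=5):
    """Toy dispersion-threshold fixation detector.

    `samples` is a list of (x, y) gaze points at a fixed sampling rate.
    A window whose x-spread plus y-spread stays under `dispersion_px` for at
    least `min_duration` samples counts as one fixation (returned as centroid).
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i + min_duration
        if j > len(samples):
            break  # not enough samples left to form a fixation
        xs = [p[0] for p in samples[i:j]]
        ys = [p[1] for p in samples[i:j]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= dispersion_px:
            # grow the window while the dispersion stays under the threshold
            while j < len(samples):
                xs.append(samples[j][0]); ys.append(samples[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
                    xs.pop(); ys.pop()
                    break
                j += 1
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1  # gaze still moving (a saccade); slide the window forward
    return fixations

# Two stable gaze clusters separated by a jump -> two fixations.
trace = [(100, 100)] * 8 + [(300, 250)] * 8
print(detect_fixations(trace))  # [(100.0, 100.0), (300.0, 250.0)]
```

The jump between the two clusters is what a real tracker would label a saccade; everything inside each cluster collapses into one fixation centroid.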

Figure 1 - A tablet shows a video with eye scan paths on it. A scan path refers to the path that the eyes take when a person is looking at something. The large circles represent fixations, where the person’s eyes seem to stop, and the lines show the saccades that the person’s eyes took between fixations. What parts of this video did the person look at?

Teaching Computers to Predict Gaze Location

In the lab, scientists use special eye-tracking equipment that is extremely good at figuring out where a person’s eyes are looking on a screen, which is called gaze location ( Figure 2 ). Even though eye trackers are excellent tools, they have some challenges. First, eye-tracking equipment can be very expensive, so not every scientist who wants to research eye movements can purchase the equipment for their laboratory. Also, eye trackers can only measure eye movements in-person and with one person at a time. This means research that requires lots of people can take a long time to conduct. It can be challenging to find people to participate in research when participants have to go to a laboratory to do so.

Figure 2 - (A) A participant works on a computer with an eye-tracking system. The eye-tracking system uses a lot of technical equipment and requires the participant to keep her head still on a chin rest. All of this equipment makes the system very accurate in figuring out where the participant is looking on the computer screen. (B) A person works on a laptop with a built-in webcam. The webcam does not require as much equipment, and the participant can sit comfortably and is free to move her head.

These challenges in using eye-tracking equipment can be overcome by using webcams to track eyes. Webcams are in most common personal devices (like phones or laptops), making it easy for scientists to reach a diverse group of people, without participants needing to come to a lab. Webcams are also much less expensive than eye-tracking equipment. Scientists could use webcams to collect eye-movement data remotely, which could save time and money [ 2 ]. Webcams were not designed to track eyes, so how do scientists get eye-movement data from them? There are several ways to use webcams as eye trackers, but one popular way is with machine learning [ 3 ].

Machine learning is a way for computers to use data (like pictures or numbers) and a set of mathematical calculations to learn from experience and find patterns in the world. Using machine learning, computers can learn from lots of pictures of people’s faces. When you are playing with your friends, have you ever noticed where they were looking, like at a cool toy or a yummy snack? You use clues to figure out where your friend is looking, like their eye movements, how their head is turned, or how close they are to something. Computers can do something similar. They look at thousands of pictures of people’s faces and try to find patterns in those pictures, just like your brain finds patterns in your friends’ actions. Computers use these patterns to guess where someone might be looking when they look at a face, for instance. Scientists have improved machine learning to make more accurate predictions of where a person is looking by using other helpful information like eye and face landmarks that point out edges on a face ( Figure 3 ); depth information, like how far away a person is from the webcam; and even information from the scene on the screen [ 4 ].
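A heavily simplified sketch of this idea, assuming the facial landmarks have already been extracted as coordinate vectors: real webcam eye trackers use far richer models (often deep networks), but even a plain least-squares fit can "find the pattern" linking landmark data to gaze locations. All of the data here is synthetic and for illustration only.

```python
import numpy as np

# Sketch of learning a gaze predictor from facial landmarks.
# Each "face" is a vector of landmark (x, y) coordinates, and the target
# is the (x, y) gaze location on the screen. Data are synthetic: we invent
# a hidden linear relation, then recover it from noisy examples.

rng = np.random.default_rng(0)
n_samples, n_landmarks = 200, 10

X = rng.normal(size=(n_samples, 2 * n_landmarks))        # landmark coordinates
true_W = rng.normal(size=(2 * n_landmarks, 2))           # hidden relation
Y = X @ true_W + rng.normal(scale=0.01, size=(n_samples, 2))  # gaze targets

# "Training": find the weights that best map landmarks to gaze locations.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Prediction": estimate the gaze location for a new face.
new_face = rng.normal(size=(1, 2 * n_landmarks))
predicted_gaze = new_face @ W
print(predicted_gaze.shape)  # (1, 2): an (x, y) point on the screen
```

With enough examples, the learned weights closely match the hidden relation, which is the sense in which the computer has "found the pattern".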

Figure 3 - Webcam images with facial landmarks. The dots (landmarks) on this woman’s face are on important edges and corners of the face, such as her jaw, mouth, eyebrows, and importantly, her eyes. Machine learning can use landmarks to make better gaze-location predictions from webcam images like these.

Challenges with Webcam Eye Tracking

Though webcam eye tracking can help scientists make conclusions about peoples’ gaze locations for little cost, it is far from perfect. Webcam eye tracking does not have great precision or accuracy in saying where the eyes are really looking. Compared to a laboratory eye tracker, webcam eye tracking is not very good at separating types of eye movements from each other. This is because the pictures taken on a webcam are of lower quality than those on a laboratory tracker. Also, the frame rates (how quickly cameras can take pictures) are very different. A webcam can take around 30 pictures per second. While that may seem like a lot, laboratory eye trackers can take hundreds or even thousands of images per second! Taking fewer pictures per second means that the webcam cannot capture certain types of eye movements that happen very quickly.
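The frame-rate gap can be made concrete with a little arithmetic. Assuming a short saccade lasts about 30 ms (an illustrative figure, not from this article), a 30-frames-per-second webcam may capture no frames at all while it happens, whereas a 1000 Hz laboratory tracker captures dozens:

```python
# How many camera frames land inside a brief eye movement?

def frames_during(event_ms, rate_hz):
    """Number of whole frame intervals that fit inside an event."""
    frame_interval_ms = 1000.0 / rate_hz  # time between frames
    return int(event_ms // frame_interval_ms)

saccade_ms = 30  # illustrative duration of a short saccade
print(frames_during(saccade_ms, 30))    # webcam: 0 (frames are ~33 ms apart)
print(frames_during(saccade_ms, 1000))  # lab tracker: 30
```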

Scientists can use webcams to track the general pattern of eye movements, but the measurements are not exact for finer eye movements. When someone wants to track eye movements to large characters and scenes in a video or an ad, low precision might not be a big deal. However, when scientists are doing experiments, they need better precision for tracking small or fast eye movements, like those that happen during reading or searching for small objects in a scene. For instance, say that you are focused on a person talking in a video, then you move your gaze to see an animal moving in the background just behind the person, and then you shift your eyes back to the person talking. Those small shifts in gaze may not be detected by webcam eye tracking. Also, think about where and how you normally watch videos, browse the internet, or use a camera. Are you in the dark, and maybe sometimes moving around? Because webcams have lower image quality compared to laboratory eye trackers, it is ideal for people to be in well-lit rooms and to sit still while their eyes are being tracked. It is not always possible to make sure people are doing these things while researchers collect webcam images remotely.

Looking Ahead: The Future of Eye Tracking

Webcam eye tracking can be a cost-effective and time-saving approach for researchers who want to study eye movements. However, there are limitations in using webcams for eye tracking, as they are not as accurate as laboratory eye trackers at predicting where someone is looking. Scientists are working to improve webcam eye-tracking methods, such as by using machine learning, so they can more accurately predict eye movements using images from webcams. This work is important because it helps make eye-tracking technology easy to use for everyone, allowing scientists to learn more about how we see and interact with the world around us, even from the comfort of our own homes.

Eye Tracker : ↑ Technology that can record people’s eye movements and tell scientists what participants are looking at and for how long.

Fixation : ↑ The time between large eye movements when the eyes seem like they have stopped to look at something.

Saccade : ↑ A large, sweeping movement that your eyes make between fixations.

Machine Learning : ↑ A way of analyzing data that allows computers to learn from experience.

Landmarks : ↑ Marks that help a computer understand where edges of important parts of a face are in a picture, like eye corners or the chin.

Precision : ↑ Accuracy, or the degree to which the tracking system is correct in saying where someone is looking.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Written informed consent was obtained from the individual(s) for the publication of any identifiable images or data included in this article.

[1] ↑ Robbins, A., and Hout, M. C. 2015. Look into my eyes. Sci. Am. Mind 26:54–61. doi: 10.1038/scientificamericanmind0115-54

[2] ↑ Papoutsaki, A., Laskey, J., and Huang, J. 2017. “Searchgazer: Webcam eye tracking for remote studies of web search”, in Proceedings of the 2017 Conference on Conference Human Information Interaction and Retrieval (New York, NY: ACM), 17–26.

[3] ↑ Valliappan, N., Dai, N., Steinberg, E., He, J., Rogers, K., Ramachandran, V., et al. 2020. Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nat. Commun. 11:4553. doi: 10.1038/s41467-020-18360-5

[4] ↑ Park, S., Aksan, E., Zhang, X., and Hilliges, O. 2020. “Towards end-to-end video-based eye-tracking”, in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16 (Berlin: Springer International Publishing), 747–63.


Super Micro Computer (SMCI) Unveils IoT and Embedded Systems

Super Micro Computer (SMCI) introduced new IoT and embedded systems, including the SYS-E100, SYS-E102, SYS-E111AD and an updated SYS-E403, utilizing Intel’s (INTC) latest Atom and Core central processing units (CPUs). In addition, Super Micro Computer launched the SYS-E100-14AM and SYS-E102-14AM servers, which feature Intel Atom x7000RE processors, offering low-power, high-efficiency compute performance. These ultra-compact systems offer a 16GB DDR5 Small Outline Dual In-line Memory Module (SODIMM). They also offer dual 2.5 GbE LAN ports, USB 3.2 ports, and an M.2 B/E/M-key slot. Further, the company unveiled the SYS-E100 fanless system, which operates across a temperature range of -20°C to 70°C, eliminating moving parts and improving dust resistance. All models work with a variable power supply from 9-36V, enabling easy integration in industrial environments. SMCI is expected to gain solid traction across edge applications on the back of these new devices.

Super Micro Computer, Inc. Price and Consensus chart

Expanding Server and Storage Systems Portfolio

Apart from the new launches, the company introduced a diverse portfolio of infrastructure solutions for 5G and telecom workloads, using 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and NVIDIA’s (NVDA) Grace Hopper Superchip, enhancing performance and efficiency. This includes a high-density ARS-111GL-NHR system using the NVIDIA Grace Hopper Superchip. This compact 1U chassis features an integrated CPU, an H100 GPU, NVLink interconnect, 576GB of coherent memory, and two PCIe slots for NVIDIA BlueField-3 or ConnectX-7. The portfolio also includes the SYS-211E, an ultra-short-depth 5G edge platform with a 5th Gen Intel Xeon processor that enhances performance per watt by 36%. It allows efficient 5G network operation, reduces operating costs, and supports public telecom and private networks. Further, the company unveiled new AI systems, including NVIDIA HGX B100 8-GPU and HGX B200 8-GPU systems and a 4U NVIDIA HGX B200 8-GPU liquid-cooled system, among others. These products are built upon Supermicro's proven HGX and MGX system architectures, optimized for the new capabilities of NVIDIA Blackwell GPUs. Additionally, Super Micro Computer expanded its server portfolio with the addition of Supermicro X13 systems. These utilize 5th Gen Intel Xeon processors, offering enhanced security, a higher core count, and increased performance, with a 36% higher average performance per watt across workloads.

Wrapping Up

All the above-mentioned endeavors will allow the company to capitalize on growth opportunities present in the server storage market. Per a Technavio report, the global server storage market is expected to grow by $87.7 billion, implying a CAGR of 27.1% during the forecast period of 2023-2028. Solidifying prospects across the server storage market is expected to benefit the Server and Storage Systems segment, which remains the key growth catalyst for the company. Its shares have gained 226.6% in the year-to-date period compared with the Zacks Computer & Technology sector’s growth of 11.4%. The strengthening Server and Storage Systems segment is expected to aid the company's overall financial performance in the near term. The Zacks Consensus Estimate for 2024 total revenues stands at $14.76 billion, indicating a sharp rise from the year-ago figure of $7.12 billion. The consensus mark for earnings is pegged at $5.97 per share, suggesting a jump from the prior-year figure of $1.63 per share.

Zacks Rank & Stocks to Consider

Currently, SMCI carries a Zacks Rank #3 (Hold). A better-ranked stock in the broader technology sector is Applied Materials (AMAT), which carries a Zacks Rank #2 (Buy) at present. You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here. Shares of Applied Materials have gained 28.8% in the year-to-date period. The long-term earnings growth rate for AMAT is 16.85%.


4 Duke CS Students Receive 2024 NSF Graduate Research Fellowships

April 10, 2024

Four Duke CS students received NSF Graduate Research Fellowships :

  • Jonathan Donnelly worked with Cynthia Rudin and will pursue a PhD in Machine Learning at Duke.
  • Jabari Kwesi worked with Pardis Emami-Naeini and will pursue a PhD in Human Computer Interaction at Duke.
  • Megan Richards is a recent Duke ECE-CS grad who plans to pursue a PhD in ML. She worked with Mark Sendak at DIHI and Ricardo Henao of Duke ECE.
  • Ruoyu (Roy) Xie worked with Bhuwan Dhingra and will pursue a PhD in Natural Language Processing at Duke.

Congratulations to all!

NC State ECE

Doctoral Student Receives NSF Graduate Research Fellowship

Congratulations to Cole Dickerson, just named a 2024 recipient of the NSF Graduate Research Fellowship, supporting his work on unmanned aerial platforms with AERPAW.


Cole Dickerson, an electrical engineering Ph.D. student advised by Ismail Guvenc, professor of electrical and computer engineering, has been awarded a prestigious Graduate Research Fellowship from the National Science Foundation.

The purpose of the NSF Graduate Research Fellowship Program (GRFP) is to help ensure the quality, vitality, and diversity of the scientific and engineering workforce of the United States. A goal of the program is to broaden participation of the full spectrum of diverse talents in STEM. The five-year fellowship provides three years of financial support.

Dickerson is part of the  AERPAW Initiative  under Guvenc, his doctoral advisor, with his research focusing on the convergence of 5G-wireless technology and autonomous drones. He graduated as a Brinkley-Lane Scholar from East Carolina University with a bachelor’s degree in electrical engineering and a minor in mathematics. He has co-authored three published research papers in electrical and ocean engineering conference proceedings.

“I’m very grateful to have had wonderful advisors here at NC State and during my undergraduate career. Dr. Ismail Guvenc, who is my Ph.D. advisor, and Dr. Dror Baron both encouraged me to apply for the fellowship and helped me through the revision process,” said Dickerson. “Dr. Tarek Abdel-Salam and Dr. Zhen Zhu at East Carolina University wrote wonderful letters for me and helped me build a CV that was competitive for this award. Winning this fellowship wouldn’t have been possible without all of their help and support. I am also very appreciative of the NSF for investing in me and, by extension, the AERPAW group.”

Based at NC State, AERPAW—Aerial Experimentation and Research Platform for Advanced Wireless—is the first wireless research platform to study the convergence of 5G technology and autonomous drones. AERPAW is funded by a $24 million grant, awarded by the PAWR Project Office on behalf of the National Science Foundation, to develop an advanced wireless research platform, led by NC State, in partnership with the Wireless Research Center of North Carolina, Mississippi State University and Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill; additional partners include Town of Cary, City of Raleigh, North Carolina Department of Transportation, Purdue University, University of South Carolina, and many other academic, industry and municipal partners.

Unmanned aerial vehicles (UAVs) have garnered significant attention and enthusiasm for their diverse applications such as delivery services, agricultural monitoring, establishment of aerial base stations, search-and-rescue missions, and enforcement of wireless spectrum regulations. With the increasing proliferation of advanced UAV technology, airspace congestion is becoming a pressing concern, necessitating the establishment of a robust air traffic management system.

In response to this challenge, various government entities, industry leaders, and drone manufacturers are collaborating to develop a dependable and secure UAV Traffic Management (UTM) system. Amidst these efforts, Dickerson aims to investigate the integration of search-and-rescue operations and spectrum monitoring into the UTM framework.

Both search-and-rescue missions and spectrum enforcement rely on signal source search and localization capabilities, wherein UAVs are tasked with pinpointing signals from mobile phones of missing individuals or identifying signal jammers, respectively. Leveraging the advantages of higher altitude signal capture and the autonomous 3D maneuverability of UAVs, this approach has demonstrated greater efficacy compared to terrestrial methods.

His research encompasses three primary goals: Firstly, to conduct foundational research aimed at refining algorithms to enhance the speed and accuracy of signal localization in search-and-rescue and spectrum monitoring scenarios. Secondly, to seamlessly integrate these localization systems into the broader UTM infrastructure. Lastly, to validate and assess the proposed concepts through deployment and testing within the real-world wireless and UAV AERPAW testbed hosted at NC State.


The Mind and Brain of Short-Term Memory

The past 10 years have brought near-revolutionary changes in psychological theories about short-term memory, with similarly great advances in the neurosciences. Here, we critically examine the major psychological theories (the “mind”) of short-term memory and how they relate to evidence about underlying brain mechanisms. We focus on three features that must be addressed by any satisfactory theory of short-term memory. First, we examine the evidence for the architecture of short-term memory, with special attention to questions of capacity and how—or whether—short-term memory can be separated from long-term memory. Second, we ask how the components of that architecture enact processes of encoding, maintenance, and retrieval. Third, we describe the debate over the cause of forgetting from short-term memory: whether interference or decay is responsible. We close with a conceptual model tracing the representation of a single item through a short-term memory task, describing the biological mechanisms that might support psychological processes on a moment-by-moment basis as an item is encoded, maintained over a delay with some forgetting, and ultimately retrieved.

INTRODUCTION

Mentally add 324 and 468. Follow the instructions to complete any form for your federal income taxes. Read and comprehend this sentence.

What are the features of the memory system that allows us to complete these and other complex tasks? Consider the opening example. First, you must create a temporary representation in memory for the two numbers. This representation needs to survive for several seconds to complete the task. You must then allocate your attention to different portions of the representation so that you can apply the rules of arithmetic required by the task. By one strategy, you need to focus attention on the “tens” digits (“2” and “6”) and mitigate interference from the other digits (e.g., “3” and “4”) and from partial results of previous operations (e.g., the “12” that results from adding “4” and “8”). While attending to local portions of the problem, you must also keep accessible the parts of the problem that are not in the current focus of attention (e.g., that you now have the units digit “2” as a portion of the final answer). These tasks implicate a short-term memory (STM). In fact, there is hardly a task that can be completed without the involvement of STM, making it a critical component of cognition.
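The opening example can be traced step by step. The sketch below (entirely illustrative, not part of the review) follows the columnwise strategy just described, recording the partial results, like the “12” and the carry, that STM must hold at each moment:

```python
# Digit-by-digit addition of 324 + 468, mirroring the strategy in the text.
# The "trace" collects the partial results that short-term memory must
# hold: each column sum and the carry passed to the next column.
# For simplicity, both numbers are assumed to have the same number of digits.

def add_mentally(a, b):
    """Columnwise addition; returns the answer and the STM 'trace'."""
    trace = []
    digits_a = [int(d) for d in str(a)][::-1]   # units digit first
    digits_b = [int(d) for d in str(b)][::-1]
    carry, result_digits = 0, []
    for da, db in zip(digits_a, digits_b):
        column_sum = da + db + carry            # e.g. 4 + 8 = 12
        result_digits.append(column_sum % 10)   # keep the units digit
        carry = column_sum // 10                # hold the carry in mind
        trace.append((column_sum, carry))
    if carry:
        result_digits.append(carry)
    answer = int("".join(str(d) for d in reversed(result_digits)))
    return answer, trace

answer, trace = add_mentally(324, 468)
print(answer)   # 792
print(trace)    # [(12, 1), (9, 0), (7, 0)]
```

Every tuple in the trace is an item that must be encoded, briefly maintained, and then retrieved or discarded, which is exactly the workload the review attributes to STM.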

Our review relates the psychological phenomena of STM to their underlying neural mechanisms. The review is motivated by three questions that any adequate account of STM must address:

1. What is its structure?

A proper theory must describe an architecture for short-term storage. Candidate components of this architecture include storage buffers, a moving and varying focus of attention, or traces with differing levels of activation. In all cases, it is essential to provide a mechanism that allows a representation to exist beyond the sensory stimulation that caused it or the process that retrieved the representation from long-term memory (LTM). This architecture should be clear about its psychological constructs. Furthermore, being clear about the neural mechanisms that implement those constructs will aid in development of psychological theory, as we illustrate below.

2. What processes operate on the stored information?

A proper theory must articulate the processes that create and operate on representations. Candidate processes include encoding and maintenance operations, rehearsal, shifts of attention from one part of the representation to another, and retrieval mechanisms. Some of these processes are often classified as executive functions.

3. What causes forgetting?

A complete theory of STM must account for the facts of forgetting. Traditionally, the two leading contending accounts of forgetting have relied on the concepts of decay and interference. We review the behavioral and neurophysiological evidence that has traditionally been brought to the table to distinguish decay and interference accounts, and we suggest a possible mechanism for short-term forgetting.

Most models of STM fall between two extremes: Multistore models view STM and LTM as architecturally separate systems that rely on distinct representations. By contrast, according to unitary-store models, STM and LTM rely largely on the same representations, but differ by ( a ) the level of activation of these representations and ( b ) some of the processes that normally act upon them. We focus on the distinctions drawn by these theories as we examine the evidence concerning the three questions that motivate our review. In this discussion, we assume that a representation in memory consists of a bundle of features that define a memorandum, including the context in which that memorandum was encountered.

WHAT IS THE STRUCTURE OF SHORT-TERM MEMORY?

Multistore models that differentiate short- and long-term memory.

In his Principles of Psychology , William James (1890) articulated the view that short-term (“primary”) memory is qualitatively different from long-term (“secondary”) memory (see also Hebb 1949 ). The most influential successor to this view is the model of STM developed by Baddeley and colleagues (e.g., Baddeley 1986 , 1992 ; Baddeley & Hitch 1974 ; Repov & Baddeley 2006 ). For the years 1980 to 2006, of the 16,154 papers that cited “working memory” in their titles or abstracts, fully 7339 included citations to Alan Baddeley.

According to Baddeley’s model, there are separate buffers for different forms of information. These buffers, in turn, are separate from LTM. A verbal buffer, the phonological loop, is assumed to hold information that can be rehearsed verbally (e.g., letters, digits). A visuospatial sketchpad is assumed to maintain visual information and can be further fractionated into visual/object and spatial stores ( Repov & Baddeley 2006 , Smith et al. 1995 ). An episodic buffer that draws on the other buffers and LTM has been added to account for the retention of multimodal information ( Baddeley 2000 ). In addition to the storage buffers described above, a central executive is proposed to organize the interplay between the various buffers and LTM and is implicated in controlled processing.

In short, the multistore model includes several distinctions: ( a ) STM is distinct from LTM, ( b ) STM can be stratified into different informational buffers based on information type, and ( c ) storage and executive processes are distinguishable. Evidence in support of these claims has relied on behavioral interference studies, neuropsychological studies, and neuroimaging data.

Evidence for the distinction between short- and long-term memory

Studies of brain-injured patients who show a deficit in STM but not LTM, or vice versa, lead to the implication that STM and LTM are separate systems. Patients with parietal and temporal lobe damage show impaired short-term phonological capabilities but intact LTM (Shallice & Warrington 1970, Vallar & Papagno 2002). Conversely, it is often claimed that patients with medial temporal lobe (MTL) damage demonstrate impaired LTM but preserved STM (e.g., Baddeley & Warrington 1970, Scoville & Milner 1957; we reinterpret these effects below).

Neuroimaging data from healthy subjects have yielded mixed results, however. A meta-analysis comparing regions activated during verbal LTM and STM tasks indicated a great deal of overlap in neural activation for the tasks in the frontal and parietal lobes ( Cabeza et al. 2002 , Cabeza & Nyberg 2000 ). Three studies that directly compared LTM and STM in the same subjects did reveal some regions selective for each memory system ( Braver et al. 2001 , Cabeza et al. 2002 , Talmi et al. 2005 ). Yet, of these studies, only one found that the MTL was uniquely activated for LTM ( Talmi et al. 2005 ). What might account for the discrepancy between the neuropsychological and neuroimaging data?

One possibility is that neuroimaging tasks of STM often use longer retention intervals than those employed for neuropsychological tasks, making the STM tasks more similar to LTM tasks. In fact, several studies have shown that the MTL is important when retention intervals are longer than a few seconds ( Buffalo et al. 1998 , Cabeza et al. 2002 , Holdstock et al. 1995 , Owen et al. 1995 ). Of the studies that compared STM and LTM in the same subjects, only Talmi et al. (2005) used an STM retention interval shorter than five seconds. This study did find, in fact, that the MTL was uniquely recruited at longer retention intervals, providing support for the earlier neuropsychological work dissociating long- and short-term memory. As we elaborate below, however, there are other possible interpretations, especially with regard to the MTL’s role in memory.

Evidence for separate buffers in short-term memory

The idea that STM can be parceled into information-specific buffers first received support from a series of studies of selective interference (e.g., Brooks 1968 , den Heyer & Barrett 1971 ). These studies relied on the logic that if two tasks use the same processing mechanisms, they should show interfering effects on one another if performed concurrently. This work showed a double dissociation: Verbal tasks interfered with verbal STM but not visual STM, and visual tasks interfered with visual STM but not verbal STM, lending support to the idea of separable memory systems (for reviews, see Baddeley 1986 and Baddeley & Hitch 1974 ).

The advent of neuroimaging has allowed researchers to investigate the neural correlates of the reputed separability of STM buffers. Verbal STM has been shown to rely primarily on left inferior frontal and left parietal cortices, spatial STM on right posterior dorsal frontal and right parietal cortices, and object/visual STM on left inferior frontal, left parietal, and left inferior temporal cortices (e.g., Awh et al. 1996 , Jonides et al. 1993 , Smith & Jonides 1997 ; see review by Wager & Smith 2003 ). Verbal STM shows a marked left hemisphere preference, whereas spatial and object STM can be distinguished mainly by a dorsal versus ventral separation in posterior cortices (consistent with Ungerleider & Haxby 1994 ; see Baddeley 2003 for an account of the function of these regions in the service of STM).

The more recently postulated episodic buffer arose from the need to account for interactions between STM buffers and LTM. For example, the number of words recalled in an STM experiment can be greatly increased if the words form a sentence ( Baddeley et al. 1987 ). This “chunking” together of words to increase short-term capacity relies on additional information from LTM that can be used to integrate the words ( Baddeley 2000 ). Thus, there must be some representational space that allows for the integration of information stored in the phonological loop and LTM. This ability to integrate information from STM and LTM is relatively preserved even when one of these memory systems is damaged ( Baddeley & Wilson 2002 , Baddeley et al. 1987 ). These data provide support for an episodic buffer that is separable from other short-term buffers and from LTM ( Baddeley 2000 , Baddeley & Wilson 2002 ). Although neural evidence about the possible localization of this buffer is thin, there is some suggestion that dorsolateral prefrontal cortex plays a role ( Prabhakaran et al. 2000 , Zhang et al. 2004 ).

Evidence for separate storage and executive processes

Baddeley’s multistore model assumes that a collection of processes act upon the information stored in the various buffers. Jointly termed the “central executive,” these processes are assumed to be separate from the storage buffers and have been associated with the frontal lobes.

Both lesion and neuroimaging data support the distinction between storage and executive processes. For example, patients with frontal damage have intact STM under conditions of low distraction ( D’Esposito & Postle 1999 , 2000 ; Malmo 1942 ). However, when distraction is inserted during a delay interval, thereby requiring executive processes to overcome interference, patients with frontal damage show significant memory deficits ( D’Esposito & Postle 1999 , 2000 ). By contrast, patients with left temporo-parietal damage show deficits in phonological storage, regardless of the effects of interference ( Vallar & Baddeley 1984 , Vallar & Papagno 2002 ).

Consistent with these patterns, a meta-analysis of 60 functional neuroimaging studies indicated that increased demand for executive processing recruits dorsolateral frontal cortex and posterior parietal cortex ( Wager & Smith 2003 ). By contrast, storage processes recruit predominantly posterior areas in primary and secondary association cortex. These results corroborate the evidence from lesion studies and support the distinction between storage and executive processing.

Unitary-Store Models that Combine Short-Term and Long-Term Memory

The multistore models reviewed above combine assumptions about the distinction between short-term and long-term systems, the decomposition of short-term memory into information-specific buffers, and the separation of systems of storage from executive functions. We now consider unitary models that reject the first assumption concerning distinct systems.

Contesting the idea of separate long-term and short-term systems

The key data supporting separable short-term and long-term systems come from neuropsychology. To review, the critical contrast is between patients who show severely impaired LTM with apparently normal STM (e.g., Cave & Squire 1992 , Scoville & Milner 1957 ) and those who show impaired STM with apparently normal LTM (e.g., Shallice & Warrington 1970 ). However, questions have been raised about whether these neuropsychological studies do, in fact, support the claim that STM and LTM are separable. A central question is the role of the medial temporal lobe. It is well established that the MTL is critical for long-term declarative memory formation and retrieval ( Gabrieli et al. 1997 , Squire 1992 ). However, is the MTL also engaged by STM tasks? Much research with amnesic patients showing preserved STM would suggest not, but Ranganath & Blumenfeld (2005) have summarized evidence showing that MTL is engaged in short-term tasks (see also Ranganath & D’Esposito 2005 and Nichols et al. 2006 ).

In particular, there is growing evidence that a critical function of the MTL is to establish representations that involve novel relations. These relations may be among features or items, or between items and their context. By this view, episodic memory is a special case of such relations (e.g., relating a list of words to the experimental context in which the list was recently presented), and the special role of the MTL concerns its binding capabilities, not the timescale on which it operates. STM that is apparently preserved in amnesic patients may thus reflect a preserved ability to maintain and retrieve information that does not require novel relations or binding, in keeping with their preserved retrieval of remote memories consolidated before the amnesia-inducing lesion.

If this view is correct, then amnesic patients should show deficits in situations that require STM for novel relations, which they do (Hannula et al. 2005, Olson et al. 2006b ). They also show STM deficits for novel materials (e.g., Buffalo et al. 1998 , Holdstock et al. 1995 , Olson et al. 1995, 2006a ). As mentioned above, electrophysiological and neuroimaging studies support the claim that the MTL is active in support of short-term memories (e.g., Miyashita & Chang 1988 , Ranganath & D’Esposito 2001 ). Taken together, these findings suggest that the MTL operates in both STM and LTM to create novel representations, including novel bindings of items to context.

Additional evidence for the STM-LTM distinction comes from patients with perisylvian cortical lesions who are often claimed to have selective deficits in STM (e.g., Hanley et al. 1991 , Warrington & Shallice 1969 ). However, these deficits may be substantially perceptual. For example, patients with left perisylvian damage that results in STM deficits also have deficits in phonological processing in general, which suggests a deficit that extends beyond STM per se (e.g., Martin 1993 ).

The architecture of unitary-store models

Our review leads to the conclusion that short- and long-term memory are not architecturally separable systems—at least not in the strong sense of distinct underlying neural systems. Instead, the evidence points to a model in which short-term memories consist of temporary activations of long-term representations. Such unitary models of memory have a long history in cognitive psychology, with early theoretical unification achieved via interference theory ( Postman 1961 , Underwood & Schulz 1960 ). Empirical support came from demonstrations that memories in both the short and long term suffered from proactive interference (e.g., Keppel & Underwood 1962 ).

Perhaps the first formal proposal that short-term memory consists of activated long-term representations was by Atkinson & Shiffrin (1971 ; but see also Hebb 1949 ). The idea fell somewhat out of favor during the hegemony of the Baddeley multistore model, although it was given its first detailed computational treatment by Anderson (1983) . It has recently been revived and greatly developed by Cowan (1988 , 1995 , 2000) , McElree (2001) , Oberauer (2002) , Verhaeghen et al. (2004) , Anderson et al. (2004) , and others. The key assumption is the construct of a very limited focus of attention, although as we elaborate below, there are disagreements regarding the scope of the focus.

One shared assumption of these models is that STM consists of temporary activations of LTM representations or of representations of items that were recently perceived. The models differ from one to another regarding specifics, but Cowan’s model (e.g., Cowan 2000 ) is representative. According to this model, there is only one set of representations of familiar material—the representations in LTM. These representations can vary in strength of activation, where that strength varies as a function of such variables as recency and frequency of occurrence. Representations that have increased strength of activation are more available for retrieval in STM experiments, but they must be retrieved nonetheless to participate in cognitive action. In addition, these representations are subject to forgetting over time. A special but limited set of these representations, however, can be within the focus of attention, where being within the focus makes these representations immediately available for cognitive processing. According to this and similar models, then, STM is functionally seen as consisting of LTM representations that are either in the focus of attention or at a heightened level of activation.
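The functional claims of this representative model can be conveyed with a deliberately simplified sketch. The class name, parameter values, exponential decay rule, and displacement rule below are our own illustrative assumptions, not commitments of Cowan's theory:

```python
# Toy sketch of a Cowan-style unitary store: STM = activated LTM + a limited
# focus of attention. All names and parameters are illustrative.
import math

class UnitaryStore:
    def __init__(self, focus_capacity=4, decay_rate=0.5):
        self.activation = {}              # LTM representations -> activation strength
        self.focus = []                   # limited focus of attention (recent items)
        self.focus_capacity = focus_capacity
        self.decay_rate = decay_rate

    def perceive(self, item):
        """Encoding boosts an item's activation and places it in the focus,
        displacing the oldest focused item if capacity is exceeded."""
        self.activation[item] = self.activation.get(item, 0.0) + 1.0
        if item in self.focus:
            self.focus.remove(item)
        self.focus.append(item)
        if len(self.focus) > self.focus_capacity:
            self.focus.pop(0)             # whole-item displacement

    def tick(self):
        """Time passing: activations outside the focus decay toward dormancy."""
        for item in self.activation:
            if item not in self.focus:
                self.activation[item] *= math.exp(-self.decay_rate)

    def available(self, threshold=0.2):
        """Items retrievable into STM: in the focus, or still highly active."""
        return set(self.focus) | {i for i, a in self.activation.items() if a > threshold}

store = UnitaryStore()
for letter in "ABCDEF":
    store.perceive(letter)
print(store.focus)                        # the last four items encoded
```

On this sketch, "STM" is not a separate box: every item lives in `activation` (the LTM representations), and the short-term character of memory arises from which of those representations are focused or still strongly active.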

These unitary-store models suggest a different interpretation of frontal cortical involvement in STM from multistore models. Early work showing the importance of frontal cortex for STM, particularly that of Fuster and Goldman-Rakic and colleagues, was first seen as support for multistore models (e.g., Funahashi et al. 1989 , Fuster 1973 , Jacobsen 1936 , Wilson et al. 1993 ). For example, single-unit activity in dorsolateral prefrontal cortex regions (principal sulcus, inferior convexity) that was selectively responsive to memoranda during the delay interval was interpreted as evidence that these regions were the storage sites for STM. However, the sustained activation of frontal cortex during the delay period does not necessarily mean that this region is a site of STM storage. Many other regions of neo-cortex also show activation that outlasts the physical presence of a stimulus and provides a possible neural basis for STM representations (see Postle 2006 ). Furthermore, increasing evidence suggests that frontal activations reflect the operation of executive processes [including those needed to keep the representations in the focus of attention; see reviews by Postle (2006) , Ranganath & D’Esposito (2005) , Reuter-Lorenz & Jonides (2007) , and Ruchkin et al. (2003) ]. Modeling work and lesion data provide further support for the idea that the representations used in both STM and LTM are stored in those regions of cortex that are involved in initial perception and encoding, and that frontal activations reflect processes involved in selecting this information for the focus of attention and keeping it there ( Damasio 1989 , McClelland et al. 1995 ).

The principle of posterior storage also allows some degree of reconciliation between multi- and unitary-store models. Posterior regions are clearly differentiated by information type (e.g., auditory, visual, spatial), which could support the information-specific buffers postulated by multistore models. Unitary-store models focus on central capacity limits, irrespective of modality, but they do allow for separate resources ( Cowan 2000 ) or feature components ( Lange & Oberauer 2005 , Oberauer & Kliegl 2006 ) that occur at lower levels of perception and representation. Multi- and unitary-store models thus both converge on the idea of modality-specific representations (or components of those representations) supported by distinct posterior neural systems.

Controversies over Capacity

Regardless of whether one subscribes to multi- or unitary-store models, the issue of how much information is stored in STM has long been a prominent one ( Miller 1956 ). Multistore models explain capacity estimates largely as interplay between the speed with which information can be rehearsed and the speed with which information is forgotten ( Baddeley 1986 , 1992 ; Repov & Baddeley 2006 ). Several studies have measured this limit by demonstrating that approximately two seconds' worth of verbal information can be re-circulated successfully (e.g., Baddeley et al. 1975 ).

Unitary-store models describe capacity as limited by the number of items that can be activated in LTM, which can be thought of as the bandwidth of attention. However, these models differ on what that number or bandwidth might be. Cowan (2000) suggested a limit of approximately four items, based on performance discontinuities such as errorless performance in immediate recall when the number of items is less than four, and sharp increases in errors for larger numbers. (By this view, the classic “seven plus or minus two” is an overestimate because it is based on studies that allowed participants to engage in processes of rehearsal and chunking, and reflected contributions of both the focus and LTM; see also Waugh & Norman 1965 .) At the other extreme are experimental paradigms suggesting that the focus of attention consists of a single item ( Garavan 1998 , McElree 2001 , Verhaeghen & Basak 2007 ). We briefly consider some of the central issues behind current controversies concerning capacity estimates.

Behavioral and neural evidence for the magic number 4

Cowan (2000) has reviewed an impressive array of studies leading to his conclusion that the capacity limit is four items, plus or minus one (see his Table 1). Early behavioral evidence came from studies showing sharp drop-offs in performance at three or four items on short-term retrieval tasks (e.g., Sperling 1960 ). These experiments were vulnerable to the criticism that this limit might reflect output interference occurring during retrieval rather than an actual limit on capacity. However, additional evidence comes from change-detection and other tasks that do not require the serial recall of individual items. For example, Luck & Vogel (1997) presented subjects with 1 to 12 colored squares in an array. After a blank interval of nearly a second, another array of squares was presented, in which one square may have changed color. Subjects were to respond whether the arrays were identical. These experiments and others that avoid the confound of output-interference (e.g., Pashler 1988 ) likewise have yielded capacity estimates of approximately four items.

Electrophysiological and neuroimaging studies also support the idea of a four-item capacity limit. The first such report was by Vogel & Machizawa (2004) , who recorded event-related potentials (ERPs) from subjects as they performed a visual change-detection task. ERP recording shortly after the onset of the retention interval in this task indicated a negative-going wave over parietal and occipital sites that persisted for the duration of the retention interval and was sensitive to the number of items held in memory. Importantly, this signal plateaued when array size reached between three and four items. The amplitude of this activity was strongly correlated with estimates of each subject’s memory capacity and was less pronounced on incorrect than correct trials, suggesting that it was functionally related to performance. Subsequent functional magnetic resonance imaging (fMRI) studies have observed similar load- and accuracy-dependent activations, especially in intraparietal and intraoccipital sulci ( Todd & Marois 2004 , 2005 ). These regions have been implicated by others (e.g., Yantis & Serences 2003 ) in the control of attentional allocation, so it seems plausible that one rate-limiting step in STM capacity has to do with the allocation of attention ( Cowan 2000 ; McElree 1998 , 2001 ; Oberauer 2002 ).

Evidence for more severe limits on focus capacity

Another set of researchers agrees that there is a fixed capacity, but by measuring a combination of response time and accuracy, they contend that the focus of attention is limited to just one item (e.g., Garavan 1998 , McElree 2001 , Verhaeghen & Basak 2007 ). For example, Garavan (1998) required subjects to keep two running counts in STM, one for triangles and one for squares, as shape stimuli appeared one after another in random order. Subjects controlled their own presentation rate, which allowed Garavan to measure the time spent processing each figure before moving on. He found that responses to a figure of one category (e.g., a triangle) that followed a figure from the other category (e.g., a square) were fully 500 milliseconds longer than responses to the second of two figures from the same category (e.g., a triangle followed by another triangle). These findings suggested that attention can be focused on only one internal counter in STM at a time. Switching attention from one counter to another incurred a substantial cost in time. Using a speed-accuracy tradeoff procedure, McElree (1998) came to the same conclusion that the focus of attention contained just one item. He found that the retrieval speed for the last item in a list was substantially faster than for any other item in the list, and that other items were retrieved at comparable rates to each other even though the accuracy of retrieval for these other items varied.

Oberauer (2002) suggested a compromise solution to the “one versus four” debate. In his model, up to four items can be directly accessible, but only one of these items can be in the focus of attention. This model is similar to that of Cowan (2000) , but adds the assumption that an important method of accessing short-term memories is to focus attention on one item, depending on task demands. Thus, in tasks that serially demand attention on several items (such as those of Garavan 1998 or McElree 2001 ), the mechanism that accomplishes this involves changes in the focus of attention among temporarily activated representations in LTM.

Alternatives to capacity limits based on number of items

Attempting to answer the question of how many items may be held in the focus implicitly assumes that items are the appropriate unit for expressing capacity limits. Some reject this basic assumption. For example, Wilken & Ma (2004) demonstrated that a signal-detection account of STM, in which STM capacity is primarily constrained by noise, better fit behavioral data than an item-based fixed-capacity model. Recent data from change-detection tasks suggest that object complexity ( Eng et al. 2005 ) and similarity ( Awh et al. 2007 ) play an important role in determining capacity. Xu & Chun (2006) offer neuroimaging evidence that may reconcile the item-based and complexity accounts: In a change-detection task, they found that activation of inferior intra-parietal sulcus tracked a capacity limit of four, but nearby regions were sensitive to the complexity of the memoranda, as were the behavioral results.
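The flavor of a signal-detection alternative can be conveyed with a toy simulation. The Gaussian noise model, its set-size scaling, and the decision criterion below are illustrative assumptions in the spirit of Wilken & Ma (2004), not their actual model:

```python
# Sketch of a noise-limited account of change detection: every item is
# remembered, but with noise that grows with set size; errors arise from
# noise rather than from an all-or-none item limit. Parameters are invented.
import random

def change_detected(study, test, noise_per_item=0.05, criterion=0.5):
    """Compare noisy memories of the study array against the test array;
    report 'change' if any remembered value differs by more than the criterion."""
    n = len(study)
    noisy_memory = [v + random.gauss(0, noise_per_item * n) for v in study]
    return any(abs(m - t) > criterion for m, t in zip(noisy_memory, test))

random.seed(1)
study   = [0.0, 1.0, 2.0, 3.0]
same    = study                      # no-change trials
changed = [0.0, 1.0, 2.0, 5.0]      # one item changed

hits = sum(change_detected(study, changed) for _ in range(1000)) / 1000
fas  = sum(change_detected(study, same)    for _ in range(1000)) / 1000
print(hits, fas)  # hits exceed false alarms, yet both are noise-driven
```

Because the noise term scales with the number of items, this kind of model produces graded performance declines with set size rather than a sharp discontinuity at a fixed item count, which is the crux of the disagreement with slot-based accounts.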

Other researchers disagree with fixed item-based limits because they have demonstrated that the limit is mutable. Practice may improve subjects’ ability to use processes such as chunking to allow greater functional capacities ( McElree 1998 , Verhaeghen et al. 2004 ; but see Oberauer 2006 ). However, this type of flexibility appears to alter the amount of information that can be compacted into a single representation rather than the total number of representations that can be held in STM ( Miller 1956 ). The data of Verhaeghen et al. (2004 ; see Figure 5 of that paper) suggest that the latter number still approximates four, consistent with Cowan’s claims.

Building on these findings, we suggest a new view of capacity. The fundamental idea that attention can be allocated to one piece of information in memory is correct, but the definition of what that one piece is needs to be clarified. It cannot be that just one item is in the focus of attention because if that were so, hardly any computation would be possible. How could one add 3+4, for example, if at any one time, attention could be allocated only to the “3” or the “4” or the “+” operation? We propose that attention focuses on what is bound together into a single “functional context,” whether that context is defined by time, space, some other stimulus characteristic such as semantic or visual similarity, or momentary task relevance. By this account, attention can be placed on the whole problem “3+4,” allowing relevant computations to be made. Complexity comes into play by limiting the number of subcomponents that can be bound into one functional context.

What are we to conclude from the data concerning the structure of STM? We favor the implication that the representational bases for perception, STM, and LTM are identical. That is, the same neural representations initially activated during the encoding of a piece of information show sustained activation during STM (or retrieval from LTM into STM; Wheeler et al. 2000 ) and are the repository of long-term representations. Because regions of neocortex represent different sorts of information (e.g., verbal, spatial), it is reasonable to expect that STM will have an organization by type of material as well. Functionally, memory in the short term seems to consist of items in the focus of attention along with recently attended representations in LTM. These items in the focus of attention number no more than four, and they may be limited to just a single representation (consisting of items bound within a functional context).

We turn below to processes that operate on these representations.

WHAT PROCESSES OPERATE ON THE STORED INFORMATION?

Theoretical debate about the nature of STM has been dominated by discussion of structure and capacity, but the issue of process is also important. Verbal rehearsal is perhaps most intuitively associated with STM and plays a key role in the classic model ( Baddeley 1986 ). However, as we discuss below, rehearsal most likely reflects a complex strategy rather than a primitive STM process. Modern approaches offer a large set of candidate processes, including encoding and maintenance ( Ranganath et al. 2004 ), attention shifts ( Cowan 2000 ), spatial rehearsal ( Awh & Jonides 2001 ), updating (Oberauer 2005), overwriting ( Neath & Nairne 1995 ), cue-based parallel retrieval ( McElree 2001 ), and interference-resolution ( Jonides & Nee 2006 ).

Rather than navigating this complex and growing list, we take as our cornerstone the concept of a limited focus of attention. The central point of agreement for the unitary-store models discussed above is that there is a distinguishable focus of attention in which representations are directly accessible and available for cognitive action. Therefore, it is critical that all models must identify the processes that govern the transition of memory representations into and out of this focused state.

The Three Core Processes of Short-Term Memory: Encoding, Maintenance, and Retrieval

If one adopts the view that a limited focus of attention is a key feature of short-term storage, then understanding processing related to this limited focus amounts to understanding three basic types of cognitive events 2 : ( a ) encoding processes that govern the transformation from perceptual representations into the cognitive/attentional focus, ( b ) maintenance processes that keep information in the focus (and protect it from interference or decay), and ( c ) retrieval processes that bring information from the past back into the cognitive focus (possibly reactivating perceptual representations).

Encoding of items into the focus

Encoding processes are the traditional domain of theories of perception and are not treated explicitly in any of the current major accounts of STM. Here we outline three implicit assumptions about encoding processes made in most accounts of STM, and we assess their empirical and theoretical support.

First, the cognitive focus is assumed to have immediate access to perceptual processing— that is, the focus may include contents from the immediate present as well as contents retrieved from the immediate past. In Cowan’s (2000) review of evidence in favor of the number four in capacity estimates, several of the experimental paradigms involve focused representations of objects in the immediate perceptual present or objects presented less than a second ago. These include visual tracking experiments ( Pylyshyn et al. 1994 ), enumeration ( Trick & Pylyshyn 1993 ), and whole report of spatial arrays and spatiotemporal arrays ( Darwin et al. 1972 , Sperling 1960 ). Similarly, in McElree’s (2006) and Garavan’s (1998) experiments, each incoming item in the stream of material (words or letters or objects) is assumed to be represented momentarily in the focus.

Second, all of the current theories assume that perceptual encoding into the focus of attention results in a displacement of other items from the focus. For example, in McElree’s single-item focus model, each incoming item not only has its turn in the focus, but it also replaces the previous item. On the one hand, the work reviewed above regarding performance discontinuities after the putative limit of STM capacity has been reached appears to support the idea of whole-item displacement. On the other hand, as also described above, this limit may be susceptible to factors such as practice and stimulus complexity. An alternative to whole-item displacement as the basis for interference is a graded similarity-based interference, in which new items entering the focus may partially overwrite features of the old items or compete with old items to include those featural components in their representations as a function of their similarity. At some level, graded interference is clearly at work in STM, as Nairne (2002) and others have demonstrated (we review this evidence in more detail below). But the issue at hand is whether the focus is subject to such graded interference, and if such interference is the process by which encoding (or retrieving) items into the focus displaces prior items. Although there does not appear to be evidence that bears directly on this issue (the required experiments would involve manipulations of similarity in just the kinds of paradigms that Cowan, McElree, Oberauer, and others have used to provide evidence for the limited focus), the performance discontinuities strongly suggest that something like displacement is at work.

Third, all of the accounts assume that perceptual encoding does not have obligatory access to the focus. Instead, encoding into the focus is modulated by attention. This follows rather directly from the assumptions about the severe limits on focus capacity: There must be some controlled way of directing which aspects of the perceptual present, as well as the cognitive past, enter into the focused state. Stated negatively, there must be some way of preventing aspects of the perceptual present from automatically entering into the focused state. Postle (2006) recently found that increased activity in dorsolateral prefrontal cortex during the presentation of distraction during a retention interval was accompanied by a selective decrease in inferior temporal cortical activity. This pattern suggests that prefrontal regions selectively modulated posterior perceptual areas to prevent incoming sensory input from disrupting the trace of the task-relevant memorandum.

In summary, current approaches to STM have an obligation to account for how controlled processes bring relevant aspects of perception into cognitive focus and leave others out. It is by no means certain that existing STM models and existing models of perceptual attention are entirely compatible on this issue, and this is a matter of continued lively debate ( Milner 2001 , Schubert & Frensch 2001 , Woodman et al. 2001 ).

Maintenance of items in the focus

Once an item is in the focus of attention, what keeps it there? If the item is in the perceptual present, the answer is clear: attention-modulated perceptual encoding. The more pressing question is: What keeps something in the cognitive focus when it is not currently perceived? For many neuroscientists, this is the central question of STM—how information is held in mind for the purpose of future action after the perceptual input is gone. There is now considerable evidence from primate models and from imaging studies on humans for a process of active maintenance that keeps representations alive and protects them from irrelevant incoming stimuli or intruding thoughts (e.g., Postle 2006 ).

We argue that this process of maintenance is not the same as rehearsal. Indeed, the number of items that can be maintained without rehearsal forms the basis of Cowan’s (2000) model. Under this view, rehearsal is not a basic process but rather is a strategy for accomplishing the functional demands for sustaining memories in the short term—a strategy composed of a series of retrievals and re-encodings. We consider rehearsal in more detail below, but we consider here the behavioral and neuroimaging evidence for maintenance processes.

There is now considerable evidence from both primate models and human electroencephalography and fMRI studies for a set of prefrontal-posterior circuits underlying active maintenance. Perhaps the most striking is the classic evidence from single-cell recordings showing that some neurons in prefrontal cortex fire selectively during the delay period in delayed-match-to-sample tasks (e.g., Funahashi et al. 1989 , Fuster 1973 ). As mentioned above, early interpretations of these frontal activations linked them directly to STM representations ( Goldman-Rakic 1987 ), but more recent theories suggest they are part of a frontal-posterior STM circuit that maintains representations in posterior areas ( Pasternak & Greenlee 2005 , Ranganath 2006 , Ruchkin et al. 2003 ). Furthermore, as described above, maintenance operations may modulate perceptual encoding to prevent incoming perceptual stimuli from disrupting the focused representation in posterior cortex ( Postle 2006 ). Several computational neural-network models of circuits for maintenance hypothesize that prefrontal cortical circuits support attractors, self-sustaining patterns observed in certain classes of recurrent networks ( Hopfield 1982 , Rougier et al. 2005 , Polk et al. 2002 ). A major challenge is to develop computational models that are able to engage in active maintenance of representations in posterior cortex while simultaneously processing, to some degree, incoming perceptual material (see Renart et al. 1999 for a related attempt).
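The notion of an attractor as a self-sustaining pattern can be made concrete with a minimal Hopfield-style network (after Hopfield 1982); the network size, single stored pattern, and perturbation below are arbitrary choices for illustration:

```python
# Minimal Hopfield-style attractor: a pattern stored in Hebbian weights is a
# self-sustaining state, so the recurrent dynamics "maintain" it, restoring
# the pattern even after a perturbation (e.g., distraction).
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)       # the memorandum, as +/-1 units

# Hebbian outer-product weights; no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Perturb the stored pattern: flip a few units...
state = pattern.copy()
state[:3] *= -1

# ...then let the recurrent dynamics run: each unit takes the sign of its
# summed, weighted input from the others.
for _ in range(5):
    state = np.sign(W @ state).astype(int)

print(np.array_equal(state, pattern))        # the attractor restores the pattern
```

The self-sustaining property is the point of contact with maintenance: once the network settles into the stored pattern, the dynamics hold it there without further input, which is the behavior the cited models attribute to prefrontal-posterior circuits.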

Retrieval of items into the focus

Many of the major existing STM architectures are silent on the issue of retrieval. However, all models that assume a limited focus also assume that there is some means by which items outside that focus (either in a dormant long-term store or in some highly activated portion of LTM) are brought into the focus by switching the attentional focus onto those items. Following Sternberg (1966) , McElree (2006) , and others, we label this process “retrieval.” Despite this label, it is important to keep in mind that the associated spatial metaphor of an item moving from one location to another is misleading given our assumption about the common neural representations underlying STM and LTM.

There is now considerable evidence, mostly from mathematical models of behavioral data, that STM retrieval of item information is a rapid, parallel, content-addressable process. The current emphasis on parallel search processes is quite different from the earliest models of STM retrieval, which postulated a serial scanning process (i.e., Sternberg 1966 ; see McElree 2006 for a recent review and critique). Serial-scanning models fell out of favor because of empirical and modeling work showing that parallel processes provide a better account of the reaction time distributions in STM tasks (e.g., Hockley 1984 ). For example, McElree has created a variation on the Sternberg recognition probe task that provides direct support for parallel, rather than serial, retrieval. In the standard version of the task, participants are presented with a memory set consisting of a rapid sequence of verbal items (e.g., letters or digits), followed by a probe item. The task is to identify whether the probe was a member of the memory set. McElree & Dosher’s (1989) innovation was to manipulate the deadline for responding. The time course of retrieval (accuracy as a function of response deadline) can be separately plotted for each position within the presentation sequence, allowing independent assessments of accessibility (how fast an item can be retrieved) and availability (asymptotic accuracy) as a function of set size and serial position. Many experiments yield a uniform rate of access for all items except for the most recent item, which is accessed more quickly. The uniformity of access rate is evidence for parallel access, and the distinction between the most recent item and the other items is evidence for a distinguished focus of attention.
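Time-course data of this kind are conventionally fit with a shifted exponential rising to an asymptote, which separates the two quantities in exactly this way: the asymptote indexes availability, while the rate and intercept index accessibility. The sketch below uses hypothetical parameter values (all numbers are invented for illustration, not McElree's estimates):

```python
import math

def sat_accuracy(t, lam, beta, delta):
    """Shifted-exponential speed-accuracy tradeoff curve: accuracy rises
    toward the asymptote `lam` (availability) at rate `beta` after an
    intercept `delta` in seconds (rate and intercept = accessibility)."""
    if t <= delta:
        return 0.0                 # no retrieval before the intercept
    return lam * (1.0 - math.exp(-beta * (t - delta)))

# Hypothetical parameters: the most recent item (in the focus) has a
# faster retrieval rate, but both reach a similar asymptote.
in_focus  = dict(lam=0.95, beta=8.0, delta=0.25)
out_focus = dict(lam=0.93, beta=4.0, delta=0.25)

for t in (0.3, 0.5, 1.0, 3.0):
    a_in = sat_accuracy(t, **in_focus)
    a_out = sat_accuracy(t, **out_focus)
    print(f"deadline {t:.1f}s  in-focus {a_in:.2f}  out-of-focus {a_out:.2f}")
```

At short deadlines the in-focus curve is well above the out-of-focus curve; at long deadlines the two converge on nearly the same asymptote, which is the signature pattern of equal availability but unequal accessibility.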

Neural Mechanisms of Short- and Long-Term Memory Retrieval

The cue-based retrieval processes described above for STM are very similar to those posited for LTM (e.g., Anderson et al. 2004 , Gillund & Shiffrin 1984 , Murdock 1982 ). As a result, retrieval failures resulting from similarity-based interference and cue overlap are ubiquitous in both STM and LTM. Both classic studies of recall from STM (e.g., Keppel & Underwood 1962 ) and more recent studies of interference in probe-recognition tasks (e.g., Jonides & Nee 2006 , McElree & Dosher 1989 , Monsell 1978 ) support the idea that interference plays a major role in forgetting over short retention intervals as well as long ones (see below). These common effects would not be expected if STM retrieval were a different process restricted to operate over a limited buffer, but they are consistent with the notion that short-term and long-term retrieval are mediated by the same cue-based mechanisms.

The heavy overlap in the neural substrates for short-term and long-term retrieval provides additional support for the idea that retrieval processes are largely the same over different retention intervals. A network of medial temporal regions, lateral prefrontal regions, and anterior prefrontal regions has been extensively studied and shown to be active in long-term retrieval tasks (e.g., Buckner et al. 1998 , Cabeza & Nyberg 2000 , Fletcher & Henson 2001 ). We reviewed above the evidence for MTL involvement in both short- and long-term memory tasks that require novel representations (see section titled “Contesting the Idea of Separate Long-Term and Short-Term Systems”). Here, we examine whether the role of frontal cortex is the same for both short- and long-term retrieval.

The conclusion derived from neuroimaging studies of various STM procedures is that this frontal role is the same in short-term and long-term retrieval. For example, several event-related fMRI studies of the retrieval stage of the probe-recognition task found increased activation in lateral prefrontal cortex similar to the activations seen in studies of LTM retrieval (e.g., D’Esposito et al. 1999, D’Esposito & Postle 2000, Manoach et al. 2003). Badre & Wagner (2005) also found anterior prefrontal activations that overlapped with regions implicated in episodic recollection. The relatively long retention intervals often used in event-related fMRI studies leave them open to the criticism that by the time of the probe, the focus of attention has shifted elsewhere, creating a need to retrieve information from LTM (more on this discussion below). However, a meta-analysis of studies that involved bringing very recently presented items to the focus of attention likewise found specific involvement of lateral and anterior prefrontal cortex (Johnson et al. 2005). These regions appear to be involved in retrieval, regardless of timescale.

The same conclusion may be drawn from recent imaging studies that have directly compared long- and short-term retrieval tasks using within-subjects designs ( Cabeza et al. 2002 , Ranganath et al. 2003 , Talmi et al. 2005 ). Ranganath et al. (2003) found the same bilateral ventrolateral and dorsolateral prefrontal regions engaged in both short- and long-term tasks. In some cases, STM and LTM tasks involve the same regions but differ in the relative amount of activation shown within those regions. For example, Cabeza et al. (2002) reported similar engagement of medial temporal regions in both types of task, but greater anterior and ventrolateral activation in the long-term episodic tasks. Talmi et al. (2005) reported greater activation in both medial temporal and lateral frontal cortices for recognition probes of items presented early in a 12-item list (presumably necessitating retrieval from LTM) versus items presented later in the list (presumably necessitating retrieval from STM). One possible reason for this discrepancy is that recognition for late-list items did not require retrieval because these items were still in the focus of attention. This account is plausible since late-list items were drawn either from the last-presented or second-to-last presented item and preceded the probe by less than two seconds.

In summary, the bulk of the neuroimaging evidence points to the conclusion that the activation of frontal and medial temporal regions depends on whether the information is currently in or out of focus, not whether the task nominally tests STM or LTM. Similar reactivation processes occur during retrieval from LTM and from STM when the active maintenance has been interrupted (see Sakai 2003 for a more extensive review).

The Relationship of Short-Term Memory Processes to Rehearsal

Notably, our account of core STM processes excludes rehearsal. How does rehearsal fit in? We argue that rehearsal is simply a controlled sequence of retrievals and re-encodings of items into the focus of attention ( Baddeley 1986 , Cowan 1995 ). The theoretical force of this assumption can be appreciated by examining the predictions it makes when coupled with our other assumptions about the structures and processes of the underlying STM architecture. Below we outline these predictions and the behavioral, developmental, neuroimaging, and computational work that support this view.

Rehearsal as retrieval into the focus

When coupled with the idea of a single-item focus, the assumption that rehearsal is a sequence of retrievals into the focus of attention makes a very clear prediction: A just-rehearsed item should display the same retrieval dynamics as a just-perceived item. McElree (2006) directly tested this prediction using a version of his response-deadline recognition task, in which subjects were given a retention interval between presentation of the list and the probe rather than presented with the probe immediately after the list. Subjects were explicitly instructed to rehearse the list during this interval and were trained to do so at a particular rate. By controlling the rate, it was possible to know when each item was rehearsed and hence re-established in the focus. The results were compelling: When an item was predicted to be in focus because it had just been rehearsed, it showed the same fast retrieval dynamics as an item that had just been perceived. In short, the speed-accuracy tradeoff functions showed the familiar in-focus/out-of-focus dichotomy of the standard paradigm, but the dichotomy was established for internally controlled rehearsal as well as externally controlled perception.

Rehearsal as strategic retrieval

Rehearsal is often implicitly assumed as a component of active maintenance, but formal theoretical considerations of STM typically take the opposite view. For example, Cowan (2000) provides evidence that although first-grade children do not use verbal rehearsal strategies, they nevertheless have measurable focus capacities. In fact, Cowan (2000) uses this evidence to argue that the performance of very young children is revealing of the fundamental capacity limits of the focus of attention because it is not confounded with rehearsal.

If rehearsal is the controlled composition of more primitive STM processes, then rehearsal should activate the same brain circuits as those primitive processes, possibly along with additional (frontal) circuits associated with their control. In other words, rehearsal should overlap with brain areas subserving retrieval and initial perceptual encoding, and there should be control areas distinct from those of the primitive processes.

Both predictions receive support from neuroimaging studies. The first prediction is broadly confirmed: There is now considerable evidence for the reactivation of areas associated with initial perceptual encoding in tasks that require rehearsal (see Jonides et al. 2005 for a recent review; note also that evidence exists for reactivation in LTM retrieval: Wheeler 2000 , 2006 ).

The second prediction—that rehearsal engages additional control areas beyond those participating in maintenance, encoding, and retrieval—receives support from two effects. One is that verbal rehearsal engages a set of frontal structures associated with articulation and its planning: supplementary motor, premotor, inferior frontal, and posterior parietal areas (e.g., Chein & Fiez 2001, Jonides et al. 1998 , Smith & Jonides 1999 ). The other is that spatial rehearsal engages attentionally mediated occipital regions, suggesting rehearsal processes that include retrieval of spatial information ( Awh et al. 1998 , 1999 , 2001 ).

Computational modeling relevant to strategic retrieval

Finally, prominent symbolic and connectionist computational models of verbal STM tasks are based on architectures that do not include rehearsal as a primitive process, but rather assume it as a strategic composition of other processes operating over a limited focus. The Burgess & Hitch (2005, 2006) connectionist model, the Executive-Process/Interactive Control (EPIC) symbolic model (Meyer & Kieras 1997), and the Atomic Components of Thought (ACT-R) hybrid model (Anderson & Matessa 1997) all assume that rehearsal in verbal STM consists of a controlled sequence of retrievals of items into a focused state. They all assume different underlying mechanisms for the focus (the Burgess & Hitch model has a winner-take-all network; ACT-R has an architectural buffer with a capacity of one chunk; EPIC has a special auditory store), but all assume strategic use of this focus to accomplish rehearsal. These models jointly represent the most successful attempts to account for a range of detailed empirical phenomena traditionally associated with rehearsal, especially in verbal serial recall tasks. Their success therefore provides further support for the plausibility of a compositional view of rehearsal.

WHY DO WE FORGET?

Forgetting in STM is a vexing problem: What accounts for failures to retrieve something encoded just seconds ago? There are two major explanations for forgetting, often placed in opposition: time-based decay and similarity-based interference. Below, we describe some of the major findings in the literature related to each of these explanations, and we suggest that they may ultimately result from the same underlying principles.

Decay Theories: Intuitive but Problematic

The central claim of decay theory is that as time passes, information in memory erodes, and so it is less available for later retrieval. This explanation has strong intuitive appeal. However, over the years there have been sharp critiques of decay, questioning whether it plays any role at all (for recent examples, see Lewandowsky et al. 2004 and the review in this journal by Nairne 2002 ).

Decay explanations are controversial for two reasons: First, experiments attempting to demonstrate decay can seldom eliminate alternative explanations. For example, Keppel & Underwood (1962) demonstrated that forgetting in the classic Brown-Peterson paradigm (designed to measure time-based decay) was due largely, if not exclusively, to proactive interference from prior trials. Second, without an explanation of how decay occurs, it is difficult to see decay theories as more than a restatement of the problem. Some functional arguments have been made for the usefulness of the notion of memory decay—that decaying activations adaptively mirror the likelihood that items will need to be retrieved ( Anderson & Schooler 1991 ), or that decay is functionally necessary to reduce interference ( Altmann & Gray 2002 ). Nevertheless, McGeoch’s famous (1932) criticism of decay theories still holds merit: Rust does not occur because of time itself, but rather from oxidation processes that occur with time. Decay theories must explain the processes by which decay could occur, i.e., they must identify the oxidation process in STM.

Retention-interval confounds: controlling for rehearsal and retroactive interference

The main problem in testing decay theories is controlling for what occurs during the retention interval. Many experiments fill the interval with an attention-demanding task to prevent participants from using rehearsal, which would presumably circumvent decay. However, a careful analysis of these studies by Roediger et al. (1977) casts doubt on whether a secondary task prevents rehearsal at all. They compared conditions in which a retention interval was filled by nothing, by a relatively easy task, or by a relatively difficult one. Both filled-interval conditions led to worse memory performance, but the difficulty of the intervening task had no effect. Roediger et al. (1977) concluded that the primary memory task and the interpolated task, although demanding, drew on different pools of processing resources, and hence the interpolated tasks may not have been effective in preventing rehearsal. If so, this sort of secondary-task technique does not prevent rehearsal and cannot provide a convincing test of a decay hypothesis.

Another problem with tasks that fill the retention interval is that they require subjects to use STM (consider counting backward, as in the Brown-Peterson paradigm). This could lead to active displacement of items from the focus according to views (e.g., McElree 2001 ) that posit such displacement as a mechanism of STM forgetting, or increase the noise according to interference-based explanations (see discussion below in What Happens Neurally During the Delay?). By either account, the problem with retention-interval tasks is that they are questionable ways to prevent rehearsal of the to-be-remembered information, and they introduce new, distracting information that may engage STM. This double-edged sword makes it difficult to tie retention-interval manipulations directly to decay.

Attempts to address the confounding factors

A potential way out of the rehearsal conundrum is to use stimuli that are not easily converted to verbal codes and are therefore difficult to rehearse. For example, Harris (1952) used tones that differed so subtly in pitch that subjects without perfect pitch would have difficulty naming them. On each trial, participants were first presented with a to-be-remembered tone, followed by a retention interval of 0.1 to 25 seconds, and finally a probe tone. The accuracy of deciding whether the initial and probe tones were the same declined with longer retention intervals, consistent with the predictions of decay theory.

Using another technique, McKone (1995 , 1998) reduced the probability of rehearsal or other explicit-memory strategies by using an implicit task. Words and nonwords were repeated in a lexical-decision task, with the measure of memory being faster performance on repeated trials than on novel ones (priming). To disentangle the effects of decay and interference, McKone varied the time between repetitions (the decay-related variable) while holding the number of items between repetitions (the interference-related variable) constant, and vice versa. She found that greater time between repetitions reduced priming even after accounting for the effects of intervening items, consistent with decay theory. However, interference and decay effects seemed to interact and to be especially important for nonwords.

Procedures such as those used by Harris (1952) and McKone (1995, 1998) avoid the problems associated with retention-interval tasks. They are, however, potentially vulnerable to the criticism of Keppel & Underwood (1962) regarding interference from prior trials within the task, although McKone’s experiments address this issue to some degree. Another potential problem is that participants’ brains and minds are not inactive during the retention interval (Raichle et al. 2001). There is increasing evidence that the processes ongoing during nominal “resting states” are related to memory, including STM (Hampson et al. 2006). Spontaneous retrieval by participants during the retention interval could interfere with memory for the experimental items. So, although experiments that reduce the influence of rehearsal provide some of the best evidence of decay, they are not definitive.

What happens neurally during the delay?

Neural findings of delay-period activity have also been used to support the idea of decay. For example, at the single-cell level, Fuster (1995) found that in monkeys performing a delayed-response task, delay-period activity in inferotemporal cortex steadily declined over 18 seconds (see also Pasternak & Greenlee 2005 ). At a molar level, human neuroimaging studies often show delay-period activity in prefrontal and posterior regions, and this activity is often thought to support maintenance or storage (see review by Smith & Jonides 1999 ). As reviewed above, it is likely that the posterior regions support storage and that frontal regions support processes related to interference-resolution, control, attention, response preparation, motivation, and reward.

Consistent with the suggestive primate data, Jha & McCarthy (2000) found a general decline in activation in posterior regions over a delay period, which suggests some neural evidence for decay. However, this decline in activation was not obviously related to performance, which suggests two (not mutually exclusive) possibilities: (a) the decline in activation was not representative of decay, so it did not correlate with performance, or (b) these regions might not have been storage regions (but see Todd & Marois 2004 and Xu & Chun 2006 for evidence more supportive of load sensitivity in posterior regions).

The idea that neural activity decays also faces a serious challenge in the classic results of Malmo (1942) , who found that a monkey with frontal lesions was able to perform a delayed response task extremely well (97% correct) if visual stimulation and motor movement (and therefore associated interference) were restricted during a 10-second delay. By contrast, in unrestricted conditions, performance was as low as 25% correct (see also Postle & D’Esposito 1999 ). In summary, evidence for time-based declines in neural activity that would naturally be thought to be part of a decay process is at best mixed.

Is there a mechanism for decay?

Although there are data supporting the existence of decay, many of these data are subject to alternative, interference-based explanations. However, as Crowder (1976) noted, “Good ideas die hard.” At least a few key empirical results (Harris 1952; McKone 1995, 1998) do seem to implicate some kind of time-dependent decay. If one assumes that decay happens, how might it occur?

One possibility—perhaps most compatible with results like those of Malmo (1942) —is that what changes over time is not the integrity of the representation itself, but the likelihood that attention will be attracted away from it. As more time passes, the likelihood increases that attention will be attracted away from the target and toward external stimuli or other memories, and it will be more difficult to return to the target representation. This explanation seems compatible with the focus-of-attention views of STM that we have reviewed. By this explanation, capacity limits are a function of attention limits rather than a special property of STM per se.

Another explanation, perhaps complementary to the first, relies on stochastic variability in the neuronal firing patterns that make up the target representation. The temporal synchronization of neuronal activity is an important part of the representation (e.g., Deiber et al. 2007 , Jensen 2006 , Lisman & Idiart 1995 ). As time passes, variability in the firing rates of individual neurons may cause them to fall increasingly out of synchrony unless they are reset (e.g., by rehearsal). As the neurons fall out of synchrony, by this hypothesis, the firing pattern that makes up the representation becomes increasingly difficult to discriminate from surrounding noise [see Lustig et al. (2005) for an example that integrates neural findings with computational ( Frank et al. 2001 ) and behaviorally based ( Brown et al. 2000 ) models of STM].
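This desynchronization idea can be illustrated with a toy simulation (purely illustrative; the neuron count and jitter magnitude are arbitrary choices, not empirical estimates): each neuron's phase takes an independent random walk, and a Kuramoto-style order parameter tracks how the population's coherence declines across cycles.

```python
import numpy as np

# Toy simulation of the desynchronization account of decay (an
# illustration, not a biophysical model): neurons representing an item
# start in phase, and independent jitter in their firing periods
# accumulates over cycles, so population phase coherence declines
# with time unless something (e.g., rehearsal) resets it.

rng = np.random.default_rng(1)
n_neurons, n_cycles, jitter = 200, 50, 0.15

phase = np.zeros(n_neurons)            # all neurons start synchronized
coherence = []
for _ in range(n_cycles):
    phase += rng.normal(0.0, jitter, n_neurons)   # random walk in phase
    # Kuramoto-style order parameter: 1 = perfect synchrony, 0 = noise.
    coherence.append(abs(np.mean(np.exp(1j * phase))))

print(f"coherence after 1 cycle:  {coherence[0]:.2f}")
print(f"coherence after {n_cycles} cycles: {coherence[-1]:.2f}")
```

Because the phase variance grows linearly with time, coherence falls off smoothly, yielding exactly the kind of gradual forgetting function usually read as evidence for decay.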

Interference Theories: Comprehensive but Complex

Interference effects play several roles in memory theory: First, they are the dominant explanation of forgetting. Second, some have suggested that STM capacity and its variation among individuals are largely determined by the ability to overcome interference (e.g., Hasher & Zacks 1988 , Unsworth & Engle 2007 ). Finally, differential interference effects in STM and LTM have been used to justify the idea that they are separate systems, and common interference effects have been used to justify the idea that they are a unitary system.

Interference theory has the opposite problem of decay: It is comprehensive but complex ( Crowder 1976 ). The basic principles are straightforward. Items in memory compete, with the amount of interference determined by the similarity, number, and strength of the competitors. The complexity stems from the fact that interference may occur at multiple stages (encoding, retrieval, and possibly storage) and at multiple levels (the representation itself or its association with a cue or a response). Interference from the past (proactive interference; PI) may affect both the encoding and the retrieval of new items, and it often increases over time. By contrast, interference from new items onto older memories (retroactive interference; RI) frequently decreases over time and may not be as reliant on similarity (see discussion by Wixted 2004 ).

Below, we review some of the major findings with regard to interference in STM, including a discussion of its weaknesses in explaining short-term forgetting. We then present a conceptual model of STM that attempts to address these weaknesses and the questions regarding structure, process, and forgetting raised throughout this review.

Interference Effects in Short-Term Memory

Selection-based interference effects.

The Brown-Peterson task, originally conceived to test decay theory, became a workhorse for testing similarity-based interference as well. In the “release-from-PI” version (Wickens 1970), short lists of categorized words are used as memoranda. Participants learn one three-item list on each trial, perform some other task during the retention interval, and then attempt to recall the list. For the first three trials, all lists consist of words from the same category (e.g., flowers). The typical PI effects occur: Recall declines over subsequent trials. The critical manipulation occurs at the final list. If it is from a different category (e.g., sports), recall is much higher than if it is from the same category as preceding trials. In some cases, performance on this set-shift or release-from-PI trial is nearly as high as on the very first trial.

The release-from-PI effect was originally interpreted as an encoding effect. Even very subtle shifts (e.g., from “flowers” to “wild-flowers”) produce the effect if participants are warned about the shift before the words are presented (see Wickens 1970 for an explanation). However, Gardiner et al. (1972) showed that release also occurs if the shift-cue is presented only at the time of the retrieval test—i.e., after the list has been encoded. They suggested that cues at retrieval could reduce PI by differentiating items from the most recent list, thus aiding their selection.

Selection processes remain an important topic in interference research. Functional neuroimaging studies consistently identify a region in left inferior frontal gyrus (LIFG) as active during interference resolution, at least for verbal materials (see a review by Jonides & Nee 2006 ). This region appears to be generally important for selection among competing alternatives, e.g., in semantic memory as well as in STM ( Thompson-Schill et al. 1997 ). In STM, LIFG is most prominent during the test phase of interference trials, and its activation during this phase often correlates with behavioral measures of interference resolution ( D’Esposito et al. 1999 , Jonides et al. 1998 , Reuter-Lorenz et al. 2000 , Thompson-Schill et al. 2002 ). These findings attest to the importance of processes for resolving retrieval interference. The commonality of the neural substrate for interference resolution across short-term and long-term tasks provides yet further support for the hypothesis of shared retrieval processes for the two types of memory.

Interference effects occur at multiple levels, and it is important to distinguish between interference at the level of representations and interference at the level of responses. The LIFG effects described above appear to be familiarity based and to occur at the level of representations. Items on a current trial must be distinguished and selected from among items on previous trials that are familiar because of prior exposure but are currently incorrect. A separate contribution occurs at the level of responses: An item associated with a positive response on a prior trial may now be associated with a negative response, or vice versa. This response-based conflict can be separated from the familiarity-based conflict, and its resolution appears to rely more on the anterior cingulate ( Nelson et al. 2003 ).

Other mechanisms for interference effects?

Despite the early work of Keppel & Underwood (1962) , most studies examining encoding in STM have focused on RI: how new information disrupts previous memories. Early theorists described this disruption in terms of displacement of entire items from STM, perhaps by disrupting consolidation (e.g., Waugh & Norman 1965 ). However, rapid serial visual presentation studies suggest that this type of consolidation is complete within a very short time—approximately 500 milliseconds, and in some situations as short as 50 milliseconds ( Vogel et al. 2006 ).

What about interference effects beyond this time window? As reviewed above, most current focus-based models implicitly assume something like whole-item displacement is at work, but these models may need to be elaborated to account for retroactive similarity-based interference, such as the phonological interference effects reviewed by Nairne (2002) . The models of Nairne (2002) and Oberauer (2006) suggest a direction for such an elaboration. Rather than a competition at the item level for a single-focus resource, these models posit a lower-level similarity-based competition for “feature units.” By this idea, items in STM are represented as bundles of features (e.g., color, shape, spatial location, temporal location). Representations of these features in turn are distributed over multiple units. The more two items overlap, the more they compete for these feature units, resulting in greater interference. This proposed mechanism fits well with the idea that working memory reflects the heightened activation of representations that are distributed throughout sensory, semantic, and motor cortex ( Postle 2006 ), and that similarity-based interference constrains the capacity due to focusing (see above; Awh et al. 2007 ). Hence, rather than whole-item displacement, specific feature competition may underlie the majority of encoding-stage RI.
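A toy version of this feature-unit account (hypothetical throughout; the feature counts and matching rule are invented for illustration) shows how overlap translates into interference: the more feature units a competitor shares with the target, the smaller the target's match advantage when the two compete at retrieval.

```python
import numpy as np

# Toy feature-bundle model of similarity-based interference (hypothetical
# illustration): items are sparse binary feature vectors, and a competitor
# that shares feature units with the target shrinks the target's match
# advantage when the target itself is used as the retrieval cue.

rng = np.random.default_rng(2)
n_features, n_active = 100, 10

def make_item(shared_with=None, overlap=0):
    """Draw an item with `n_active` feature units, optionally reusing
    `overlap` of another item's active units."""
    v = np.zeros(n_features)
    idx = []
    if shared_with is not None and overlap:
        idx = list(rng.choice(np.flatnonzero(shared_with), overlap,
                              replace=False))
    free = [i for i in range(n_features)
            if shared_with is None or shared_with[i] == 0]
    idx += list(rng.choice(free, n_active - overlap, replace=False))
    v[idx] = 1
    return v

target = make_item()
for overlap in (0, 5, 9):
    competitor = make_item(shared_with=target, overlap=overlap)
    cue = target                           # probe with the target itself
    match_target = cue @ target            # always n_active
    match_competitor = cue @ competitor    # grows with shared units
    margin = match_target - match_competitor
    print(f"shared features {overlap}: retrieval margin {margin:.0f}")
```

The retrieval margin shrinks one-for-one with the number of shared feature units, capturing the core claim that competition for features, rather than whole-item displacement, scales interference with similarity.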

Interference-based decay?

Above, we proposed a mechanism for decay based on the idea that stochastic variability causes the neurons making up a representation to fall out of synchrony (become less coherent in their firing patterns). Using the terminology of Nairne (2002) and Oberauer (2006) , the feature units become less tightly bound. Importantly, feature units that are not part of a representation also show some random activity due to their own stochastic variability, creating a noise distribution. Over time, there is an increasing likelihood that the feature units making up the to-be-remembered item’s representation will overlap with those of the noise distribution, making them increasingly difficult to distinguish. This increasing overlap with the noise distribution and loss of feature binding could lead to the smooth forgetting functions often interpreted as evidence for decay.

Such a mechanism for decay has interesting implications. It may explain why PI effects interact with retention interval. Prior trials with similar items would structure the noise distribution so that it is no longer random but rather is biased to share components with the representation of the to-be remembered item (target). Representations of prior, now-irrelevant items might compete with the current target’s representation for control of shared feature units, increasing the likelihood (rate) at which these units fall out of synchrony.

Prior similar items may also dampen the fidelity of the target representation to begin with, weakening their initial binding and thus causing these items to fall out of synchrony more quickly. In addition, poorly learned items might have fewer differentiating feature units, and these units may be less tightly bound and therefore more vulnerable to falling out of synchrony. This could explain why Keppel & Underwood (1962) found that poorly learned items resulted in retention interval effects even on the first trial. It may also underlie the greater decay effects that McKone (1995 , 1998) found for nonwords than for words, if one assumes that non-words have fewer meaning-based units and connections.

A SUMMARY OF PRINCIPLES AND AN ILLUSTRATION OF SHORT-TERM MEMORY AT WORK

Here we summarize the principles of STM that seem best supported by the behavioral and neural evidence. Building on these principles, we offer a hypothetical sketch of the processes and neural structures that are engaged by a canonical STM task, the probe recognition task with distracting material.

Principles of Short-Term Memory

We have motivated our review by questions of structure, process, and forgetting. Rather than organize our summary this way, we wish to return here to the title of our review and consider what psychological and neural mechanisms seem best defended by empirical work. Because we have provided details about each of these issues in our main discussion, we summarize them here as bullet points. Taken together, they provide answers to our questions about structure, process, and forgetting.

The mind of short-term memory

  • Representations in memory are composed of bundles of features for stored information, including features representing the context in which that information was encountered.

  • Representations in memory vary in activation, with a dormant state characterizing long-term memories, and varying states of activation due to recent perceptions or retrievals of those representations.
  • There is a focus of attention in which a bound collection of information may be held in a state that makes it immediately available for cognitive action. Attention may be focused on only a single chunk of information at a time, where a chunk is defined as a set of items that are bound by a common functional context.
  • Items may enter the focus of attention via perceptual encoding or via cue-based retrieval from LTM.
  • Items are maintained in the focus via a controlled process of maintenance, with rehearsal being a case of controlled sequential allocation of attentional focus.
  • Forgetting occurs when items leave the focus of attention and must compete with other items to regain the focus (interference), or when the fidelity of the representation declines over time due to stochastic processes (decay).

The brain of short-term memory

  • Items in the focus of attention are represented by patterns of heightened, synchronized firing of neurons in primary and secondary association cortex.

  • The sensorimotor features of items in the focus of attention or those in a heightened state of activation are the same as those activated by perception or action. Information within a representation is associated with the cortical region that houses it (e.g., verbal, spatial, motor). In short, item representations are stored where they are processed.
  • Medial temporal structures are important for binding items to their context for both the short- and long-term and for retrieving items whose context is no longer in the focus of attention or not yet fully consolidated in the neocortex.
  • The capacity to focus attention is constrained by parietal and frontal mechanisms that modulate processing as well as by increased noise in the neural patterns arising from similarity-based interference or from stochastic variability in firing.
  • Frontal structures support controlled processes of retrieval and interference resolution.
  • Placing an item into the focus of attention from LTM involves reactivating the representation that is encoded in patterns of neural connection weights.
  • Decay arises from the inherent variability of the neural firing of the feature bundles that make up a representation: The likelihood that the firing of multiple features will fall out of synchrony increases with time due to stochastic variability.

A Sketch of Short-Term Memory at Work

The theoretical principles outlined above summarize our knowledge of the psychological and neural bases of STM, but further insight can be gained by attempting to see how these mechanisms might work together, moment-by-moment, to accomplish the demands of simple tasks. We believe that working through an illustration will not only help to clarify the nature of the proposed mechanisms, but it may also lead to a picture of STM that is more detailed in its bridging of neural process and psychological function.

Toward these ends, we present here a specific implementation of the principles that allows us to give a description of the mechanisms that might be engaged at each point in a simple visual STM task. This exercise leads us to a view of STM that is heavily grounded in concepts of neural activation and plasticity. More specifically, we complement the assumptions about cognitive and brain function above with simple hypotheses about the relative supporting roles of neuronal firing and plasticity (described below). Although somewhat speculative in nature, this description is consistent with the summary principles, and it grounds the approach more completely in a plausible neural model. In particular, it has the virtue of providing an unbroken chain of biological mechanisms that supports the encoding of short-term memories over time.

Figure 1 traces the representation of one item in memory over the course of a few seconds in our hypothetical task. The cognitive events are demarcated at the top of the figure, and the task events at the bottom. In the hypothetical task, the subject must keep track of three visual items (such as novel shapes). The first item is presented for 700 milliseconds, followed by a delay of 2 seconds. The second stimulus then appears, followed by a delay of a few seconds, then the third stimulus, and another delay. Finally, the probe appears, and contact must be made with the memory for the first item. The assumption is that subjects will engage in a strategy of actively maintaining each item during the delay periods.

Figure 1

The processing and neural representation of one item in memory over the course of a few seconds in a hypothetical short-term memory task, assuming a simple single-item focus architecture. The cognitive events are demarcated at the top; the task events, at the bottom. The colored layers depict the extent to which different brain areas contribute to the representation of the item over time, at distinct functional stages of short-term memory processing. The colored layers also distinguish two basic types of neural representation: Solid layers depict memory supported by a coherent pattern of active neural firing, and hashed layers depict memory supported by changes in synaptic patterns. The example task requires processing and remembering three visual items; the figure traces the representation of the first item only. In this task, the three items are sequentially presented, and each is followed by a delay period. After the delay following the third item, a probe appears that requires retrieval of the first item. See the text for details corresponding to the numbered steps in the figure.

Before walking through the timeline in Figure 1 , let us take a high-level view. At any given time point, a vertical slice through the figure is intended to convey two key aspects of the neural basis of the memory. The first is the extent to which multiple cortical areas contribute to the representation of the item, as indicated by the colored layers corresponding to different cortical areas. The dynamic nature of the relative sizes of the layers captures several of our theoretical assumptions concerning the evolving contribution of those different areas at different functional stages of STM. The second key aspect is the distinction between memory supported by a coherent pattern of active neural firing (captured in solid layers) and memory supported by synaptic plasticity (captured in the hashed layers) ( Fuster 2003 , Grossberg 2003 , Rolls 2000 ). The simple hypothesis represented here is that perceptual encoding and active-focus maintenance are supported by neuronal firing, and memory of items outside the focus is supported by short-term synaptic plasticity ( Zucker & Regehr 2002 ). 3
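The two-carrier hypothesis (active firing during perception and focused maintenance; short-term synaptic plasticity for items outside the focus) can be caricatured with a pair of coupled decay equations. This is a toy sketch of our own; every time constant and drive level is invented for illustration, not taken from the cited work.

```python
def simulate_trace(dt=0.01, t_end=6.0):
    """Trace one item's memory through two carriers (all constants invented):
    a -- coherent active firing: follows its drive with a fast (~50 ms)
         time constant, so it collapses quickly once input and top-down
         maintenance stop;
    w -- short-term synaptic potentiation: charged by a, decays slowly
         (tau ~ 4 s), so it alone carries the item after the focus shifts."""
    a, w = 0.0, 0.0
    trace = []
    t = 0.0
    while t < t_end:
        stimulus = 1.0 if t < 0.7 else 0.0            # item shown for 700 ms
        maintain = 0.8 if 0.7 <= t < 2.7 else 0.0     # frontal-parietal top-down drive
        drive = max(stimulus, maintain)               # focus shifts away at t = 2.7 s
        a += dt * (drive - a) / 0.05                  # fast firing dynamics
        w += dt * (0.5 * a - w / 4.0)                 # slow plasticity trace
        trace.append((t, a, w))
        t += dt
    return trace

trace = simulate_trace()
_, a_maint, w_maint = trace[200]   # t = 2.0 s: active maintenance
_, a_late, w_late = trace[500]     # t = 5.0 s: ~2.3 s after the focus shift
print(f"maintenance: firing={a_maint:.2f}, plasticity={w_maint:.2f}")
print(f"post-focus:  firing={a_late:.2f}, plasticity={w_late:.2f}")
```

The qualitative behavior matches the figure: during maintenance both carriers are strong, and after the focus shifts the firing pattern vanishes almost immediately while the slowly decaying plasticity trace preserves the item for later retrieval.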

We now follow the time course of the neural representation of the first item (in the order indicated by the numbers in the figure). ( 1 ) The stimulus is presented and rapidly triggers a coherent pattern of activity in posterior perceptual regions, representing both low-level visual features of the item content and its abstract identification in higher-level regions. ( 2 ) There is also a rapid onset of the representation of item-context binding (temporal context in our example) supported by the medial-temporal lobes (see section titled “Contesting the Idea of Separate Long-Term and Short-Term Systems”) ( Ranganath & Blumenfeld 2005 ). ( 3 ) Over the first few hundred milliseconds, this pattern increases in quality, yielding speed-accuracy tradeoffs in perceptual identification. ( 4 ) Concurrent with the active firing driven by the stimulus, very short-term synaptic plasticity across cortical areas begins to encode the item’s features and its binding to context. Zucker & Regehr (2002) identify at least three distinct plasticity mechanisms that begin to operate on this time scale (tens of milliseconds) and that together are sufficient to produce memories lasting several seconds. (For the use of this mechanism in a prominent neural network model of STM, see Burgess & Hitch 1999 , 2005 , 2006 .) ( 5 ) At the offset of the stimulus, the active firing pattern decays very rapidly (consistent with identified mechanisms of rapid decay in short-term potentiation; Zucker & Regehr 2002 ), but ( 6 ) active maintenance, mediated by increased activity in frontal and parietal systems, maintains the firing pattern during the delay period (see sections titled “The Architecture of Unitary-Store Models” and “Maintenance of Items in the Focus”) ( Pasternak & Greenlee 2005 , Ranganath 2006 , Ruchkin et al. 2003 ). 
This active delay firing includes sustained contribution of MTL to item-context binding (see section titled “Contesting the Idea of Separate Long-Term and Short-Term Systems”). Significant reduction in coherence of the firing pattern may occur as a result of stochastic drift as outlined above (in sections titled “What Happens Neurally During the Delay?” and “Interference-Based Decay?”), possibly leading to a kind of short-term decay during maintenance (see section titled “What Happens Neurally During the Delay?”) ( Fuster 1995 , Pasternak & Greenlee 2005 ). ( 7 ) The active maintenance involves the reuse of posterior perceptual regions in the service of the task demands on STM. This reuse includes even early perceptual areas, but we show here a drop in the contribution of primary perceptual regions to maintenance in order to indicate a relatively greater effect of top-down control on the later high-level regions ( Postle 2006 , Ranganath 2006 ). ( 8 ) During this delay period of active maintenance, short-term potentiation continues to lay down a trace of the item and its binding to context via connection weights both within and across cortical regions. The overall efficacy of this memory encoding is the result of the interaction of the possibly decaying active firing pattern with the multiple plasticity mechanisms and their individual facilitation and depression profiles ( Zucker & Regehr 2002 ).

( 9 ) At the end of the delay period and the onset of the second stimulus, the focus rapidly shifts to the new stimulus, and the active firing of the neural pattern of the target stimulus ceases. ( 10 ) The memory of the item is now carried completely by the changed synaptic weights, but this change is partially disrupted by the incoming item and its engagement of a similar set of neural activity patterns. Cognitively, this disruption yields similarity-based retroactive interference (see “Other Mechanisms for Interference Effects?”) ( Nairne 2002 ). ( 11 ) Even in the absence of interference, a variety of biochemical processes give rise to the decay of short-term neural change and therefore the gradual loss of the memory trace over time. This pattern of interference and decay continues during processing of both the second and third stimulus. The probe triggers a rapid memory retrieval of the target item ( 12 ), mediated in part by strategic frontal control (see “Neural Mechanisms of Short- and Long-Term Memory Retrieval”) ( Cabeza et al. 2002 , Ranganath et al. 2004 ). This rapid retrieval corresponds to the reinstantiation of the target item’s firing pattern in both posterior perceptual areas ( 13 ) and medial-temporal regions, the latter supporting the contextual binding. A plausible neural mechanism for the recovery of this activity pattern at retrieval is the emergent pattern-completion property of attractor networks ( Hopfield 1982 ). Attractor networks depend on memories encoded in a pattern of connection weights, whose formation and dynamics we have sketched above in terms of short-term synaptic plasticity. Such networks also naturally give rise to the kind of similarity-based proactive interference clearly evident in STM retrieval (see “Selection-Based Interference Effects”) ( Jonides & Nee 2006 , Keppel & Underwood 1962 ).
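The pattern-completion property of attractor networks invoked above can be made concrete with a minimal Hopfield-style sketch (our illustration, not from the review; the network size and the roughly 25% cue corruption are arbitrary choices): a firing pattern is laid down as Hebbian connection weights, and a degraded probe settles back into the stored attractor, analogous to reinstating the target item's firing pattern from synaptic weights at retrieval.

```python
import random

def train(patterns):
    """Hebbian learning: each stored pattern is laid down as pairwise
    connection weights (the analogue of short-term synaptic change)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, sweeps=5):
    """Deterministic asynchronous updates: the degraded cue descends the
    network's energy landscape and settles into the nearest attractor."""
    s = list(cue)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

random.seed(0)
n = 64
stored = [random.choice((-1, 1)) for _ in range(n)]           # target firing pattern
w = train([stored])
cue = [-x if random.random() < 0.25 else x for x in stored]   # partial, degraded probe
print(recall(w, cue) == stored)   # pattern completion recovers the stored item
```

Storing several similar patterns in the same weights would make their attractor basins overlap, which is one way such networks naturally produce the similarity-based proactive interference noted in the text.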

We have intentionally left underspecified a precise quantitative interpretation of the y-axis in Figure 1. Psychologically, it perhaps corresponds to a combination of availability (largely driven by the dichotomous nature of the focus state) and accessibility (driven by a combination of both firing and plasticity). Neurally, it perhaps corresponds to some measure of both firing amplitude and coherence and potential firing amplitude and coherence.

We are clearly a long way from generating something like the plot in Figure 1 from neuroimaging data on actual tasks—though plots of event-related potentials in STM tasks give us an idea of what these data may look like ( Ruchkin et al. 2003 ). There no doubt is more missing from Figure 1 than is included (e.g., the role of subcortical structures such as the basal ganglia in the frontal/parietal mediated control, or the reciprocal cortical-thalamic circuits that shape the nature of the neocortical patterns). We nevertheless believe that the time course sketched in Figure 1 is useful for making clear many of the central properties that characterize the psychological and neural theory of human STM outlined above: ( a ) STM engages essentially all cortical areas—including medial temporal lobes—and does so from the earliest moments, though it engages these areas differentially at different functional stages. ( b ) STM reuses the same posterior cortical areas and representations that subserve perception, and active maintenance of these representations depends on these posterior areas receiving input from frontal-parietal circuits. ( c ) Focused items are distinguished both functionally and neurally by active firing patterns, and nonfocused memories depend on synaptic potentiation and thereby suffer from decay and retroactive interference. ( d ) Nonfocused memories are reinstantiated into active firing states via an associative retrieval process subject to proactive interference from similarly encoded patterns.

Postscript: Revisiting Complex Cognition

A major goal of this review has been to bring together psychological theorizing (the mind) and neuroscientific evidence (the brain) of STM. However, any celebration of this union is premature until we address this question: Can our account explain how the mind and brain accomplish the everyday tasks (e.g., completing a tax form) that opened this review? The recognition probe task used in our example and the other procedures discussed throughout the main text are considerably simpler than those everyday tasks. Is it plausible to believe that the system outlined here, particularly in light of its severely limited capacity, could support human cognition in the wild?

It is sobering to note that Broadbent (1993) and Newell (1973 , 1990) asked this question nearly two decades ago, and at that time they were considering models of STM with even larger capacities than the one advocated here. Even so, both observed that none of the extant computational models of complex cognitive tasks (e.g., the Newell & Simon 1972 models of problem solving) used contemporary psychological theories of STM. Instead, the complex-cognition models assumed much larger (in some cases, effectively unlimited) working memories. The functional viability of the STM theories of that time was thus never clearly demonstrated. Since then, estimates of STM capacity have only grown smaller, so the question, it would seem, has grown correspondingly more pressing.

Fortunately, cognitive modeling and cognitive theory have also developed over that time, and in ways that would have pleased both Broadbent and Newell. Importantly, many computational cognitive architectures now make assumptions about STM capacity that are congruent with the STM models discussed in this review. The most prominent example is ACT-R, a descendant of the early Newell production-system models. ACT-R continues to serve as the basis of computational models of problem solving (e.g., Anderson & Douglass 2001 ), sentence processing ( Lewis & Vasishth 2005 , Lewis et al. 2006 ), and complex interactive tasks ( Anderson et al. 2004 ). However, the current version of ACT-R has a focus-based structure with an effective capacity limit of four or fewer items ( Anderson et al. 2004 ).

Another important theoretical development is the long-term working memory approach of Ericsson & Kintsch (1995) . This approach describes how LTM, using the kind of fast-encoding and cue-based associative retrieval processes assumed here, can support a variety of complex cognitive tasks ranging from discourse comprehension to specialized expert performance. In both the modern approaches to computational architecture and long-term working memory, the power of cognition resides not in capacious short-term buffers but rather in the effective use of an associative LTM. A sharply limited focus of attention does not, after all, seem to pose insurmountable functional problems.

In summary, this review describes the still-developing convergence of computational models of complex cognition, neural network models of simple memory tasks, modern psychological studies of STM, and neural studies of memory in both humans and primates. The points of contact among these different methods of studying STM have multiplied over the past several years. As we have pointed out, significant and exciting challenges in furthering this integration lie ahead.

1 Another line of neural evidence about the separability of short- and long-term memory comes from electrophysiological studies of animals engaged in short-term memory tasks. We review this evidence and its interpretation in The Architecture of Unitary-Store Models section.

2 This carving up of STM processes is also consistent with recent approaches to individual differences in working memory, which characterize individual variation not in terms of variation in buffer capacity, but rather in variation in maintenance and retrieval processes ( Unsworth & Engle 2007 ).

3 The alternative to this strong claim is that memory items outside the focus might also be supported by residual active firing. The empirical results reviewed above indicating load-dependent posterior activation might lend support to this alternative if one assumes that the memory load in those experiments was not entirely held in the focus, and that these activations exclusively index firing associated with the memory load itself.

LITERATURE CITED

  • Altmann EM, Gray WD. Forgetting to remember: the functional relationship of decay and interference. Psychol. Sci. 2002;13(1):27–33.
  • Anderson JR. Retrieval of information from long-term memory. Science. 1983;220(4592):25–30.
  • Anderson JR, Bothell D, Byrne MD, Douglass S, Lebiere C, Qin Y. An integrated theory of mind. Psychol. Rev. 2004;111:1036–1060.
  • Anderson JR, Douglass S. Tower of Hanoi: evidence for the cost of goal retrieval. J. Exp. Psychol.: Learn. Mem. Cogn. 2001;27:1331–1346.
  • Anderson JR, Matessa M. A production system theory of serial memory. Psychol. Rev. 1997;104(4):728–748.
  • Anderson JR, Schooler LJ. Reflections of the environment in memory. Psychol. Sci. 1991;2(6):396–408.
  • Atkinson RC, Shiffrin RM. The control of short-term memory. Sci. Am. 1971;224:82–90.
  • Awh E, Barton B, Vogel EK. Visual working memory represents a fixed number of items regardless of complexity. Psychol. Sci. 2007;18(7):622–628.
  • Awh E, Jonides J. Overlapping mechanisms of attention and spatial working memory. Trends Cogn. Sci. 2001;5(3):119–126.
  • Awh E, Jonides J, Reuter-Lorenz PA. Rehearsal in spatial working memory. J. Exp. Psychol.: Hum. Percept. Perform. 1998;24:780–790.
  • Awh E, Jonides J, Smith EE, Buxton RB, Frank LR, et al. Rehearsal in spatial working memory: evidence from neuroimaging. Psychol. Sci. 1999;10(5):433–437.
  • Awh E, Jonides J, Smith EE, Schumacher EH, Koeppe RA, Katz S. Dissociation of storage and rehearsal in verbal working memory: evidence from PET. Psychol. Sci. 1996;7:25–31.
  • Baddeley AD. Working Memory. Oxford: Clarendon; 1986.
  • Baddeley AD. Working memory. Science. 1992;255:556–559.
  • Baddeley AD. The episodic buffer: a new component of working memory? Trends Cogn. Sci. 2000;4(11):417–423.
  • Baddeley AD. Working memory: looking back and looking forward. Nat. Rev. Neurosci. 2003;4(10):829–839.
  • Baddeley AD, Hitch G. Working memory. In: Bower GA, editor. Recent Advances in Learning and Motivation. Vol. 8. New York: Academic; 1974. pp. 47–90.
  • Baddeley AD, Thomson N, Buchanan M. Word length and structure of short-term memory. J. Verbal Learn. Verbal Behav. 1975;14(6):575–589.
  • Baddeley AD, Vallar G, Wilson BA. Sentence comprehension and phonological working memory: some neuropsychological evidence. In: Coltheart M, editor. Attention and Performance XII: The Psychology of Reading. London: Erlbaum; 1987. pp. 509–529.
  • Baddeley AD, Warrington EK. Amnesia and the distinction between long- and short-term memory. J. Verbal Learn. Verbal Behav. 1970;9:176–189.
  • Baddeley AD, Wilson BA. Prose recall and amnesia: implications for the structure of working memory. Neuropsychologia. 2002;40:1737–1743.
  • Badre D, Wagner AD. Frontal lobe mechanisms that resolve proactive interference. Cereb. Cortex. 2005;15:2003–2012.
  • Braver TS, Barch DM, Kelley WM, Buckner RL, Cohen NJ, et al. Direct comparison of prefrontal cortex regions engaged by working and long-term memory tasks. Neuroimage. 2001;14:48–59.
  • Broadbent D. Comparison with human experiments. In: Broadbent D, editor. The Simulation of Human Intelligence. Oxford: Blackwell Sci.; 1993. pp. 198–217.
  • Brooks LR. Spatial and verbal components of the act of recall. Can. J. Psychol. 1968;22:349–368.
  • Brown GDA, Preece T, Hulme C. Oscillator-based memory for serial order. Psychol. Rev. 2000;107(1):127–181.
  • Buckner RL, Koutstaal W, Schacter DL, Wagner AD, Rosen BR. Functional-anatomic study of episodic retrieval using fMRI: I. Retrieval effort versus retrieval success. NeuroImage. 1998;7(3):151–162.
  • Buffalo EA, Reber PJ, Squire LR. The human perirhinal cortex and recognition memory. Hippocampus. 1998;8:330–339.
  • Burgess N, Hitch GJ. Memory for serial order: a network model of the phonological loop and its timing. Psychol. Rev. 1999;106(3):551–581.
  • Burgess N, Hitch GJ. Computational models of working memory: putting long-term memory into context. Trends Cogn. Sci. 2005;9:535–541.
  • Burgess N, Hitch GJ. A revised model of short-term memory and long-term learning of verbal sequences. J. Mem. Lang. 2006;55:627–652.
  • Cabeza R, Dolcos F, Graham R, Nyberg L. Similarities and differences in the neural correlates of episodic memory retrieval and working memory. Neuroimage. 2002;16:317–330.
  • Cabeza R, Nyberg L. Imaging cognition II: an empirical review of 275 PET and fMRI studies. J. Cogn. Neurosci. 2000;9:254–265.
  • Cave CB, Squire LR. Intact verbal and nonverbal short-term memory following damage to the human hippocampus. Hippocampus. 1992;2:151–163.
  • Cowan N. Evolving conceptions of memory storage, selective attention, and their mutual constraints within the human information processing system. Psychol. Bull. 1988;104:163–191.
  • Cowan N. Attention and Memory: An Integrated Framework. New York: Oxford Univ. Press; 1995.
  • Cowan N. The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behav. Brain Sci. 2000;24:87–185.
  • Crowder R. Principles of Learning and Memory. Hillsdale, NJ: Erlbaum; 1976.
  • Damasio AR. Time-locked multiregional retroactivation: a system-level proposal for the neuronal substrates of recall and recognition. Cognition. 1989;33:25–62.
  • Darwin CJ, Turvey MT, Crowder RG. Auditory analogue of Sperling partial report procedure—evidence for brief auditory storage. Cogn. Psychol. 1972;3(2):255–267.
  • Deiber MP, Missonnier P, Bertrand O, Gold G, Fazio-Costa L, et al. Distinction between perceptual and attentional processing in working memory tasks: a study of phase-locked and induced oscillatory brain dynamics. J. Cogn. Neurosci. 2007;19(1):158–172.
  • den Heyer K, Barrett B. Selective loss of visual and verbal information in STM by means of visual and verbal interpolated tasks. Psychon. Sci. 1971;25:100–102.
  • D’Esposito M, Postle BR. The dependence of span and delayed-response performance on prefrontal cortex. Neuropsychologia. 1999;37(11):1303–1315.
  • D’Esposito M, Postle BR. Neural correlates of processes contributing to working memory function: evidence from neuropsychological and pharmacological studies. In: Monsell S, Driver J, editors. Control of Cognitive Processes. Cambridge, MA: MIT Press; 2000. pp. 580–602.
  • D’Esposito M, Postle BR, Jonides J, Smith EE, Lease J. The neural substrate and temporal dynamics of interference effects in working memory as revealed by event-related fMRI. Proc. Natl. Acad. Sci. USA. 1999;96:7514–7519.
  • Eng HY, Chen DY, Jiang YH. Visual working memory for simple and complex visual stimuli. Psychon. Bull. Rev. 2005;12:1127–1133.
  • Ericsson KA, Kintsch W. Long-term working memory. Psychol. Rev. 1995;102:211–245.
  • Fletcher PC, Henson RNA. Frontal lobes and human memory—insights from functional neuroimaging. Brain. 2001;124:849–881.
  • Frank MJ, Loughry B, O’Reilly RC. Interactions between the frontal cortex and basal ganglia in working memory: a computational model. Cogn. Affect. Behav. Neurosci. 2001;1:137–160.
  • Funahashi S, Bruce CJ, Goldman-Rakic PS. Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. J. Neurophysiol. 1989;61:331–349.
  • Fuster JM. Thoughts from the long-term memory chair. Behav. Brain Sci. 2003;26:734–735.
  • Fuster JM. Unit activity in prefrontal cortex during delayed response performance: neuronal correlates of transient memory. J. Neurophysiol. 1973;36:61–78.
  • Fuster JM. Memory in the Cerebral Cortex. Cambridge, MA: MIT Press; 1995.
  • Gabrieli JDE, Brewer JB, Desmond JE, Glover GH. Separate neural bases of two fundamental memory processes in the human medial temporal lobe. Science. 1997;276:264–266.
  • Garavan H. Serial attention within working memory. Mem. Cogn. 1998;26:263–276.
  • Gardiner JM, Craik FIM, Birtwist J. Retrieval cues and release from proactive inhibition. J. Verbal Learn. Verbal Behav. 1972;11(6):778–783.
  • Gillund G, Shiffrin RM. A retrieval model for both recognition and recall. Psychol. Rev. 1984;91(1):1–67.
  • Goldman-Rakic PS. Circuitry of primate prefrontal cortex and regulation of behavior by representational memory. In: Plum F, editor. Handbook of Physiology: The Nervous System. Vol. 5. Bethesda, MD: Am. Physiol. Soc.; 1987. pp. 373–417.
  • Grossberg S. From working memory to long-term memory and back: linked but distinct. Behav. Brain Sci. 2003;26:737–738.
  • Hampson M, Driesen NR, Skudlarski P, Gore JC, Constable RT. Brain connectivity related to working memory performance. J. Neurosci. 2006;26(51):13338–13343.
  • Hanley JR, Young AW, Pearson NA. Impairment of the visuo-spatial sketch pad. Q. J. Exp. Psychol. Hum. Exp. Psychol. 1991;43:101–125.
  • Hannula DE, Tranel D, Cohen NJ. The long and the short of it: relational memory impairments in amnesia, even at short lags. J. Neurosci. 2006;26(32):8352–8359.
  • Harris JD. The decline of pitch discrimination with time. J. Exp. Psychol. 1952;43(2):96–99.
  • Hasher L, Zacks RT. Working memory, comprehension, and aging: a review and a new view. In: Bower GH, editor. The Psychology of Learning and Motivation. Vol. 22. New York: Academic; 1988. pp. 193–225.
  • Hebb DO. The Organization of Behavior. New York: Wiley; 1949.
  • Hockley WE. Analysis of response-time distributions in the study of cognitive-processes. J. Exp. Psychol.: Learn. Mem. Cogn. 1984;10(4):598–615.
  • Holdstock JS, Shaw C, Aggleton JP. The performance of amnesic subjects on tests of delayed matching-to-sample and delayed matching-to-position. Neuropsychologia. 1995;33:1583–1596.
  • Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA. 1982;79(8):2554–2558.
  • Jacobsen CF. The functions of the frontal association areas in monkeys. Comp. Psychol. Monogr. 1936;13:1–60.
  • James W. Principles of Psychology. New York: Henry Holt; 1890.
  • Jensen O. Maintenance of multiple working memory items by temporal segmentation. Neuroscience. 2006;139:237–249.
  • Jha AP, McCarthy G. The influence of memory load upon delay-interval activity in a working-memory task: an event-related functional MRI study. J. Cogn. Neurosci. 2000;12:90–105.
  • Johnson MK, Raye CL, Mitchell KJ, Greene EJ, Cunningham WA, Sanislow CA. Using fMRI to investigate a component process of reflection: prefrontal correlates of refreshing a just-activated representation. Cogn. Affect. Behav. Neurosci. 2005;5:339–361.
  • Jonides J, Lacey SC, Nee DE. Processes of working memory in mind and brain. Curr. Dir. Psychol. Sci. 2005;14:2–5.
  • Jonides J, Nee DE. Brain mechanisms of proactive interference in working memory. Neuroscience. 2006;139:181–193.
  • Jonides J, Smith EE, Koeppe RA, Awh E, Minoshima S, Mintun MA. Spatial working memory in humans as revealed by PET. Nature. 1993;363:623–625.
  • Jonides J, Smith EE, Marshuetz C, Koeppe RA, Reuter-Lorenz PA. Inhibition in verbal working memory revealed by brain activation. Proc. Natl. Acad. Sci. USA. 1998;95:8410–8413.
  • Keppel G, Underwood BJ. Proactive-inhibition in short-term retention of single items. J. Verbal Learn. Verbal Behav. 1962;1:153–161.
  • Lange EB, Oberauer K. Overwriting of phonemic features in serial recall. Memory. 2005;13:333–339.
  • Lewandowsky S, Duncan M, Brown GDA. Time does not cause forgetting in short-term serial recall. Psychon. Bull. Rev. 2004;11:771–790.
  • Lewis RL, Vasishth S. An activation-based theory of sentence processing as skilled memory retrieval. Cogn. Sci. 2005;29:375–419.
  • Lewis RL, Vasishth S, Van Dyke J. Computational principles of working memory in sentence comprehension. Trends Cogn. Sci. 2006;10:447–454.
  • Lisman JE, Idiart MAP. Storage of 7±2 short-term memories in oscillatory subcycles. Science. 1995;267:1512–1515.
  • Luck SJ, Vogel EK. The capacity of visual working memory for features and conjunctions. Nature. 1997;390:279–281.
  • Lustig C, Matell MS, Meck WH. Not “just” a coincidence: frontal-striatal interactions in working memory and interval timing. Memory. 2005;13:441–448.
  • Malmo RB. Interference factors in delayed response in monkeys after removal of frontal lobes. J. Neurophysiol. 1942;5:295–308.
  • Manoach DS, Greve DN, Lindgren KA, Dale AM. Identifying regional activity associated with temporally separated components of working memory using event-related functional MRI. NeuroImage. 2003;20(3):1670–1684.
  • Martin RC. Short-term memory and sentence processing: evidence from neuropsychology. Mem. Cogn. 1993; 21 :176–183. [ PubMed ] [ Google Scholar ]
  • McClelland JL, McNaughton BL, O’Reilly RC. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol. Rev. 1995; 102 :419–457. [ PubMed ] [ Google Scholar ]
  • McElree B. Attended and nonattended states in working memory: accessing categorized structures. J. Mem. Lang. 1998; 38 :225–252. [ Google Scholar ]
  • McElree B. Working memory and focal attention. J. Exp. Psychol.: Learn. Mem. Cogn. 2001; 27 :817–835. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • McElree B. Accessing recent events. Psychol. Learn. Motiv. 2006; 46 :155–200. [ Google Scholar ]
  • McElree B, Dosher BA. Serial position and set size in short-term memory: time course of recognition. J. Exp. Psychol.: Gen. 1989; 118 :346–373. [ Google Scholar ]
  • McGeoch J. Forgetting and the law of disuse. Psychol. Rev. 1932; 39 :352–370. [ Google Scholar ]
  • McKone E. Short-term implicit memory for words and non-words. J. Exp. Psychol.: Learn. Mem. Cogn. 1995; 21 (5):1108–1126. [ Google Scholar ]
  • McKone E. The decay of short-term implicit memory: unpacking lag. Mem. Cogn. 1998; 26 (6):1173–1186. [ PubMed ] [ Google Scholar ]
  • Meyer DE, Kieras DE. A computational theory of executive cognitive processes and multiple-task performance: 1. Basic mechanisms. Psychol. Rev. 1997; 104 (1):3–65. [ PubMed ] [ Google Scholar ]
  • Miller GA. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 1956; 63 :81–97. [ PubMed ] [ Google Scholar ]
  • Milner PM. Magical attention. Behav. Brain Sci. 2001; 24 (1):131. [ Google Scholar ]
  • Miyashita Y, Chang HS. Neuronal correlate of pictorial short-term memory in the primate temporal cortex. Nature. 1968; 331 :68–70. [ PubMed ] [ Google Scholar ]
  • Monsell S. Recency, immediate recognition memory, and reaction-time. Cogn. Psychol. 1978; 10 (4):465–501. [ Google Scholar ]
  • Murdock BB. A theory for the storage and retrieval of item and associative information. Psychol. Rev. 1982; 89 (6):609–626. [ PubMed ] [ Google Scholar ]
  • Nairne JS. Remembering over the short-term: the case against the standard model. Annu. Rev. Psychol. 2002; 53 :53–81. [ PubMed ] [ Google Scholar ]
  • Neath I, Nairne JS. Word-length effects in immediate memory: overwriting trace decay theory. Psychon. Bull. Rev. 1995; 2 :429–441. [ PubMed ] [ Google Scholar ]
  • Nelson JK, Reuter-Lorenz PA, Sylvester CYC, Jonides J, Smith EE. Dissociable neural mechanisms underlying response-based and familiarity-based conflict in working memory. Proc. Natl. Acad. Sci. USA. 2003; 100 :11171–11175. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Newell A. You can’t play 20 questions with nature and win: projective comments on the papers of this symposium. In: Chase WG, editor. Visual Information Processing; Academic; New York. 1973. pp. 283–310. [ Google Scholar ]
  • Newell A. Unified Theories of Cognition. Cambridge, MA: Harvard Univ. Press; 1990. [ Google Scholar ]
  • Newell A, Simon H. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall; 1972. [ Google Scholar ]
  • Nichols EA, Kao Y-C, Verfaellie M, Gabrieli JDE. Working memory and long-term memory for faces: evidence from fMRI and global amnesia for involvement of the medial temporal lobes. Hippocampus. 2006; 16 :604–616. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Oberauer K. Access to information in working memory: exploring the focus of attention. J. Exp. Psychol.: Learn. Mem. Cogn. 2002; 28 :411–421. [ PubMed ] [ Google Scholar ]
  • Oberauer K. Is the focus of attention in working memory expanded through practice? J. Exp. Psychol.: Learn. Mem. Cogn. 2006; 32 :197–214. [ PubMed ] [ Google Scholar ]
  • Oberauer K, Kliegl R. A formal model of capacity limits in working memory. J. Mem. Lang. 2006; 55 :601–626. [ Google Scholar ]
  • Olson IR, Moore KS, Stark M, Chatterjee A. Visual working memory is impaired when the medial temporal lobe is damaged. J. Cogn. Neurosci. 2006a; 18 :1087–1097. [ PubMed ] [ Google Scholar ]
  • Olson IR, Page K, Moore KS, Chatterjee A, Verfaellie M. Working memory for conjunctions relies on the medial temporal lobe. J. Neurosci. 2006b; 26 :4596–4601. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Owen AM, Sahakian BJ, Semple J, Polkey CE, Robbins TW. Visuo-spatial short-term recognition memory and learning after temporal lobe excisions, frontal lobe excisions or amygdala-hippocampectomy in man. Neuropsychologia. 1995; 33 :1–24. [ PubMed ] [ Google Scholar ]
  • Pashler H. Familiarity and visual change detection. Percept. Psychophys. 1988; 44 :369–378. [ PubMed ] [ Google Scholar ]
  • Pasternak T, Greenlee MW. Working memory in primate sensory systems. Nat. Rev. Neurosci. 2005; 6 :97–107. [ PubMed ] [ Google Scholar ]
  • Polk TA, Simen P, Lewis RL, Freedman E. A computational approach to control in complex cognition. Cogn. Brain Res. 2002; 15 (1):71–83. [ PubMed ] [ Google Scholar ]
  • Postle BR. Working memory as an emergent property of the mind and brain. Neuroscience. 2006; 139 :23–38. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Postle BR, D’Esposito M. “What”—then—“where” in visual working memory: an event-related, fMRI study. J. Cogn. Neurosci. 1999; 11 (6):585–597. [ PubMed ] [ Google Scholar ]
  • Postman L. Extra-experimental interference and retention of words. J. Exp. Psychol. 1961; 61 (2):97–110. [ PubMed ] [ Google Scholar ]
  • Prabhakaran V, Narayanan ZZ, Gabrieli JDE. Integration of diverse information in working memory within the frontal lobe. Nat. Neurosci. 2000; 3 :85–90. [ PubMed ] [ Google Scholar ]
  • Pylyshyn ZW. Some primitive mechanisms of spatial attention. Cognition. 1994; 50 :363–384. [ PubMed ] [ Google Scholar ]
  • Pylyshyn ZW, Burkell J, Fisher B, Sears C, Schmidt W, Trick L. Multiple parallel access in visual-attention. Can. J. Exp. Psychol. Rev. Can. Psychol. Exp. 1994; 48 (2):260–283. [ PubMed ] [ Google Scholar ]
  • Raichle ME, MacLeod AM, Snyder AZ, Powers WJ, Gusnard DA, Shulman GL. A default mode of brain function. Proc. Natl. Acad. Sci. USA. 2001; 98 :676–682. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Ranganath C. Working memory for visual objects: complementary roles of inferior temporal, medial temporal, and prefrontal cortex. Neuroscience. 2006; 139 :277–289. [ PubMed ] [ Google Scholar ]
  • Ranganath C, Blumenfeld RS. Doubts about double dissociations between short- and long-term memory. Trends Cogn. Sci. 2005; 9 :374–380. [ PubMed ] [ Google Scholar ]
  • Ranganath C, DeGutis J, D’Esposito M. Category-specific modulation of inferior temporal activity during working memory encoding and maintenance. Cogn. Brain Res. 2004; 20 :37–45. [ PubMed ] [ Google Scholar ]
  • Ranganath C, D’Esposito M. Medial temporal lobe activity associated with active maintenance of novel information. Neuron. 2001; 31 :865–873. [ PubMed ] [ Google Scholar ]
  • Ranganath C, D’Esposito M. Directing the mind’s eye: prefrontal, inferior and medial temporal mechanisms for visual working memory. Curr. Opin. Neurobiol. 2005; 15 :175–182. [ PubMed ] [ Google Scholar ]
  • Ranganath C, Johnson MK, D’Esposito M. Prefrontal activity associated with working memory and episodic long-term memory. Neuropsychologia. 2003; 41 (3):378–389. [ PubMed ] [ Google Scholar ]
  • Renart A, Parga N, Rolls ET. Backward projections in the cerebral cortex: implications for memory storage. Neural Comput. 1999; 11 (6):1349–1388. [ PubMed ] [ Google Scholar ]
  • Repov G, Baddeley AD. The multi-component model of working memory: explorations in experimental cognitive psychology. Neuroscience. 2006; 139 :5–21. [ PubMed ] [ Google Scholar ]
  • Reuter-Lorenz PA, Jonides J. The executive is central to working memory: insights from age performance and task variations. In: Conway AR, Jarrold C, Kane MJ, Miyake A, Towse JN, editors. Variations in Working Memory. London/New York: Oxford Univ. Press: 2007. pp. 250–270. [ Google Scholar ]
  • Reuter-Lorenz PA, Jonides J, Smith EE, Hartley A, Miller A, et al. Age differences in the frontal lateralization of verbal and spatial working memory revealed by PET. J. Cogn. Neurosci. 2000; 12 :174–187. [ PubMed ] [ Google Scholar ]
  • Roediger HL, Knight JL, Kantowitz BH. Inferring decay in short-term-memory—the issue of capacity. Mem. Cogn. 1977; 5 (2):167–176. [ PubMed ] [ Google Scholar ]
  • Rolls ET. Memory systems in the brain. Annu. Rev. Psychol. 2000; 51 :599–630. [ PubMed ] [ Google Scholar ]
  • Rougier NP, Noelle DC, Braver TS, Cohen JD, O’Reilly RC. Prefrontal cortex and flexible cognitive control: rules without symbols. Proc. Natl. Acad. Sci. USA. 2005; 102 (20):7338–7343. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Ruchkin DS, Grafman J, Cameron K, Berndt RS. Working memory retention systems: a state of activated long-term memory. Behav. Brain Sci. 2003; 26 :709–777. [ PubMed ] [ Google Scholar ]
  • Sakai K. Reactivation of memory: role of medial temporal lobe and prefrontal cortex. Rev. Neurosci. 2003; 14 (3):241–252. [ PubMed ] [ Google Scholar ]
  • Schubert T, Frensch PA. How unitary is the capacity-limited attentional focus? Behav. Brain Sci. 2001; 24 (1):146. [ Google Scholar ]
  • Scoville WB, Milner B. Loss of recent memory after bilateral hippocampal lesions. J. Neurol. Neurosurg. Psychiatry. 1957; 20 :11–21. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Shallice T, Warrington EK. Independent functioning of verbal memory stores: a neuropsychological study. Q. J. Exp. Psychol. 1970; 22 :261–273. [ PubMed ] [ Google Scholar ]
  • Smith EE, Jonides J. Working memory: a view from neuroimaging. Cogn. Psychol. 1997; 33 :5–42. [ PubMed ] [ Google Scholar ]
  • Smith EE, Jonides J. Neuroscience—storage and executive processes in the frontal lobes. Science. 1999; 283 :1657–1661. [ PubMed ] [ Google Scholar ]
  • Smith EE, Jonides J, Koeppe RA, Awh E, Schumacher EH, Minoshima S. Spatial vs object working-memory: PET investigations. J. Cogn. Neurosci. 1995; 7 :337–356. [ PubMed ] [ Google Scholar ]
  • Sperling G. The information available in brief visual presentations. Psychol. Monogr. 1960; 74 Whole No. 498. [ Google Scholar ]
  • Squire L. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychol. Rev. 1992; 99 :195–231. [ PubMed ] [ Google Scholar ]
  • Sternberg S. High-speed scanning in human memory. Science. 1966; 153 :652–654. [ PubMed ] [ Google Scholar ]
  • Talmi D, Grady CL, Goshen-Gottstein Y, Moscovitch M. Neuroimaging the serial position curve. Psychol. Sci. 2005; 16 :716–723. [ PubMed ] [ Google Scholar ]
  • Thompson-Schill SL, D’Esposito M, Aguirre GK, Farah MJ. Role of left inferior prefrontal cortex in retrieval of semantic knowledge: a reevaluation. Proc. Natl. Acad. Sci. USA. 1997; 94 :14792–14797. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Thompson-Schill SL, Jonides J, Marshuetz C, Smith EE, D’Esposito M, et al. Effects of frontal lobe damage on interference effects in working memory. J. Cogn. Affect. Behav. Neurosci. 2002; 2 :109–120. [ PubMed ] [ Google Scholar ]
  • Todd JJ, Marois R. Capacity limit of visual short-term memory in human posterior parietal cortex. Nature. 2004; 428 (6984):751–754. [ PubMed ] [ Google Scholar ]
  • Todd JJ, Marois R. Posterior parietal cortex activity predicts individual differences in visual short-term memory capacity. Cogn. Affect. Behav. Neurosci. 2005; 5 :144–155. [ PubMed ] [ Google Scholar ]
  • Trick LM, Pylyshyn ZW. What enumeration studies can show us about spatial attention—evidence for limited capacity preattentive processing. J. Exp. Psychol.: Hum. Percept. Perform. 1993; 19 (2):331–351. [ PubMed ] [ Google Scholar ]
  • Ungerleider LG, Haxby JV. “What” and “where” in the human brain. Curr. Opin. Neurobiol. 1994; 4 :157–165. [ PubMed ] [ Google Scholar ]
  • Unsworth N, Engle RW. The nature of individual differences in working memory capacity: active maintenance in primary memory and controlled search from secondary memory. Psychol. Rev. 2007; 114 :104–132. [ PubMed ] [ Google Scholar ]
  • Vallar G, Baddeley AD. Fractionation of working memory: neuropsychological evidence for a phonological short-term store. J. Verbal Learn. Verbal Behav. 1984; 23 :151–161. [ Google Scholar ]
  • Vallar G, Papagno C. Neuropsychological impairments of verbal short-term memory. In: Baddeley AD, Kopelman MD, Wilson BA, editors. The Handbook of Memory Disorders. 2nd ed. Chichester, UK: Wiley; 2002. pp. 249–270. [ Google Scholar ]
  • Verhaeghen P, Basak C. Aging and switching of the focus of attention in working memory: results from a modified N-Back task. Q. J. Exp. Psychol. A. 2007 In press. [ PubMed ] [ Google Scholar ]
  • Verhaeghen P, Cerella J, Basak C. A working memory workout: how to expand the focus of serial attention from one to four items in 10 hours or less. J. Exp. Psychol.: Learn. Mem. Cogn. 2004; 30 :1322–1337. [ PubMed ] [ Google Scholar ]
  • Vogel EK, Machizawa MG. Neural activity predicts individual differences in visual working memory capacity. Nature. 2004; 426 :748–751. [ PubMed ] [ Google Scholar ]
  • Vogel EK, Woodman GF, Luck SJ. The time course of consolidation in visual working memory. J. Exp. Psychol.: Hum. Percept. Perform. 2006; 32 :1436–1451. [ PubMed ] [ Google Scholar ]
  • Wager TD, Smith EE. Neuroimaging studies of working memory: a meta-analysis. Neuroimage. 2003; 3 :255–274. [ PubMed ] [ Google Scholar ]
  • Warrington EK, Shallice T. The selective impairment of auditory verbal short-term memory. Brain. 1969; 92 :885–896. [ PubMed ] [ Google Scholar ]
  • Waugh NC, Norman DA. Primary memory. Psychol. Rev. 1965; 72 :89–104. [ PubMed ] [ Google Scholar ]
  • Wheeler ME, Peterson SE, Buckner RL. Memory’s echo: vivid remembering reactivates sensory-specific cortex. Proc. Natl. Acad. Sci. USA. 2000; 97 (20):11125–11129. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Wheeler ME, Shulman GL, Buckner RL, Miezin FM, Velanova K, Petersen SE. Evidence for separate perceptual reactivation and search processes during remembering. Cereb. Cortex. 2006; 16 (7):949–959. [ PubMed ] [ Google Scholar ]
  • Wickens DD. Encoding categories of words—empirical approach to meaning. Psychol. Rev. 1970; 77 :1–15. [ Google Scholar ]
  • Wilken P, Ma WJ. A detection theory account of change detection. J. Vis. 2004; 4 :1120–1135. [ PubMed ] [ Google Scholar ]
  • Wilson FAW, O’Scalaidhe SP, Goldman-Rakic PS. Dissociation of object and spatial processing domains in primate prefrontal cortex. Science. 1993; 260 :1955–1958. [ PubMed ] [ Google Scholar ]
  • Wixted JT. The psychology and neuroscience of forgetting. Annu. Rev. Psychol. 2004; 55 :235–269. [ PubMed ] [ Google Scholar ]
  • Woodman GF, Vogel EK, Luck SJ. Attention is not unitary. Behav. Brain Sci. 2001; 24 (1):153. [ Google Scholar ]
  • Xu YD, Chun MM. Dissociable neural mechanisms supporting visual short-term memory for objects. Nature. 2006; 440 :91–95. [ PubMed ] [ Google Scholar ]
  • Yantis S, Serences JT. Cortical mechanisms of space-based and object-based attentional control. Curr. Opin. Neurobiol. 2003; 13 :187–193. [ PubMed ] [ Google Scholar ]
  • Zhang D, Zhang X, Sun X, Li Z, Wang Z, et al. Cross-modal temporal order memory for auditory digits and visual locations: an fMRI study. Hum. Brain Mapp. 2004; 22 :280–289. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Zucker RS, Regehr WG. Short-term synaptic plasticity. Annu. Rev. Physiol. 2002; 64 :355–405. [ PubMed ] [ Google Scholar ]
