A connectionist architecture for view-independent grip-aperture computation.

AI Article Synopsis

  • This paper discusses a model for extracting view-invariant visual features essential for recognizing actions directed at objects, specifically focusing on reach-to-grasp motions.
  • It introduces NeGOI, a neural network designed to measure grip aperture independent of the viewer's perspective, utilizing a small number of predefined hand shapes to assess hand movement.
  • The NeGOI architecture aligns with existing models of the brain's ventral visual stream and aims to simplify the identification of key visual features for object interactions, making it relevant for understanding mirror neuron systems.

Article Abstract

This paper addresses the problem of extracting view-invariant visual features for the recognition of object-directed actions and introduces a computational model of how these visual features are processed in the brain. In particular, in the test-bed setting of reach-to-grasp actions, grip aperture is identified as a good candidate for inclusion in a parsimonious set of high-level hand features describing overall hand movement during reach-to-grasp actions. The computational model NeGOI (neural network architecture for measuring grip aperture in an observer-independent way) for extracting grip aperture in a view-independent fashion was developed on the basis of functional hypotheses about cortical areas that are involved in visual processing. An assumption built into NeGOI is that grip aperture can be measured from the superposition of a small number of prototypical hand shapes corresponding to predefined grip-aperture sizes. The key idea underlying the NeGOI model is to introduce view-independent units (VIP units) that are selective for prototypical hand shapes, and to integrate the output of VIP units in order to compute grip aperture. The distinguishing traits of the NeGOI architecture are discussed together with results of tests concerning its view-independence and grip-aperture recognition properties. The overall functional organization of the NeGOI model is shown to be coherent with current functional models of the ventral visual stream, up to and including temporal area STS. Finally, the functional role of the NeGOI model is examined from the perspective of a biologically plausible architecture which provides a parsimonious set of high-level and view-independent visual features as input to mirror systems.


Source
http://dx.doi.org/10.1016/j.brainres.2008.04.076

Publication Analysis

Top Keywords

grip aperture (20)
visual features (12)
negoi model (12)
computational model (8)
reach-to-grasp actions (8)
parsimonious set (8)
prototypical hand (8)
hand shapes (8)
vip units (8)
negoi (6)
