2. THE SMORG NEUROPHYSIOLOGY LABORATORY

Introduction

        The SensoriMotor Research Group (SMoRG) was founded at Arizona State University in

2006 to investigate sensorimotor learning and representations in the nervous system, as well as

the neural mechanisms that enable fine motor skills. At its inception, total SMoRG assets

included people, ideas and a profoundly empty laboratory space in which to combine them to

produce meaningful science. This chapter will describe the process of developing the neural

recording laboratory, where the experimental work of this manuscript was accomplished. A

description of this work is fitting because it featured significant technical accomplishments,

produced a novel experimental facility and required a sustained effort of more than two years to

complete. The overall goal was clear, even if the path to achieve it was not: develop an

experimental facility that included a robot arm, 3D motion capture, virtual reality simulation, a

cortical neural recording system and custom software to integrate it all.

Robot Arm

        Installation. A six-axis industrial robot (model VS-6556G, Denso Robotics, Long Beach,

CA, USA) was acquired for object presentation during behavioral experimental tasks (Figure

8(a)). The very first task required fabrication of a platform on which to mount the robot in a

secure yet mobile way. A space frame cube was assembled from extruded aluminum segments

(1530 T-slotted series, 80/20® Inc., Columbia City, IN, USA) with bolted corner gussets for

maximum structural integrity. The top and bottom faces of the cube were covered with a single

piece of 0.25 in. thick plate steel to which the base of the robot was attached with stainless steel

bolts. The entire robot platform was supported by swivel joint mounting feet at the corners and

rested on a 0.5 in. thick rubber pad to dampen the vibration and inertial loads resulting from robot

movement.

Figure 8. Robot and Associated Hardware. A. The 6-axis industrial robot was mounted on a

sturdy platform and controlled using custom software. Dedicated signal and air channels routed

through the robot enabled feedback from a 6-DOF F/T sensor, object touch sensors and control of

a pneumatic tool changer. B. The robot end effector. The F/T sensor (b2) was mounted directly to

the robot end effector (b1). The master plate of the tool changer (b3) was mounted to the F/T

sensor using a custom interface plate. Air lines originating from ports on the robot controlled the

locking mechanism of the master plate. C. Grasp object assembly. The object was mounted to a

six-inch standoff post that mounted to a tool plate. Touch sensors were mounted flush with the

object surface and wires were routed to the object interior for protection. Power and signal lines

were routed through a pass-through connector (not visible), through the robot interior to an

external connector on the robot base. Small felt discs on each sensor were used for grasp training.

The large flange extending from the bottom of the object was a temporary training aide to guide

the subject’s hand to the correct location.

        Programming. The robot included a tethered teach pendant interface device through

which basic operation of the robot could be accomplished either through direct control of

a specific axis, or by executing a script written in the PAL programming language. The

behavioral task planned for our experiments required real-time programmatic control of robot

actions, which necessitated the development of custom software routines using a software development kit

(SDK) provided by the manufacturer (ORiN-II, Denso Robotics). The routines implemented basic

movement commands to pre-defined poses in the working space. Pose coordinates (position,

rotation, pose type) were determined by manually driving the robot to a desired pose using the

teach pendant, then using motor encoder data to read back the actual coordinates. Programmatic

control included the ability to select the desired pose, speed, acceleration and other secondary

movement parameters. For selected operations involving a stereotyped sequence of basic

movements (retrieving or replacing grasp objects), individual commands were grouped into

compound movements to simplify user programming and operation.
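
        To illustrate how the pre-defined poses and compound movements were organized, the following sketch shows one plausible arrangement in Python. The actual routines were C++ functions built on the ORiN-II SDK and called from LabVIEW®; all names, coordinates and parameter values below are illustrative assumptions, not the laboratory's code.

# Hypothetical sketch of a pose table and a compound movement; the real
# implementation was C++ on the ORiN-II SDK, so every name here is assumed.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # position (mm)
    y: float
    z: float
    rx: float       # rotation (deg)
    ry: float
    rz: float
    fig: int        # pose type / arm configuration

# Poses recorded by driving the robot with the teach pendant and reading
# back the motor-encoder coordinates (values below are placeholders).
POSES = {
    "home":         Pose(400, 0, 500, 180, 0, 180, 5),
    "holder_small": Pose(250, -300, 200, 180, 0, 90, 5),
    "present":      Pose(550, 0, 300, 180, 0, 180, 5),
}

def move_to(pose_name, speed=20, accel=20):
    """Basic movement: would issue a single SDK move command to one pose."""
    p = POSES[pose_name]
    print(f"MOVE {pose_name}: ({p.x}, {p.y}, {p.z}) speed={speed}% accel={accel}%")

def retrieve_object(holder_pose="holder_small"):
    """Compound movement: a stereotyped sequence grouped as one operation."""
    move_to("home")
    move_to(holder_pose, speed=10)   # slow approach to the tool holder
    # ... lock the tool changer, lift clear of the holder ...
    move_to("present")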

        The custom software routines were developed in the C++ programming language and

compiled into a library of functions accessed by code interface modules in the LabVIEW®

graphical programming environment (National Instruments Corporation, Austin, TX, USA). An

intuitive graphical user interface (GUI) was developed in LabVIEW® allowing the user to easily

operate the robot from a computer connected to the robot controller through the local network.

        Safety Measures. Errors in robot operation were capable of causing considerable damage

to the experimental setup, including the robot itself. To mitigate this possibility, robot control

programs developed in LabVIEW® actively monitored force and torque data acquired from a 6-

axis force/torque (F/T) sensor (Mini85, ATI Industrial Automation, Inc., Apex, NC, USA)

mounted on the robot end effector (Figure 8(b)). Maximum force and torque limits for each

movement were specified, tailored to purely inertial loads during movement or to direct loading

during object retrieval, replacement and behavioral manipulation. The robot was immediately

halted if these limits were exceeded. The F/T sensor also provided the capability of

monitoring kinetics of object manipulation for scientific analysis.
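
        The monitoring logic can be summarized with a short sketch. The actual watchdog ran in LabVIEW®; the limits, sample format and function names below are assumptions chosen only to show the idea of halting the robot the moment a limit is exceeded.

# Illustrative force/torque watchdog analogous to the LabVIEW® monitor.
# Limits and helper functions are hypothetical placeholders.
import math

FORCE_LIMIT_N = 20.0      # example per-movement force limit, not the real value
TORQUE_LIMIT_NM = 2.0     # example torque limit

def within_limits(fx, fy, fz, tx, ty, tz):
    force = math.sqrt(fx**2 + fy**2 + fz**2)
    torque = math.sqrt(tx**2 + ty**2 + tz**2)
    return force <= FORCE_LIMIT_N and torque <= TORQUE_LIMIT_NM

def monitor_step(read_ft_sample, halt_robot):
    """Run once per control cycle; halt immediately if limits are exceeded."""
    sample = read_ft_sample()          # -> (fx, fy, fz, tx, ty, tz)
    if not within_limits(*sample):
        halt_robot()
        return False
    return True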

        Tool Changer. A pneumatic tool changer (QC-11, ATI Industrial Automation Inc.) was

the final element of the overall robot system (Figure 8(b)). This enabled the robot to retrieve

presentation objects from the tool holder mounted to the side of the robot platform. A master

plate was mounted directly to the force/torque sensor via a custom interface plate and connected

to compressed air lines, which operated the locking mechanism. The air lines connected directly

to dedicated air channels routed through the interior of the robot; these channels were supplied by a gas

cylinder mounted nearby. Internal solenoid valves in the robot were controlled programmatically

(via LabVIEW®) to operate the tool changer during object retrieval and replacement. A tool plate

was attached to each grasp object to interface with the master plate. Each tool plate was fitted

with four mounting posts that aligned the tool in the object holder for reliable and repeatable

object retrieval.

Grasp Objects

        Object Design. Grasp objects were designed to elicit a variety of hand postures from

the experimental subject in order to investigate the sensory feedback resulting from each. Initially,

up to seven different objects were envisioned including simple polygonal shapes (cylinder,

rectangular polygon, convex polygon and concave polygon) as well as objects requiring specific

manipulation (e.g., squeezing, pulling, etc.) for successful task completion. The behavioral task

used for the research described in this manuscript required just two objects: small and large

versions of a modified cylinder design. An early version of the small object used for training is

shown in Figure 8(c).

        Initially, grasp objects were machined out of solid polymer materials such as

polytetrafluoroethylene (Teflon®, DuPont) or polyacetal (Delrin®, DuPont). However, fabrication

quickly shifted to stereolithography (rapid prototyping) techniques to speed production and

reduce cost during numerous design iterations. The resulting prototype objects proved to be

sufficiently robust to withstand the rigors of repeated use. The modified cylinder design was

developed primarily in response to the need to register precise finger placement during grasping

using surface mounted resistive touch sensors (TouchMini v1.2, Infusion Systems, Montreal,

Quebec, Canada). Simple cylindrical designs could not balance the size of the object (cylinder

diameter, which drove hand aperture) with the need for a relatively planar surface on which to

attach the touch sensors. Mounting the flexible sensors on a curved surface would have

introduced an undesired bias into the output, which was modulated by deformation or bending of

the sensor. The solution was to essentially unfold the surface of a cylinder into an extended

surface whose center portion was curved to accept the palm of the hand, while the peripheral

portions merged into a relatively flat surface. These complex shapes were perfectly suited to the

stereolithography process, and had the added benefit of opening up additional space in the interior

of the object that was used to route and protect delicate electrical connections from wear and tear.

        Touch Sensors. Thin (0.04 in, 1 mm), circular (∅0.75 in, 19 mm) resistive touch sensors

were glued directly to the outer surface of the object in shallow indentations that perfectly

matched the thickness and diameter of the sensor. This prevented the monkeys from picking at

the edges since the surface appeared to be uniform except for a slight change in texture. At the

center of each indentation was a deeper well that permitted further indentation of the flexible

sensor, which increased the magnitude and reliability of the output in comparison to mounting on

a flat surface. Wires were immediately routed inside of the object for protection. Sensors were

placed at locations where the distal phalange of the thumb, index and middle fingers contacted the

object surface when a prototype version was pressed into the hand of the first monkey to be

trained in the behavioral task (monkey F). Electrical connections were routed to a 10-pin pass-

through connector on the tool plate that mated with a corresponding connector

when an object was retrieved by the tool changer. Signals were routed through dedicated lines

inside the robot and emerged at a master connector on the robot base. From here, the signals were

routed to the behavioral control software (LabVIEW®) and actively monitored to indicate

successful object grasping.

        Grasp Training. The F/T sensor and touch sensors were excellent tools for training

monkeys to grasp the objects in a specific and repeatable way. The desired interaction was a

precision grip in which the distal phalange of the thumb, index and middle digits contacted the

object at the location of the sensors and maintained simultaneous supra-threshold contact for at

least 250 ms.
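
        The grasp criterion can be expressed compactly in code. The sketch below is a Python rendering of the rule stated above (all three sensors above threshold simultaneously for at least 250 ms); the threshold value and sampling interval are assumptions, and the actual detection ran in LabVIEW®.

# Sketch of the grasp criterion: thumb, index and middle sensors must all be
# above threshold at the same time for at least 250 ms. Values are illustrative.
GRASP_HOLD_MS = 250
CONTACT_THRESHOLD = 0.5     # normalized sensor reading (assumed)

def grasp_detected(samples, dt_ms=1.0):
    """samples: iterable of (thumb, index, middle) readings taken every dt_ms."""
    held = 0.0
    for thumb, index, middle in samples:
        if min(thumb, index, middle) >= CONTACT_THRESHOLD:
            held += dt_ms
            if held >= GRASP_HOLD_MS:
                return True
        else:
            held = 0.0      # contact broken: restart the hold timer
    return False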

        Basic Interaction. The first training stage was to establish the connection between the

object and reward. Touch sensor feedback was not used during this stage. Instead, feedback from

the F/T sensor was used to register physical interaction with an object presented directly ahead of

the monkey. Any contact with the object was immediately rewarded with several drops of juice.

Initially, these interactions often involved slapping or scratching the object. This behavior was

steadily eliminated by withholding the reward (and playing an audible cue) when such actions resulted in

excessive force or torque levels. The basic interaction training stage was complete when the

monkey had learned to consistently place its hand on the object without exceeding force and

torque thresholds.

        Fine Tuning. This stage involved training the monkey to place the thumb, index and

middle digits directly on the touch sensors. F/T feedback was not used to register successful

interaction; rather, it was used only to detect excessive force applied to the object. In this case, the audible

cue was played, the object was withdrawn from the workspace and no reward was given. Small

felt discs approximately 2 mm in height were attached to each touch sensor to attract the

monkey’s attention during haptic exploration of the object. Initially, brief (10 ms) contact with

any of the three sensors was sufficient to earn a juice reward. Next, brief simultaneous contact

with any two sensors was sufficient; then, finally, contact with all three sensors was required to

earn the juice reward. The final step was to steadily increase the required grasp duration to 250

ms.

Motion Capture

        Our experiments required that the 3D position and orientation of the subject’s hand be

captured at all times. This information served two primary functions. First, it was used to animate

the motion of hand and object models in a virtual reality simulation in which subjects would

eventually be trained to carry out the behavioral task. Second, the data were used to reconstruct

the kinematics of hand movement during the task, which could be correlated with simultaneously

recorded neural activity.

        Kinematic analysis of movement is a technically challenging undertaking,

especially for the hand and even more so for the child-sized hand of the juvenile macaques used

in this research. Detailed reconstructions require tracking the orientation of individual digit

segments (implying two markers per segment) with millimeter precision. Markers attached to the

segments are often occluded by the movement of adjacent digits or by intervening experimental

apparatus. Active markers require power and signal connections, which quickly becomes a

logistical challenge of routing wires and connections while minimizing the impact to the

underlying behavioral task.

        Approach #1: Passive Marker Motion Capture. The first approach was to implement a

camera-based motion capture system using passive detection markers (Vitrius, Tenetec

Innovations AG, Zürich, Switzerland). In theory, this approach offered several advantages for

mitigating the challenges of motion capture described above. First, passive markers required no

power or signal lines, thus eliminating a significant degree of logistical complexity and increasing

reliability. Second, the Vitrius system was predicated on a unique approach that promised to

dramatically reduce the number of cameras and markers required for high-precision motion

capture: 3D position determination with just one camera and one marker. All other known

camera-based motion capture systems ultimately derive 3D position from an estimation of the

parallax between two distributed observations of a point in space. By contrast, the Vitrius system

calculated position by estimating the linear distance between the camera focal plane and a flat,

square marker of known size. The relationship was simple: the smaller the marker’s focal plane

representation (pixel area), the greater its distance along a ray extending from the center of the

detected area. The trajectory of the ray itself was determined by the optical properties of the lens

and the orientation of the camera. A unique pixelated pattern on each marker was used for

identification. An example of a Vitrius marker is shown in Figure 9(d), where several individual

markers have been attached to the faces of a cube-shaped base.
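
        The size-to-distance relationship can be reconstructed, at least approximately, with a pinhole-camera model: a square marker of side L that appears with pixel area A has an apparent side of √A pixels, so its distance is roughly z ≈ f·L/√A for a focal length f expressed in pixels. The vendor's actual algorithm was not documented, so the sketch below is only a plausible reconstruction with placeholder values.

# Plausible single-camera range estimate under a pinhole model; the vendor's
# actual computation is unknown and the numbers here are illustrative.
import math

def marker_distance(pixel_area, marker_side_mm, focal_length_px):
    apparent_side_px = math.sqrt(pixel_area)
    return focal_length_px * marker_side_mm / apparent_side_px   # distance in mm

# Example: a 20 mm marker imaged as 400 px^2 by a 1000 px focal-length camera
# yields 1000 * 20 / 20 = 1000 mm. Viewing the marker obliquely shrinks the
# detected area and therefore inflates this estimate, as noted below.
print(marker_distance(400.0, 20.0, 1000.0))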

        Numerous shortcomings of the Vitrius system quickly became apparent. At best, position

accuracy was 10-20 mm, an order of magnitude worse than the required precision. Camera

resolution was insufficient to adequately capture the small (3 mm²) markers required for the

monkey hand. When markers were viewed by the cameras at any angle other than perpendicular

to the focal plane, the apparent decrease in detected marker area resulted in an accompanying

over-estimation of marker distance. The system had no integrated calibration procedure, requiring

the user to manually measure the position and orientation of each camera. Finally, the Vitrius

software was poorly designed and implemented, resulting in frequent system crashes and loss of

data.

        Approach #2: Data Gloves. Given the substantial shortcomings of the Vitrius system,

effort quickly shifted to developing non-camera based methods for capturing hand posture. One

option was to adapt a data glove (Figure 9(a)) for use on a monkey hand. Developed for

applications such as virtual reality simulations, video gaming, and animation, data gloves are

outfitted with an array of sensors to capture hand motion and return real-time joint angle data.

Gloves normally include up to three resistive bend sensors per digit (spanning each joint),

abduction/adduction sensors between digits and, in some cases, palm bend sensors and wrist

angle sensors. Typically, these systems do not measure 3D position, requiring the addition of a

motion tracking system to the dorsal surface of the hand or forearm.

        The advantages of this approach were promising, yet significant challenges remained.

First, the lack of position and orientation sensing implied that motion tracking could not be

abandoned completely. Second, the cost of a typical data glove was prohibitive, especially

considering the potential wear and tear when used in non-human primate research. Furthermore, no

manufacturer would even consider customizing the glove for the monkey hand.

Finally, a glove covering the hand would interfere with the basic research goals of investigating

cutaneous feedback during reaching and grasping.

        The first solution was to customize an inexpensive data glove in-house, combined with

Vitrius motion capture for position and orientation sensing (Figure 9(b)). This approach

combined the benefits of a data glove while minimizing the use of the Vitrius system. Bend

sensors were removed from the original glove and reassembled into the new Monkey Glove,

where they were restrained within an inner pocket on the dorsal aspect of each digit. Electronics

(wires, circuit boards) were encased in epoxy for protection and sewn into a pocket on the hand

dorsum. An array of three Vitrius markers was also mounted on the hand dorsum to track the

orientation of the hand. Initially, the glove fingers were attached to the digits using only narrow

loops of fabric at the intermediate and distal phalangeal joints in order to expose the skin that

would come into contact with the grasp objects. Eventually, however, the entire finger was

removed from the glove and bend sensors were held loosely in place using thin plastic fasteners at

the aforementioned joints.

        Numerous refinements to this approach were devised, including the addition of a wireless

transmitter and a 2-axis accelerometer for measuring pitch and roll (Figure 9(c)). Despite these

improvements, numerous problems plagued this approach. The Vitrius system was still required

for position tracking and the estimation of finger posture from a single bend sensor was

inaccurate and unreliable. A strategy was developed to utilize larger (5 mm²) Vitrius markers

attached to the faces of finger-mounted cubes to improve camera visibility (Figure 9(d)). This

approach used fewer markers and was able to capture only crude measures of hand posture.

Figure 9. Approaches to Hand Posture Measurement. A. Commercially available data gloves

feature numerous integrated bend sensors to capture the posture of the digits and palm but were

prohibitively expensive and difficult to customize to the monkey hand. B. An early prototype of

the custom Monkey Glove. Bend sensors and electronics were removed from a gaming glove and

reconfigured to the monkey hand. C. A wireless version of the Monkey Glove with rechargeable

battery, accelerometer and transmitter encased in protective epoxy. D. An alternative strategy for

passive motion capture. Cube markers with finger attachment clips were developed to utilize

larger markers for improved camera visibility. A smaller set of these markers captured only crude

measures of hand posture.

        Approach #3: Active Marker Motion Capture. Ultimately, the solution to the

motion capture dilemma was to implement an active marker motion capture system (Impulse

System, Phasespace Inc., San Leandro, CA, USA). The Impulse system used active LED markers

and eight cameras equipped with linear sensors, each with a digitally-enhanced effective

resolution of 900 megapixels, to capture marker positions at frame rates up to 480 Hz. A robust

calibration routine used a linear wand outfitted with several active markers to define the capture

space by systematically sweeping the wand through the field of view of the cameras. A single

marker was glued directly to the nail of each digit and an array of three markers was placed on

the hand dorsum for tracking the position and orientation of the overall hand. Each finger marker

and the dorsal array were encased in epoxy for protection and a single wire was routed along the

arm to a nearby device that transmitted data wirelessly to a server computer outside the testing

room. Data were acquired in real-time through a network interface using custom software

developed in C++ using an SDK from the system manufacturer. Data were simultaneously saved

to a file for later analysis and routed to the virtual reality simulation, running on a stand-alone

computer, to animate the position and posture of a virtual hand model. The data required no

filtering, and no perceptible time lag due to network transmission delays was observed.
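
        The data flow of the acquisition loop can be sketched as follows. The real client was written in C++ against the PhaseSpace SDK; the address, message format and helper names below are assumptions intended only to show the read-log-forward pattern.

# Minimal sketch of the acquisition loop: read a marker frame from the capture
# server, log it for later analysis, and forward it to the simulation over UDP.
# Port, address and frame format are hypothetical.
import json, socket, time

SIM_ADDR = ("192.168.1.50", 5005)     # assumed address of the simulation computer
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def acquisition_loop(read_frame, logfile, stop):
    """read_frame() -> dict of marker id -> (x, y, z); stop() -> bool."""
    while not stop():
        frame = read_frame()
        line = json.dumps(frame)
        logfile.write(line + "\n")                 # save for offline analysis
        sock.sendto(line.encode(), SIM_ADDR)       # animate the virtual hand
        time.sleep(1.0 / 480)                      # pace at ~480 Hz frame rate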

Virtual Reality Simulation

        Simulation Hardware. The virtual reality simulation provided all visual cues to the

monkeys during the behavioral task. It was displayed on a flat screen 3D monitor (SeeReal

Technologies S.A., Luxembourg) mounted horizontally and directly above the seating area.

Subjects were not required to wear anaglyphic glasses. The monitor generated a 3D screen image

by vertically interlacing distinct left and right eye images, then projecting each through a beam

splitter to the appropriate eye. This system did require the subject to maintain position in a sweet

spot to produce the optimal 3D effect, which was easily accomplished since the subject’s head

was restrained throughout the course of an experiment. A mirror was located four inches in

front of the monkey at a 45° angle to reflect the screen image from the monitor. The mirror

extended down to approximately chin level, allowing subjects to use the arms freely in the

workspace while at the same time hiding the arm from view. The simulation was generated using a

dedicated computer to ensure that the computational load did not affect the operation of the

Master Control Program (MCP), which was implemented in LabVIEW® on a separate computer.

The MCP continuously read current motion capture data from the network, computed kinematic

parameters, then transmitted them to the simulation computer (via User Datagram Protocol, UDP)

which used the parameters to animate a virtual hand model.

        Virtual Modeling. The simulation software was built with a toolkit (Vizard,

WorldViz LLC, Santa Barbara, CA, USA) based on the Python programming language (Python

Software Foundation, DE, USA). The virtual hand model was a fully articulated

(all digital joints) human hand included with the software toolkit. Animated degrees of freedom

included 3D position, 3-axis rotation and grasp aperture (all digits). To animate the grasping

motion, the rotation angle of all digit joints was scaled according to the current aperture estimate

to produce a realistic representation of grasping movement.
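
        A minimal sketch of this aperture-driven animation is shown below; the open and closed joint angles and the number of joints are assumptions, not the values used in the Vizard model.

# Sketch of aperture-scaled grasp animation: each digit joint's flexion angle
# is interpolated between open and closed postures by the aperture estimate
# (1.0 = fully open, 0.0 = fully closed). Angles are illustrative.
OPEN_ANGLE_DEG = 5.0
CLOSED_ANGLE_DEG = 75.0

def joint_angles(aperture, n_joints=15):
    """Return one flexion angle per digit joint for the current aperture."""
    aperture = max(0.0, min(1.0, aperture))
    angle = CLOSED_ANGLE_DEG - aperture * (CLOSED_ANGLE_DEG - OPEN_ANGLE_DEG)
    return [angle] * n_joints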

        Virtual models of the grasp objects were generated from the same CAD models used to

design and fabricate the physical objects with stereolithography. CAD models were simply

converted to the Virtual Reality Modeling Language (VRML) format and imported into the

virtual environment, resulting in exact representations of the original objects. Virtual grasp

objects were located in the simulation environment in correspondence with physical objects

presented in the workspace. That is, the transformation from camera coordinates (millimeters,

origin at task start position) to simulation coordinates (non-dimensional) was tuned so that when

the subject made contact with a physical object in the workspace, the virtual hand intersected the

virtual object in the simulation.
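
        A simple per-axis scale-and-offset mapping of this kind is sketched below; the scale and offset values are placeholders standing in for the constants that were actually tuned.

# Sketch of the camera-to-simulation mapping: per-axis scale and offset tuned
# so that physical contact coincides with the virtual hand touching the
# virtual object. The numbers below are placeholders, not the tuned values.
SCALE = (0.004, 0.004, 0.004)     # mm -> non-dimensional simulation units
OFFSET = (0.0, 0.15, -0.2)        # aligns the task start position in the scene

def camera_to_sim(x_mm, y_mm, z_mm):
    return tuple(s * v + o for s, v, o in zip(SCALE, (x_mm, y_mm, z_mm), OFFSET))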

         Individual trials of the behavioral task began when a subject placed its right hand on a 4-

in square hold pad located at mid-abdominal height. The virtual model of the hold pad was a

simple flattened cube displayed in a position corresponding to the physical hold pad. Contact with

the hold pad was monitored by a single touch sensor, identical to those used to sense finger

placement on the physical objects.

         Virtual Task Training. Training in the virtual task required subjects to carry out the

physical task, learned previously in full sight and with the aid of the F/T sensor and surface-

mounted touch sensors, using cues only from the virtual environment. In actuality, this was a

combined physical/virtual task with two variants. In the physical variant, an object was presented

in the workspace, while in the virtual variant no object was presented. In both task variants, the

appearance of grasp objects was used as a training aid. At the start of a trial, only the virtual hand

and hold pad were displayed, the latter in red. Hand contact with the physical hold pad caused the

virtual hold pad to turn green. After the required hold period, the virtual hold pad was removed,

followed by presentation of a physical object in the workspace. The corresponding virtual object

was then initially displayed in white but turned green whenever the subject physically interacted

with the object in the workspace. Interaction was again determined by either the F/T sensor or

touch sensors. These visual cues facilitated the training process for the virtual task and remained

in place throughout the course of experimentation. In the virtual task variant, collision of the

virtual hand and object models (detected by the simulation software) triggered the change in

color.
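
        The cue logic for a single trial can be summarized with a short sketch; the function names and the representation of the two task variants are assumptions made for illustration.

# Sketch of the visual cue logic: the hold pad is red until touched, then
# green; the object appears white and turns green on interaction, judged by
# sensors in the physical variant and by collision in the virtual variant.
def holdpad_color(hand_on_pad):
    return "green" if hand_on_pad else "red"

def object_color(interacting):
    return "green" if interacting else "white"

def interaction_detected(variant, sensor_contact, virtual_collision):
    if variant == "physical":
        return sensor_contact          # F/T or touch-sensor contact
    return virtual_collision           # collision detected by the simulation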

Neural Recording System

         Neurophysiological experimentation was accomplished using a 16-channel acquisition

system for amplification, filtering and recording neural activity (MAP System, Plexon Inc.,

Dallas, TX, USA). The MAP box itself used digital signal processing for 40 kHz (25 µs)

analog-to-digital conversion on each channel with 12-bit resolution. Included control software

provided a suite of programs for real-time spike sorting, visualization and analysis of neural

activity, all of which were run on a dedicated computer independent of the MCP. Digital events

were generated by the MCP to mark the occurrence of significant events during an experiment.

Each event type was encoded as a unique 8-bit digital word using digital outputs from a 68-pin

terminal block (SCC-68, National Instruments Corporation) and input directly to the MAP box,

which saved spike times and digital event data (word value, time) to a single file.
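
        A sketch of the event-word scheme is given below. The specific event names and 8-bit codes are hypothetical; only the idea of one unique word per event type, driven onto eight digital lines and time-stamped by the MAP box, reflects the description above.

# Sketch of 8-bit event-word encoding; event names and codes are assumed.
EVENT_CODES = {
    "trial_start":    0x01,
    "hold_acquired":  0x02,
    "object_present": 0x03,
    "grasp_onset":    0x04,
    "reward":         0x05,
    "trial_abort":    0xFF,
}

def event_bits(name):
    """Return the eight digital-line states (MSB first) for an event word."""
    code = EVENT_CODES[name]
    return [(code >> bit) & 1 for bit in range(7, -1, -1)]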

System Integration

        System Architecture. The preceding sections of this chapter described the sub-systems

of the overall laboratory setup, including a 6-axis industrial robot, 6-DOF F/T sensor, tool

changer, 3D motion capture system, virtual reality simulation and a neural data acquisition

system. The LabVIEW® graphical programming environment was used to integrate these sub-

systems into a coordinated whole. Initially, all subsystems were to be controlled by software

running in a real-time operating environment (LabVIEW® Real-Time Module, National

Instruments Corporation) uploaded to a dedicated target processor (PXI-6259, National

Instruments Corporation) for deterministic performance. However, several subsystems required

the Windows® operating system (Microsoft Corporation, Redmond, WA, USA) for programmatic

control. This precluded a purely real-time application, which would have been based on the VxWorks

operating system (Wind River Corporation, Alameda, CA, USA). Instead, a hybrid system was

developed that coordinated the overall programmatic control of a single LabVIEW® application

between the real-time processor (the target) and a standard personal computer (the host). System

timing deferred to the target processor (1 µs resolution) and communication between the target

and host was mediated through the local network (network shared variables). Program

development took place on the host computer. At run time, the MCP code was uploaded to the

target computer where it was compiled and executed.

        System Operation. The MCP, which included compatible inputs (touch sensors,

incoming UDP messages) and outputs (digital events, outgoing UDP messages), was executed on

the real-time target. The robot control program ran on the host, awaiting movement commands

from the MCP according to the progression of behavioral task stages. A separate program for

monitoring F/T sensor output also ran on the host, providing continuous feedback related to

object contact and monitoring the robot arm for excessive loading conditions. The MCP

generated digital events in response to task events, which were routed directly to the MAP box of

the neural recording system. The MCP also featured a continuous loop that read data from the

motion capture server (TCP/IP protocol) and saved it to a local data file. Digital event markers

were also written to the camera data file so that kinematic data and neural data, which were saved

to different files, could later be temporally aligned by matching corresponding event markers.

Kinematic parameters derived from the camera data were sent (via UDP) to the virtual reality

simulation computer to animate the virtual hand model. UDP messages were also sent from the

simulation to the MCP to report virtual object collisions.
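
        The offline alignment step implied by this arrangement can be sketched as follows: the same event words appear in both the camera data file and the neural data file, so matching corresponding events yields the clock offset between the two recordings. The field layout and the assumption of one-to-one event lists are simplifications for illustration.

# Sketch of aligning kinematic and neural files by shared event markers.
# Assumes the two event lists contain the same events in the same order.
def clock_offset(neural_events, camera_events):
    """Each argument: ordered list of (word, timestamp) pairs from one file."""
    offsets = [t_cam - t_neur
               for (w_neur, t_neur), (w_cam, t_cam) in zip(neural_events, camera_events)
               if w_neur == w_cam]
    return sum(offsets) / len(offsets)      # mean offset, camera minus neural

def to_neural_time(t_camera, offset):
    """Convert a camera-file timestamp into the neural recording's time base."""
    return t_camera - offset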

Contenu connexe

En vedette (15)

Selected Work Portfolio
Selected Work PortfolioSelected Work Portfolio
Selected Work Portfolio
 
GHS PTSA After Prom 2013 Presentation
GHS PTSA After Prom 2013 PresentationGHS PTSA After Prom 2013 Presentation
GHS PTSA After Prom 2013 Presentation
 
1 introduccion
1   introduccion1   introduccion
1 introduccion
 
Lizelle anne
Lizelle anneLizelle anne
Lizelle anne
 
Best friends forever
Best friends foreverBest friends forever
Best friends forever
 
How the EU takes decisions
How the EU takes decisionsHow the EU takes decisions
How the EU takes decisions
 
Montenegro
MontenegroMontenegro
Montenegro
 
Mimi
MimiMimi
Mimi
 
Austria
AustriaAustria
Austria
 
Have you washed your hands lately
Have you washed your hands latelyHave you washed your hands lately
Have you washed your hands lately
 
Christmas in austria (ii)
Christmas in austria (ii)Christmas in austria (ii)
Christmas in austria (ii)
 
Finland
FinlandFinland
Finland
 
Ncfm security markets
Ncfm security marketsNcfm security markets
Ncfm security markets
 
Chipre
Chipre Chipre
Chipre
 
Christmas in Sweden
Christmas in SwedenChristmas in Sweden
Christmas in Sweden
 

Similaire à Building the SMoRG Lab

A Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiA Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiAngela Shin
 
Design and Analysis of Robotic Rover with Gripper Arm using Embedded C
Design and Analysis of Robotic Rover with Gripper Arm using Embedded CDesign and Analysis of Robotic Rover with Gripper Arm using Embedded C
Design and Analysis of Robotic Rover with Gripper Arm using Embedded CEditor IJCATR
 
Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...
Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...
Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...IOSR Journals
 
Developing a Humanoid Robot Platform
Developing a Humanoid Robot PlatformDeveloping a Humanoid Robot Platform
Developing a Humanoid Robot PlatformDr. Amarjeet Singh
 
Precision robotic assembly using attractive regions
Precision robotic assembly using attractive regionsPrecision robotic assembly using attractive regions
Precision robotic assembly using attractive regionsijmech
 
Design and development of touch screen controlled stairs climbing robot
Design and development of touch screen controlled stairs climbing robotDesign and development of touch screen controlled stairs climbing robot
Design and development of touch screen controlled stairs climbing roboteSAT Journals
 
Systems Description Document v4
Systems Description Document v4Systems Description Document v4
Systems Description Document v4Erin Nelson
 
MARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.pptMARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.ppttffttfyyf
 
MARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.pptMARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.pptAfstddrrdv
 
Redeeming of processor for cyber physical systems
Redeeming of processor for cyber physical systemsRedeeming of processor for cyber physical systems
Redeeming of processor for cyber physical systemseSAT Publishing House
 
Virtual environment for assistant mobile robot
Virtual environment for assistant mobile robotVirtual environment for assistant mobile robot
Virtual environment for assistant mobile robotIJECEIAES
 

Similaire à Building the SMoRG Lab (20)

A Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiA Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry Pi
 
Design and Analysis of Robotic Rover with Gripper Arm using Embedded C
Design and Analysis of Robotic Rover with Gripper Arm using Embedded CDesign and Analysis of Robotic Rover with Gripper Arm using Embedded C
Design and Analysis of Robotic Rover with Gripper Arm using Embedded C
 
Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...
Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...
Applications Of Mems In Robotics And Bio Mems Using Psoc With Metal Detector ...
 
Automatic P2R Published Paper P1277-1283
Automatic P2R Published Paper P1277-1283Automatic P2R Published Paper P1277-1283
Automatic P2R Published Paper P1277-1283
 
LIAN_D_NRC
LIAN_D_NRCLIAN_D_NRC
LIAN_D_NRC
 
MONITORING FIXTURES OF CNC MACHINE
MONITORING FIXTURES OF CNC MACHINEMONITORING FIXTURES OF CNC MACHINE
MONITORING FIXTURES OF CNC MACHINE
 
Developing a Humanoid Robot Platform
Developing a Humanoid Robot PlatformDeveloping a Humanoid Robot Platform
Developing a Humanoid Robot Platform
 
Report
ReportReport
Report
 
E04502025030
E04502025030E04502025030
E04502025030
 
SRD Presentation
SRD PresentationSRD Presentation
SRD Presentation
 
Precision robotic assembly using attractive regions
Precision robotic assembly using attractive regionsPrecision robotic assembly using attractive regions
Precision robotic assembly using attractive regions
 
Design and development of touch screen controlled stairs climbing robot
Design and development of touch screen controlled stairs climbing robotDesign and development of touch screen controlled stairs climbing robot
Design and development of touch screen controlled stairs climbing robot
 
Systems Description Document v4
Systems Description Document v4Systems Description Document v4
Systems Description Document v4
 
MARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.pptMARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.ppt
 
Robot arm ppt
Robot arm pptRobot arm ppt
Robot arm ppt
 
MARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.pptMARK ROBOTIC ARM.ppt
MARK ROBOTIC ARM.ppt
 
30120140506012 2
30120140506012 230120140506012 2
30120140506012 2
 
30120140506012 2
30120140506012 230120140506012 2
30120140506012 2
 
Redeeming of processor for cyber physical systems
Redeeming of processor for cyber physical systemsRedeeming of processor for cyber physical systems
Redeeming of processor for cyber physical systems
 
Virtual environment for assistant mobile robot
Virtual environment for assistant mobile robotVirtual environment for assistant mobile robot
Virtual environment for assistant mobile robot
 

Building the SMoRG Lab

  • 1. 2. THE SMORG NEUROPHYSIOLOGY LABORATORY Introduction The SensoriMotor Research Group (SMoRG) was founded at Arizona State University in 2006 to investigate sensorimotor learning and representations in the nervous system, as well as the neural mechanisms that enable fine motor skills. At its inception, total SMoRG assets included people, ideas and a profoundly empty laboratory space in which to combine them to produce meaningful science. This chapter will describe the process of developing the neural recording laboratory, where the experimental work of this manuscript was accomplished. A description of this work is fitting because it featured significant technical accomplishments, produced a novel experimental facility and required a sustained effort of more than two years to complete. The overall goal was clear, even if the path to achieve it was not; develop an experimental facility that included a robot arm, 3D motion capture, virtual reality simulation, a cortical neural recording system and custom software to integrate it all. Robot Arm Installation. A six-axis industrial robot (model VS-6556G, Denso Robotics, Long Beach, CA, USA) was acquired for object presentation during behavioral experimental tasks (Figure 8(a)). The very first task required fabrication of a platform on which to mount the robot in a secure yet mobile way. A space frame cube was assembled from extruded aluminum segments (1530 T-slotted series, 80/20® Inc., Columbia City, IN, USA) with bolted corner gussets for maximum structural integrity. The top and bottom faces of the cube were covered with a single piece of 0.25 in. thick plate steel to which the base of the robot was attached with stainless steel bolts. The entire robot platform was supported by swivel joint mounting feet at the corners and rested on a 0.5 in. thick rubber pad to dampen the vibration and inertial loads resulting from robot movement.
  • 2. 44 Figure 8. Robot and Associated Hardware. A. The 6-axis industrial robot was mounted on a sturdy platform and controlled using custom software. Dedicated signal and air channels routed through the robot enabled feedback from a 6-DOF F/T sensor, object touch sensors and control of a pneumatic tool changer. B. The robot end effector. The F/T sensor (b2) was mounted directly to the robot end effector (b1). The master plate of the tool changer (b3) was mounted to the F/T sensor using a custom interface plate. Air lines originating from ports on the robot controlled the locking mechanism of the master plate. C. Grasp object assembly. The object was mounted to a six-inch standoff post that mounted to a tool plate. Touch sensors were mounted flush with the object surface and wires were routed to the object interior for protection. Power and signal lines were routed through a pass-through connector (not visible), through the robot interior to an external connector on the robot base. Small felt discs on each sensor were used for grasp training. The large flange extending from the bottom of the object was a temporary training aide to guide the subject’s hand to the correct location.
  • 3. 45 A b3 B b2 b1 C
  • 4. 46 Programming. The robot included a tethered teach pendant interface device through which basic simple operation of the robot could be accomplished either through direct control of a specific axis, or by executing a script written in the PAL programming language. The behavioral task planned for our experiments required real-time programmatic control of robot actions, requiring the development of custom software routines using a software development kit (SDK) provided by the manufacturer (ORiN-II, Denso Robotics). The routines implemented basic movement commands to pre-defined poses in the working space. Pose coordinates (position, rotation, pose type) were determined by manually driving the robot to a desired pose using the teach pendant, then using motor encoder data to read back the actual coordinates. Programmatic control included the ability to select the desired pose, speed, acceleration and other secondary movement parameters. For selected operations involving a stereotyped sequence of basic movements (retrieving or replacing grasp objects) individual commands were grouped into compound movements to simplify user programming and operation. The custom software routines were developed in the C++ programming language and compiled into a library of functions accessed by code interface modules in the LabVIEW® graphical programming environment (National Instruments Corporation, Austin, TX, USA). An intuitive graphical user interface (GUI) was developed in LabVIEW® allowing the user to easily operate the robot from a computer connected to the robot controller through the local network. Safety Measures. Errors in robot operation were capable of causing considerable damage to the experimental setup, including the robot itself. To mitigate this possibility, robot control programs developed in LabVIEW® actively monitored force and torque data acquired from a 6- axis force/torque (F/T) sensor (Mini85, ATI Industrial Automation, Inc., Apex, NC, USA) mounted on the robot end effector (Figure 8(b)). Maximum force and torque limits for each movement were specified, tailored to purely inertial loads during movement or to direct loading
  • 5. 47 during object retrieval, replacement and behavioral manipulation. The robot was immediately halted if these limits were exceeded. The addition of the F/T sensor also added the capability of monitoring kinetics of object manipulation for scientific analysis. Tool Changer. A pneumatic tool changer (QC-11, ATI Industrial Automation Inc.) was the final element of the overall robot system (Figure 8(b)). This enabled the robot to retrieve presentation objects from the tool holder mounted to the side of the robot platform. A master plate was mounted directly to the force/torque sensor via a custom interface plate and connected to compressed air lines, which operated the locking mechanism. The air lines connected directly to dedicated air channels routed through the interior of the robot, which was supplied by a gas cylinder mounted nearby. Internal solenoid valves in the robot were controlled programmatically (via LabVIEW®) to operate the tool changer during object retrieval and replacement. A tool plate was attached to each grasp object to interface with the master plate. Each tool plate was fitted with four mounting posts that aligned the tool in the object holder for reliable and repeatable object retrieval. Grasp Objects Object Design. Grasp objects were designed to elicit in the experimental subject a variety of hand postures in order to investigate the sensory feedback resulting from each. Initially, up to seven different objects were envisioned including simple polygonal shapes (cylinder, rectangular polygon, convex polygon and concave polygon) as well as objects requiring specific manipulation (e.g., squeezing, pulling, etc.) for successful task completion. The behavioral task used for the research described in this manuscript required just two objects; small and large versions of a modified cylinder design. An early version of the small object used for training is shown in Figure 8(c).
  • 6. 48 Initially, grasp objects were machined out of solid polymer materials such as polytetrafluoroethylene (Teflon®, DuPont) or polyacetal (Delrin®, DuPont). However, fabrication quickly shifted to stereolithography (rapid prototyping) techniques to speed production and reduce cost during numerous design iterations. The resulting prototype objects proved to be sufficiently robust to withstand the rigors of repeated use. The modified cylinder design was developed primarily in response to the need to register precise finger placement during grasping using surface mounted resistive touch sensors (TouchMini v1.2, Infusion Systems, Montreal, Quebec, Canada). Simple cylindrical designs could not balance the size of the object (cylinder diameter, which drove hand aperture) with the need for a relatively planar surface on which to attach the touch sensors. Mounting the flexible sensors on a curved surface would have introduced an undesired bias into the output, which was modulated by deformation or bending of the sensor. The solution was to essentially unfold the surface of a cylinder into an extended surface whose center portion was curved to accept the palm of the hand, while the peripheral portions merged into a relatively flat surface. These complex shapes were perfectly suited to the stereolithography process, and had the added benefit of opening up additional space in the interior of the object that was used to route and protect delicate electrical connections from wear and tear. Touch Sensors. Thin (0.04 in, 1 mm), circular (∅0.75 in, 19 mm) resistive touch sensors were glued directly to the outer surface of the object in shallow indentations that perfectly matched the thickness and diameter of the sensor. This prevented the monkeys from picking at the edges since the surface appeared to be uniform except for a slight change in texture. At the center of each indentation was a deeper well that permitted further indentation of the flexible sensor, which increased the magnitude and reliability of the output in comparison to mounting on a flat surface. Wires were immediately routed inside of the object for protection. Sensors were placed at locations where the distal phalange of the thumb, index and middle fingers contacted the
  • 7. 49 object surface when a prototype version was pressed into the hand of the first monkey to be trained in the behavioral task (monkey F). Electrical connections were routed to a 10-pin pass- through connector on the tool plate that made electrical connections to a corresponding connector when an object was retrieved by the tool changer. Signals were routed through dedicated lines inside the robot and emerged at a master connector on the robot base. From here, the signals were routed to the behavioral control software (LabVIEW®) and actively monitored to indicate successful object grasping. Grasp Training. The F/T sensor and touch sensors were excellent tools for training monkeys to grasp the objects in a specific and repeatable way. The desired interaction was a precision grip in which the distal phalange of the thumb, index and middle digits contacted the object at the location of the sensors and maintained simultaneous supra-threshold contact for at least 250 ms. Basic Interaction. The first training stage was to establish the connection between the object and reward. Touch sensor feedback was not used during this stage. Instead, feedback from the F/T sensor was used to register physical interaction with an object presented directly ahead of the monkey. Any contact with the object was immediately rewarded with several drops of juice. Initially, these interactions often involved slapping or scratching the object. This behavior was steadily eliminated by withholding reward (and an audible cue) when such actions resulted in excessive force or torque levels. The basic interaction training stage was complete when the monkey had learned to consistently place its hand on the object without exceeding force and torque thresholds. Fine Tuning. This stage involved training the monkey to place the thumb, index and middle digits directly on the touch sensors. F/T feedback was not used to register successful interaction, rather, only to detect excessive force applied to the object. In this case, the audible
  • 8. 50 cue was played, the object was withdrawn from the workspace and no reward was given. Small felt discs approximately 2 mm in height were attached to each touch sensor to attract the monkey’s attention during haptic exploration of the object. Initially, brief (10 ms) contact with any of the three sensors was sufficient to earn a juice reward. Next, brief simultaneous contact with any two sensors was sufficient then, finally, contact with all three sensors was required to earn the juice reward. The final step was to steadily increase the required grasp duration to 250 ms. Motion Capture Our experiments required that the 3D position and orientation of the subject’s hand were captured at all times. This information served two primary functions. First, it was used to animate the motion of hand and object models in a virtual reality simulation in which subjects would eventually be trained to carry out the behavioral task. Second, the data were used to reconstruct the kinematics of hand movement during the task, which could be correlated with simultaneously recorded neural activity. Kinematic analysis of hand movement is a technically challenging undertaking, especially for the hand and even more so for the child-sized hand of the juvenile macaques used in this research. Detailed reconstructions require tracking the orientation of individual digit segments (implying two markers per segment) with millimeter precision. Markers attached to the segments are often occluded by the movement of adjacent digits or by intervening experimental apparatus. Active markers require power and signal connections, which quickly becomes a logistical challenge of routing wires and connections while minimizing the impact to the underlying behavioral task. Approach #1: Passive Marker Motion Capture. The first approach was to implement a camera-based motion capture system using passive detection markers (Vitrius, Tenetec
  • 9. 51 Innovations AG, Zürich, Switzerland). In theory, this approach offered several advantages for mitigating the challenges of motion capture described above. First, passive markers required no power or signal lines, thus eliminating a significant degree of logistical complexity and increasing reliability. Second, the Vitrius system was predicated on a unique approach that promised to dramatically reduce the number of cameras and markers required for high-precision motion capture; 3D position determination with just one camera and one marker. All other known camera-based motion capture systems ultimately derive 3D position from an estimation of the parallax between two distributed observations of a point in space. By contrast, the Vitrius system calculated position by estimating the linear distance between the camera focal plane and a flat, square marker of known size. The relationship was simple; the smaller the marker’s focal plane representation (pixel area), the further its distance along a ray extending from the center of the detected area. The trajectory of the ray itself was determined by the optical properties of the lens and the orientation of the camera. A unique pixelated pattern on each maker was used for identification. An example of a Vitrius marker is shown in Figure 9(d), where several individual markers have been attached to the faces of a cube-shaped base. Numerous shortcomings of the Vitrius system quickly became apparent. At best, position accuracy was 10-20 mm, an order of magnitude greater than the required value. Camera resolution was insufficient to adequately capture the small (3 mm2) markers required for the monkey hand. When markers were viewed by the cameras at any angle other than perpendicular to the focal plane, the apparent decrease in detected marker area resulted in an accompanying over-estimation of marker distance. The system had no integrated calibration procedure, requiring the user to manually measure the position and orientation of each camera. Finally, the Vitrius software was poorly designed and implemented, resulting in frequent system crashes and loss of data.
  • 10. 52 Approach #2: Data Gloves. Given the substantial shortcomings of the Vitrius system, effort quickly shifted to developing non-camera based methods for capturing hand posture. One option was to adapt a data glove (Figure 9(a)) for use on a monkey hand. Developed for applications such as virtual reality simulations, video gaming, and animation, data gloves are outfitted with an array of sensors to capture hand motion and return real-time joint angle data. Gloves normally include up to three resistive bend sensors per digit (spanning each joint), abduction/adduction sensors between digits and, in some cases, palm bend sensors and wrist angle sensors. Typically, these systems do not measure 3D position, requiring the addition of a motion tracking system to the dorsal surface of the hand or forearm. The advantages of this approach were promising, yet significant challenges remained. First, the lack of position and orientation sensing implied that motion tracking could not be abandoned completely. Second, the cost of a typical data glove was prohibitive, especially considering the potential wear and tear when used in non-human primate research. Neither would any manufacturer even consider the possibility of customizing the glove for the monkey hand. Finally, a glove covering the hand would interfere with the basic research goals of investigating cutaneous feedback during reaching and grasping. The first solution was to personally customize an inexpensive data glove, combined with Vitrius motion capture for position and orientation sensing (Figure 9(b)). This approach combined the benefits of a data glove while minimizing the use of the Vitrius system. Bend sensors were removed from the original glove and reassembled into the new Monkey Glove, where they were restrained within an inner pocket on the dorsal aspect of each digit. Electronics (wires, circuit boards) were encased in epoxy for protection and sewn into a pocket on the hand dorsum. An array of three Vitrius markers was also mounted on the hand dorsum to track the orientation of the hand. Initially, the glove fingers were attached to the digits using only narrow
  • 11. 53 loops of fabric at the intermediate and distal phalangeal joints in order to expose the skin that would come into contact with the grasp objects. Eventually, however, the entire finger was removed from the glove and bend sensors were held loosely in place using thin plastic fasteners at the aforementioned joints. Numerous refinements to this approach were devised, including the addition of a wireless transmitter, and a 2-axis accelerometer for measuring pitch and roll (Figure 9(c)). Despite these improvements, numerous problems plagued this approach. The Vitrius system was still required for position tracking and the estimation of finger posture from a single bend sensor was inaccurate and unreliable. A strategy was developed to utilize larger (5 mm2) Vitrius markers attached to the faces of finger-mounted cubes to improve camera visibility (Figure 9(d)). This approach used fewer markers and was able to capture only crude measures of hand posture.
  • 12. 54 Figure 9. Approaches to Hand Posture Measurement. A. Commercially available data gloves feature numerous integrated bend sensors to capture the posture of the digits and palm but were prohibitively expensive and difficult to customize to the monkey hand. B. An early prototype of the custom Monkey Glove. Bend sensors and electronics were removed from a gaming glove and reconfigured to the monkey hand. C. A wireless version of the Monkey Glove with rechargeable battery, accelerometer and transmitter encased in protective epoxy. D. An alternative strategy for passive motion capture. Cube markers with finger attachment clips were developed to utilize larger markers for improved camera visibility. A smaller set of these markers captured only crude measures of hand posture.
  • 13. 55 A B C D
  • 14. 56 Approach #3: Active Marker Motion Capture. Ultimately, the solution to the motion capture dilemma was to implement an active marker motion capture system (Impulse System, Phasespace Inc., San Leandro, CA, USA). The Impulse system used active LED markers and eight cameras equipped with linear sensors, each with a digitally-enhanced effective resolution of 900 megapixels, to capture marker positions at frame rates up to 480 Hz. A robust calibration routine used a linear wand outfitted with several active markers to define the capture space by systematically sweeping the wand through the field of view of the cameras. A single marker was glued directly to the nail of each digit and an array of three markers was placed on the hand dorsum for tracking the position and orientation of the overall hand. Each finger marker and the dorsal array were encased in epoxy for protection and a single wire was routed along the arm to a nearby device that transmitted data wirelessly to a server computer outside the testing room. Data were acquired in real-time through a network interface using custom software developed in C++ using an SDK from the system manufacturer. Data were simultaneously saved to a file for later analysis and routed to the virtual reality simulation, running on a stand-alone computer, to animate the position and posture of a virtual hand model. The data required no filtering and no perceptible time lag was observed due to network transmission delays. Virtual Reality Simulation Simulation Hardware. The virtual reality simulation provided all visual cues to the monkeys during the behavioral task. It was displayed on a flat screen 3D monitor (SeeReal Technologies S.A., Luxembourg) mounted horizontally and directly above the seating area. Subjects were not required to wear anaglyphic glasses. The monitor generated a 3D screen image by vertically interlacing distinct left and right eye images, then projecting each through a beam splitter to the appropriate eye. This system did require the subject to maintain position in a sweet spot to produce the optimal 3D effect, which was easily accomplished since the subject’s head
Virtual Reality Simulation

        Simulation Hardware. The virtual reality simulation provided all visual cues to the monkeys during the behavioral task. It was displayed on a flat-screen 3D monitor (SeeReal Technologies S.A., Luxembourg) mounted horizontally, directly above the seating area. Subjects were not required to wear anaglyphic glasses: the monitor generated a 3D image by vertically interlacing distinct left- and right-eye images and projecting each through a beam splitter to the appropriate eye. This system did require the subject to maintain position within a viewing sweet spot to produce the optimal 3D effect, which was easily accomplished since the subject's head was restrained throughout the course of an experiment. A mirror was located four inches in front of the monkey at a 45° angle to reflect the screen image from the monitor. The mirror extended down to approximately chin level, allowing subjects to use their arms freely in the workspace while hiding them from view. The simulation was generated on a dedicated computer to ensure that its computational load did not affect the operation of the Master Control Program (MCP), which was implemented in LabVIEW® on a separate computer. The MCP continuously read current motion capture data from the network, computed kinematic parameters, then transmitted them (via the User Datagram Protocol, UDP) to the simulation computer, which used the parameters to animate a virtual hand model.

        Virtual Modeling. The simulation software was built with a software toolkit (Vizard, WorldViz LLC, Santa Barbara, CA, USA) based on the Python programming language (Python Software Foundation, DE, USA). The virtual hand model was a fully articulated (all digit joints) human hand included with the toolkit. Animated degrees of freedom included 3D position, 3-axis rotation and grasp aperture (all digits). To animate the grasping motion, the rotation angle of every digit joint was scaled according to the current aperture estimate to produce a realistic representation of grasping movement. Virtual models of the grasp objects were generated from the same CAD models used to design and fabricate the physical objects with stereolithography. The CAD models were simply converted to the Virtual Reality Modeling Language (VRML) format and imported into the virtual environment, resulting in exact representations of the original objects. Virtual grasp objects were located in the simulation environment in correspondence with the physical objects presented in the workspace. That is, the transformation from camera coordinates (millimeters, origin at the task start position) to simulation coordinates (non-dimensional) was tuned so that when the subject made contact with a physical object in the workspace, the virtual hand intersected the virtual object in the simulation.
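Both the camera-to-simulation mapping and the aperture-driven grasp animation amount to simple scalings. The Python sketch below illustrates the idea with hypothetical calibration constants and joint limits; it does not reproduce the Vizard hand-model API, and the function names are illustrative only.

    import numpy as np

    # Hypothetical calibration values; the actual scale and offset were tuned empirically
    # so that contact with a physical object coincided with a virtual hand/object collision.
    CAM_TO_SIM_SCALE = 0.001                        # millimeters -> simulation units
    CAM_TO_SIM_OFFSET = np.array([0.0, 0.85, -0.30])

    def camera_to_sim(point_mm):
        """Map a 3D point from camera coordinates (mm, origin at the task start
        position) into the non-dimensional simulation coordinate frame."""
        return CAM_TO_SIM_SCALE * np.asarray(point_mm, dtype=float) + CAM_TO_SIM_OFFSET

    def grasp_joint_angles(aperture_mm, max_aperture_mm=120.0,
                           full_flexion_deg=(40.0, 70.0, 50.0)):
        """Scale the flexion of the three joints of each digit from the current
        aperture estimate: maximum aperture -> extended, zero aperture -> fully flexed."""
        closure = 1.0 - min(max(aperture_mm / max_aperture_mm, 0.0), 1.0)
        return tuple(closure * angle for angle in full_flexion_deg)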
  • 16. 58 virtual object in the simulation. Individual trials of the behavioral task began when a subject placed its right hand on a 4- in square hold pad located at mid-abdominal height. The virtual model of the hold pad was a simple flattened cube displayed in a position corresponding to the physical hold pad. Contact with the hold pad was monitored by a single touch sensor, identical to those used to sense finger placement on the physical objects. Virtual Task Training. Training in the virtual task required subjects to carry out the physical task, learned previously in full sight and with the aid of the F/T sensor and surface- mounted touch sensors, using cues only from the virtual environment. In actuality, this was a combined physical/virtual task with two variants. In the physical variant, an object was presented in the workspace, while in the virtual variant no object was presented. In both task variants, the appearance of grasp objects was used as a training aid. At the start of a trial, only the virtual hand and hold pad were displayed, the latter in red. Hand contact with the physical hold pad caused the virtual hold pad to turn green. After the required hold period, the virtual hold pad was removed, followed by presentation of a physical object in the workspace. The corresponding virtual object was then initially displayed in white but turned green whenever the subject physically interacted with the object in the workspace. Interaction was again determined by either the F/T sensor or touch sensors. These visual cues facilitated the training process for the virtual task and remained in place throughout the course of experimentation. In the virtual task variant, collision of the virtual hand and object models (detected by the simulation software) triggered the change in color. Neural Recording System Neurophysiological experimentation was accomplished using a 16-channel acquisition system for amplification, filtering and recording neural activity (MAP System, Plexon Inc.,
Neural Recording System

        Neurophysiological experimentation was accomplished using a 16-channel acquisition system for amplification, filtering and recording of neural activity (MAP System, Plexon Inc., Dallas, TX, USA). The MAP box itself used digital signal processing to perform analog-to-digital conversion on each channel at 40 kHz (25 µs per sample) with 12-bit resolution. The included control software provided a suite of programs for real-time spike sorting, visualization and analysis of neural activity, all of which ran on a dedicated computer independent of the MCP. Digital events were generated by the MCP to mark the occurrence of significant events during an experiment. Each event type was encoded as a unique 8-bit digital word using digital outputs from a 68-pin terminal block (SCC-68, National Instruments Corporation) and input directly to the MAP box, which saved spike times and digital event data (word value, time) to a single file.
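As an illustration of the event encoding, the short Python sketch below maps hypothetical event names to 8-bit words and expands a word into the eight digital line states; the specific codes shown are assumed, not the values used in the experiments.

    # Illustrative event codes only; the actual assignments are not given in the text.
    EVENT_CODES = {
        "trial_start":    0x01,
        "hold_acquired":  0x02,
        "object_present": 0x04,
        "object_contact": 0x08,
        "reward":         0x10,
    }

    def event_word_lines(name):
        """Return the unique 8-bit word for an event as eight digital line states
        (least-significant bit first), as would be written to the digital outputs."""
        word = EVENT_CODES[name]
        return [(word >> bit) & 1 for bit in range(8)]

    # Example: event_word_lines("object_present") -> [0, 0, 1, 0, 0, 0, 0, 0]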
System Integration

        System Architecture. The preceding sections of this chapter described the sub-systems of the overall laboratory setup, including a 6-axis industrial robot, a 6-DOF F/T sensor, a tool changer, a 3D motion capture system, a virtual reality simulation and a neural data acquisition system. The LabVIEW® graphical programming environment was used to integrate these sub-systems into a coordinated whole. Initially, all sub-systems were to be controlled by software running in a real-time operating environment (LabVIEW® Real-Time Module, National Instruments Corporation) uploaded to a dedicated target processor (PXI-6259, National Instruments Corporation) for deterministic performance. However, several sub-systems required the Windows® operating system (Microsoft Corporation, Redmond, WA, USA) for programmatic control, which precluded a purely real-time application based on the VxWorks operating system (Wind River Corporation, Alameda, CA, USA). Instead, a hybrid system was developed that divided the overall programmatic control of a single LabVIEW® application between the real-time processor (the target) and a standard personal computer (the host). System timing deferred to the target processor (1 µs resolution), and communication between the target and host was mediated through the local network (network shared variables). Program development took place on the host computer; at run time, the MCP code was uploaded to the target computer, where it was compiled and executed.

        System Operation. The MCP, which included compatible inputs (touch sensors, incoming UDP messages) and outputs (digital events, outgoing UDP messages), was executed on the real-time target. The robot control program ran on the host, awaiting movement commands from the MCP according to the progression of behavioral task stages. A separate program for monitoring F/T sensor output also ran on the host, providing continuous feedback related to object contact and monitoring the robot arm for excessive loading conditions. The MCP generated digital events in response to task events, which were routed directly to the MAP box of the neural recording system. The MCP also featured a continuous loop that read data from the motion capture server (TCP/IP protocol) and saved it to a local data file. Digital event markers were also written to the camera data file so that kinematic data and neural data, which were saved to different files, could later be temporally aligned by matching corresponding event markers. Kinematic parameters derived from the camera data were sent (via UDP) to the virtual reality simulation computer to animate the virtual hand model, and UDP messages were sent from the simulation back to the MCP to report virtual object collisions.
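The offline alignment step can be illustrated with a short sketch. The Python fragment below estimates the offset between the camera-file and neural-file clocks from matching event markers; the function name and the (code, timestamp) input format are assumptions made for illustration, not the analysis code actually used.

    import numpy as np

    def clock_offset(camera_events, neural_events):
        """Estimate the offset between the camera-file clock and the neural-file clock
        from matching event markers. Each argument is a list of (code, timestamp)
        pairs; the return value is the mean of (neural time - camera time) over
        events matched in order of occurrence."""
        pending = {}
        for code, t in neural_events:
            pending.setdefault(code, []).append(t)
        offsets = []
        for code, t_cam in camera_events:
            if pending.get(code):
                offsets.append(pending[code].pop(0) - t_cam)
        return float(np.mean(offsets))

    # Kinematic samples can then be shifted into neural time:
    #   t_neural = t_camera + clock_offset(camera_events, neural_events)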