Classification of brain fMRI data and graphical representation of visual objects
Abstract
Analyzing neuroimaging data has become an active area of research because of its many applications, ranging from the analysis of brain-region connectivity to the study of the ventral stream and visual stimuli. In this paper, we propose a model that predicts which image a human subject is visually perceiving from the neuroimaging signal recorded in the ventral temporal (VT) cortex. The model is built with the nilearn Python library and the Haxby dataset, which contains functional MRI recordings from six subjects viewing black-and-white pictures of objects from a fixed set of categories. First, the Haxby data were collected and pre-processing steps such as masking, scaling and smoothing were applied to reduce complexity and noise and to standardize the data. The data were then split into 80 percent training examples and 20 percent test examples.
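A minimal sketch of this loading, masking and splitting step is shown below; it assumes a recent nilearn and scikit-learn, and the smoothing kernel and random seed are illustrative choices rather than the values used in the study.

```python
# Sketch of the preprocessing pipeline: fetch the Haxby data, restrict the
# fMRI signal to the ventral temporal (VT) mask, standardize/smooth it, and
# make an 80/20 train-test split. Parameters here are assumptions.
import pandas as pd
from nilearn import datasets
from nilearn.maskers import NiftiMasker
from sklearn.model_selection import train_test_split

# Download one subject of the Haxby dataset (fMRI run, VT mask, labels).
haxby = datasets.fetch_haxby(subjects=[1])
labels = pd.read_csv(haxby.session_target[0], sep=" ")

# Mask to VT cortex, standardize the voxel time series and smooth the images.
masker = NiftiMasker(mask_img=haxby.mask_vt[0],
                     standardize=True,
                     smoothing_fwhm=4)  # assumed FWHM, not the paper's value
X = masker.fit_transform(haxby.func[0])

# Keep only stimulus volumes (drop the 'rest' condition).
stim = labels["labels"] != "rest"
X, y = X[stim.values], labels["labels"][stim].values

# 80 percent training / 20 percent test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
```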
After splitting, the training examples were used to fit a set of classifiers: 'Nearest Neighbors', 'Linear SVM', 'RBF SVM', 'Gaussian Process', 'Decision Tree', 'Random Forest', 'Neural Net', 'AdaBoost', 'Naive Bayes' and 'QDA'. After training, each classifier's accuracy was measured on the test examples; averaged across all subjects, the highest accuracy, about 95 percent, was obtained with the Neural Network and the Support Vector Machine (SVM).
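The classifier comparison can be sketched as follows, continuing from the snippet above. Default scikit-learn hyper-parameters are used as stand-ins for whatever settings were actually tuned in the study.

```python
# Train the ten classifiers named in the abstract and report held-out accuracy.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

classifiers = {
    "Nearest Neighbors": KNeighborsClassifier(),
    "Linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "Gaussian Process": GaussianProcessClassifier(),  # may be slow on many voxels
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Neural Net": MLPClassifier(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "Naive Bayes": GaussianNB(),
    "QDA": QuadraticDiscriminantAnalysis(),
}

# Fit each model on the 80 percent training split and score it on the
# 20 percent test split.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: {clf.score(X_test, y_test):.3f}")
```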