Package contents (file and directory structure)
- squirrel.json
- pipeline.json
The squirrel data format allows sharing of all information necessary to recreate an experiment and its results, from raw to analyzed data, and experiment parameters to analysis pipelines.
The squirrel format specification is implemented in NiDB. A DICOM-to-squirrel converter and a squirrel validator are also available.
JSON array
An array of imaging studies, with information about each study. An imaging study (or imaging session) is defined as a set of related series collected on a piece of equipment during a time period. An example is a research participant receiving an MRI exam. The participant goes into the scanner, has several MR images collected, and comes out. The time spent in the scanner and all of the data collected from it is considered to be a study.
Valid squirrel modalities are derived from the DICOM standard and from NiDB modalities. Modality can be any string, but some squirrel readers may not correctly interpret the modality or may convert it to “other” or “unknown”. See full list of modalities.
*required
Files associated with this section are stored in the following directory, where SubjectID and StudyNum are the actual subject ID and study number, for example /data/S1234ABC/1:

/data/<SubjectID>/<StudyNum>
Variable | Type | Description
---|---|---
StudyNumber | number | Study number. May be sequential or correspond to the NiDB-assigned study number. REQUIRED
Datetime | datetime | Date of the study. REQUIRED
AgeAtStudy | number | Subject's age in years at the time of the study. REQUIRED
Modality | string | Defines the type of data. See table of supported modalities. REQUIRED
Height | number | Height in m of the subject at the time of the study.
Weight | number | Weight in kg of the subject at the time of the study.
Description | string | Study description.
StudyUID | string | DICOM field StudyUID.
VisitType | string | Type of visit. ex: Pre, Post.
DayNumber | number | For repeated studies and clinical trials, this indicates the day number of this study in relation to time 0.
TimePoint | number | Similar to day number, but this should be an ordinal number.
Equipment | string | Name of the equipment on which the imaging session was collected.
VirtualPath | string | Relative path to the data within the package.
SeriesCount | number | Number of series for this study.
AnalysisCount | number | Number of analyses for this study.
| JSON array | Array of series.
| JSON array | Array of analyses.
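For illustration, a single study entry might look like the following sketch. All values here are hypothetical, and only a subset of the variables above is shown:

```json
{
  "StudyNumber": 1,
  "Datetime": "2022-12-03 15:34:56",
  "AgeAtStudy": 34.5,
  "Modality": "MR",
  "Height": 1.75,
  "Weight": 72.5,
  "Description": "Baseline MRI session",
  "VisitType": "Pre",
  "DayNumber": 0,
  "TimePoint": 1,
  "Equipment": "Siemens Prisma 3T",
  "SeriesCount": 2
}
```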
Variable | Type | Description (acceptable values)
---|---|---
| string | Unique ID of this subject. Each subject ID must be unique within the package. REQUIRED
| JSON array | List of alternate IDs. Comma separated.
| string | Globally unique identifier, from NDA.
| date | Subject's date of birth. REQUIRED
| char | Sex at birth (F, M, O, U).
| char | Self-identified gender.
| string | NIH defined ethnicity: Usually
| string | NIH defined race:
| string | Relative path to the data within the package.
| number | Number of studies.
| number | Number of measures.
| number | Number of drugs.
| JSON array | Array of imaging studies/sessions.
| JSON array | Array of measures.
| JSON array | Array of drugs.
JSON array
‘Drugs’ represents any substances administered to a participant, whether through a clinical trial or through the participant’s use of prescription or recreational drugs. Detailed variables are available to record exactly how much and when a drug is administered. This allows searching by dose amount or other variables.
*required
The following examples convert between common language and the squirrel storage format:

- esomeprazole 20mg capsule by mouth daily
- 2 puffs atrovent inhaler every 6 hours
Variable | Type | Description
---|---|---
| string | Name of the drug. REQUIRED
| datetime | Datetime the drug was started. REQUIRED
| datetime | Datetime the drug was stopped.
| string | Full dosing string. Examples
| number | In combination with other dose variables, the quantity of the drug. REQUIRED
| string | Description of the frequency of administration. REQUIRED
| string | Drug entry route (oral, IV, unknown, etc).
| string | Drug class.
| string | For clinical trials, the dose key.
| string | mg, g, ml, tablets, capsules, etc.
| string | Longer description.
| string | Rater/experimenter name.
| string | Notes about the drug.
| string | Date the record was first entered into a database.
| string | Date the record was created in the current database. The original record may have been imported from another database.
| string | Date the record was modified in the current database.
esomeprazole 20mg capsule by mouth daily:

Variable | Value
---|---
DrugClass | PPI
DrugName | esomeprazole
DoseAmount | 20
DoseFrequency | daily
AdministrationRoute | oral
DoseUnit | mg

2 puffs atrovent inhaler every 6 hours:

Variable | Value
---|---
DrugName | ipratropium
DrugClass | bronchodilator
DoseAmount | 2
DoseFrequency | every 6 hours
AdministrationRoute | inhaled
DoseUnit | puffs
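Rendered as JSON inside the drugs array, the first prescription might look like the following sketch. The field names come from the examples above; the exact set of fields used is up to the package creator:

```json
{
  "DrugName": "esomeprazole",
  "DrugClass": "PPI",
  "DoseAmount": 20,
  "DoseUnit": "mg",
  "DoseFrequency": "daily",
  "AdministrationRoute": "oral"
}
```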
JSON array
This object is an array of group analyses. A group analysis is considered an analysis involving more than one subject.
*required
Files associated with this section are stored in the following directory, where <GroupAnalysisName> is the name of the analysis.
/group-analysis/<GroupAnalysisName>/
Variable | Type | Description
---|---|---
| string | Name of this group analysis.
| string | Description.
| string | Notes about the group analysis.
| datetime | Datetime of the group analysis.
| number | Number of files in the group analysis.
| number | Size in bytes of the analysis.
| string | Path to the group analysis data within the squirrel package.
JSON array
Pipelines are the methods used to analyze data after it has been collected. In other words, the experiment provides the methods to collect the data and the pipelines provide the methods to analyze the data once it has been collected.
Basic pipeline information is stored in the main squirrel.json file, and complete pipeline information is stored in the pipeline.json file in the pipeline subdirectory.
*required
Files associated with this section are stored in the following directory, where PipelineName is the unique name of the pipeline:

/pipelines/<PipelineName>
Variable | Type | Description
---|---|---
ClusterType | string | Compute cluster engine (sge or slurm).
ClusterUser | string | Submit username.
ClusterQueue | string | Queue to submit jobs to.
ClusterSubmitHost | string | Hostname on which to submit jobs.
CompleteFiles | JSON array | JSON array of complete files, with relative paths to analysisroot.
CreateDate | datetime | Date the pipeline was created.
DataCopyMethod | string | How the data is copied to the analysis directory: cp, softlink, hardlink.
DependencyDirectory | string |
DependencyLevel | string |
DependencyLinkType | string |
Description | string | Longer pipeline description.
DirectoryStructure | string |
Directory | string | Directory where the analyses for this pipeline will be stored. Leave blank to use the default location.
Group | string | ID or name of a group on which this pipeline will run.
GroupType | string | Either subject or study.
Level | number | Subject-level analysis (1) or group-level analysis (2). REQUIRED
MaxWallTime | number | Maximum allowed clock (wall) time, in minutes, for the analysis to run.
ClusterMemory | number | Amount of memory, in GB, requested for a running job.
PipelineName | string | Pipeline name. REQUIRED
Notes | string | Extended notes about the pipeline.
NumberConcurrentAnalyses | number | Number of analyses allowed to run at the same time. This number is managed by NiDB and is different from the grid engine queue size.
ClusterNumberCores | number | Number of CPU cores requested for a running job.
ParentPipelines | string | Comma-separated list of parent pipelines.
ResultScript | string | Executable script run at completion of the analysis to find and insert results back into NiDB.
SubmitDelay | number | Delay, in hours, after the study datetime before submitting to the cluster. Allows time to upload behavioral data.
TempDirectory | string | The path to a temporary directory on a compute node, if used.
UseProfile | bool | true if using the profile option, false otherwise.
UseTempDirectory | bool | true if using a temporary directory, false otherwise.
Version | number | Version of the pipeline.
PrimaryScript | string | See details of pipeline scripts.
SecondaryScript | string | See details of pipeline scripts.
VirtualPath | string | Path of this pipeline within the squirrel package.
DataStepCount | number | Number of data steps.
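As an illustration, a pipeline entry might look like the following sketch. The pipeline name is borrowed from the example elsewhere in this document; all other values are hypothetical:

```json
{
  "PipelineName": "freesurferUnified6",
  "Version": 1,
  "Level": 1,
  "Description": "FreeSurfer cortical reconstruction",
  "ClusterType": "slurm",
  "ClusterQueue": "normal",
  "ClusterMemory": 16,
  "ClusterNumberCores": 4,
  "MaxWallTime": 2880,
  "DataCopyMethod": "cp",
  "DataStepCount": 1
}
```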
JSON array
JSON array
Experiments describe how data was collected from the participant. In other words, the methods used to get the data. This does not describe how to analyze the data once it’s collected.
*required
Files associated with this section are stored in the following directory, where ExperimentName is the unique name of the experiment:

/experiments/<ExperimentName>
Variable | Type | Description
---|---|---
| string | Unique name of the experiment. REQUIRED
| number | Number of files contained in the experiment. REQUIRED
| number | Size, in bytes, of the experiment files. REQUIRED
| string | Path to the experiment within the squirrel package.
squirrel | BIDS | Notes
---|---|---
subject | sub-* directory | The subject object. BIDS sub-* directories contain the ID; squirrel objects are identified by the ID.
study | ses-* directory, *_sessions.tsv | Session/imaging study object.
series | *.nii.gz files, *.nii files, anat, func, fmap, ieeg, perf, eeg directories, *events.json file, *events.tsv file, <modality>.json file | Mapping series within BIDS can be tricky. There is limited mapping between squirrel and BIDS for this object.
analysis | derivatives directory, figures directory, motion directory, *_scans.tsv file | The analysis results object/directory.
pipeline | code directory | Code, pipelines, scripts to perform analysis on raw data.
experiment | task-*.json, task-*.tsv | Details on the experiment.
root -> description | dataset_description.json | Details about the dataset.
root -> changes | CHANGES | Any information about changes to this dataset from a previous version.
root -> readme | README, README.md | More details about the dataset.
subject -> demographics | participants.tsv, participants.json | Details about subject demographics.
Overview of how to use the squirrel C++ library
The squirrel library is built using the Qt framework and gdcm. Both are available as open-source, and make development of the squirrel library much more efficient.
The Qt and gdcm libraries (or DLLs on Windows) will need to be redistributed along with any programs that use the squirrel library.
The squirrel library can be included at the top of your program. Make sure the path to the squirrel library is in the INCLUDE path for your compiler.
Create an object and read an existing squirrel package
All imaging data is stored in a Subject->Study(session)->Series hierarchy. Subjects are stored in the root of the squirrel object.
Access to these objects is similar to accessing subjects
JSON object
Format specification for v1.0
A squirrel contains a JSON file with meta-data about all of the data in the package, and a directory structure to store files. While many data items are optional, a squirrel package must contain a JSON file and a data directory.
JSON File
JSON is JavaScript object notation, and many tutorials are available for how to read and write JSON files. Within the squirrel format, keys are camel-case; for example dayNumber or dateOfBirth, where each word in the key is capitalized except the first word. The JSON file should be manually editable. JSON resources:
JSON tutorial - https://www.w3schools.com/js/js_json_intro.asp
JSON specification - https://www.json.org/json-en.html
Data types
The JSON specification includes several data types, but squirrel uses some derivative data types: string, number, date, datetime, char. Date, datetime, and char are stored as the JSON string datatype and should be enclosed in double quotes.
Directory Structure
The JSON file squirrel.json is stored in the root directory. A directory called data contains any data described in the JSON file. Files can be of any type, with any file extension. Because of the broad range of environments in which squirrel files are used, filenames must contain only alphanumeric characters; they cannot contain special characters or spaces, and must be less than 255 characters in length.
Squirrel Package
A squirrel directory structure becomes a package once it is combined into a single zip file. The compression level does not matter, as long as the file is a .zip archive. Once created, the package can be distributed to other instances of NiDB, to squirrel readers, or simply unzipped and examined manually. Packages can be created manually or exported using NiDB or squirrel converters.
JSON array
An array of series. Basic series information is stored in the main squirrel.json file. Extended information, including series parameters such as DICOM tags, is stored in a params.json file in the series directory.
* required
Files associated with this section are stored in the following directory, where subjectID, studyNum, and seriesNum are the actual subject ID, study number, and series number, for example /data/S1234ABC/1/1:

/data/<SubjectID>/<StudyNum>/<SeriesNum>
Behavioral data is stored in
/data/<SubjectID>/<StudyNum>/<SeriesNum>/beh
JSON object
This object contains information about the squirrel package.
*required
- orig - Original subject, study, series directory structure format. Example: S1234ABC/1/1
- seq - Sequential. Zero-padded sequential numbers. Example: 00001/0001/00001

- orig - Original, raw data format. If the original format was DICOM, the output format should be DICOM. See DICOM anonymization levels for details.
- anon - If the original format is DICOM, write anonymized DICOM, removing most PHI except dates. See DICOM anonymization levels for details.
- anonfull - If the original format is DICOM, the files will be fully anonymized by removing dates, times, and locations in addition to PHI. See DICOM anonymization levels for details.
- nifti3d - Nifti 3D format. Example: file001.nii, file002.nii, file003.nii
- nifti3dgz - gzipped Nifti 3D format. Example: file001.nii.gz, file002.nii.gz, file003.nii.gz
- nifti4d - Nifti 4D format. Example: file.nii
- nifti4dgz - gzipped Nifti 4D format. Example: file.nii.gz
Notes about the package are stored here. This includes import and export logs, and notes from imported files. This is generally a freeform object, but notes can be divided into sections.
Files associated with this section are stored in the following directory
/
Section | Description |
---|---|
Modality | DICOM standard | NiDB support | Description |
---|---|---|---|
Variable | Type | Description
---|---|---
SubjectCount | number | Number of subjects in the package.
GroupAnalysisCount | number | Number of group analyses.
| JSON array | Array containing the subjects.
| JSON array | Array containing group analyses.
Variable | Type | Description
---|---|---
| JSON object | Package information.
| JSON object | Raw and analyzed data.
| JSON object | Methods used to analyze the data.
| JSON object | Experimental methods used to collect the data.
| JSON object | Data dictionary containing descriptions, mappings, and key/value information for any variables in the package.
NumPipelines | number | Number of pipelines.
NumExperiments | number | Number of experiments.
TotalFileCount | number | Total number of data files in the package, excluding .json files.
TotalSize | number | Total size, in bytes, of the data files.
Type | Notes | Example
---|---|---
string | Regular string | "My string of text"
number | Any JSON-acceptable number | 3.14159 or 1000000
datetime | Formatted as YYYY-MM-DD HH:MI:SS, where all numbers are zero-padded and a 24-hour clock is used. Stored as a JSON string | "2022-12-03 15:34:56"
date | Formatted as YYYY-MM-DD | "1990-01-05"
char | A single character | F
bool | true or false | true
JSON array | Item is a JSON array of any data type |
JSON object | Item is a JSON object |
Variable | Type | Description
---|---|---
SeriesNumber | number | Series number. May be sequential, correspond to the NiDB-assigned series number, or be taken from the DICOM header.
SeriesDatetime | date | Date of the series, usually taken from the DICOM header.
SeriesUID | string | From the SeriesUID DICOM tag.
Description | string | Description of the series.
Protocol | string | Protocol name.
ExperimentName | string | Experiment name associated with this series. Experiments link to the experiments section of the squirrel package.
Size | number | Size of the data, in bytes.
FileCount | number | Total number of files (including files in subdirectories).
BehavioralSize | number | Size of behavioral data, in bytes.
BehavioralFileCount | number | Total number of behavioral files (including files in subdirectories).
JSON file
/data/<SubjectID>/<StudyNum>/<SeriesNum>/params.json
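A hypothetical series entry using the variables above might look like the following sketch (all values are illustrative):

```json
{
  "SeriesNumber": 1,
  "SeriesDatetime": "2022-12-03 15:40:12",
  "Description": "T1w MPRAGE",
  "Protocol": "T1w",
  "Size": 23847323,
  "FileCount": 176,
  "BehavioralFileCount": 0
}
```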
JSON object
Variable | Type | Description
---|---|---
PackageFormat | string | Always squirrel.
SquirrelVersion | string | Squirrel format version.
SquirrelBuild | string | Build version of the squirrel library and utilities.
NiDBVersion | string | The NiDB version that wrote the package.
PackageName | string | Short name of the package.
Description | string | Longer description of the package.
Datetime | datetime | Datetime the package was created.
SubjectDirectoryFormat | string | orig or seq (see details below).
StudyDirectoryFormat | string | orig or seq (see details below).
SeriesDirectoryFormat | string | orig or seq (see details below).
DataFormat | string | Data format in which imaging data is written. Squirrel should attempt to convert to the specified format if possible: orig, anon, anonfull, nifti3d, nifti3dgz, nifti4d, nifti4dgz (see details below).
License | string | Any sharing or license notes, or LICENSE files.
Readme | string | Any README files.
Changes | string | Any CHANGES files.
Notes | JSON object | See details below.
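A minimal package-information object might look like the following sketch (all values are illustrative):

```json
{
  "PackageFormat": "squirrel",
  "SquirrelVersion": "1.0",
  "PackageName": "ExampleStudy2023",
  "Description": "Example imaging dataset",
  "Datetime": "2023-01-15 09:30:00",
  "SubjectDirectoryFormat": "orig",
  "StudyDirectoryFormat": "orig",
  "SeriesDirectoryFormat": "orig",
  "DataFormat": "nifti4dgz"
}
```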
Section | Description
---|---
import | Any notes related to import. BIDS files such as README and CHANGES are stored here.
merge | Any notes related to the merging of datasets, such as information about renumbering of subject IDs.
export | Any notes related to the export process.
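A hypothetical Notes object with all three sections might look like:

```json
{
  "import": "Imported from a BIDS dataset. Original README stored here.",
  "merge": "Subject IDs renumbered during merge with a second dataset.",
  "export": "Exported by NiDB."
}
```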
Modality | DICOM standard | NiDB support | Description
---|---|---|---
ASSESSMENT | | ✓ | Paper based assessment
AU | ✓ | | Audio ECG
AUDIO | | ✓ | Audio files
BI | ✓ | | Biomagnetic imaging
CD | ✓ | | Color flow Doppler
CONSENT | | ✓ | Scanned image of a consent form
CR | ✓ | ✓ | Computed Radiography
CR | | ✓ | Computed radiography (digital x-ray)
CT | ✓ | ✓ | Computed Tomography
DD | ✓ | | Duplex Doppler
DG | ✓ | | Diaphanography
DOC | | ✓ | Scanned documents
DX | ✓ | | Digital Radiography
ECG | ✓ | | Electrocardiogram
EEG | | ✓ | Electroencephalography
EPS | ✓ | | Cardiac Electrophysiology
ES | ✓ | | Endoscopy
ET | | ✓ | Eye-tracking
GM | ✓ | | General Microscopy
GSR | | ✓ | Galvanic skin response
HC | ✓ | | Hard Copy
HD | ✓ | | Hemodynamic Waveform
IO | ✓ | | Intra-oral Radiography
IVUS | ✓ | | Intravascular Ultrasound
LS | ✓ | | Laser surface scan
MEG | | ✓ | Magnetoencephalography
MG | ✓ | | Mammography
MR | ✓ | ✓ | MRI - Magnetic Resonance Imaging
NM | ✓ | | Nuclear Medicine
OP | ✓ | | Ophthalmic Photography
OT | ✓ | ✓ | Other DICOM
PPI | | ✓ | Pre-pulse inhibition
PR | ✓ | ✓ | Presentation State
PT | ✓ | ✓ | Positron emission tomography (PET)
PX | ✓ | | Panoramic X-Ray
RF | ✓ | | Radio Fluoroscopy
RG | ✓ | | Radiographic imaging (conventional film/screen)
RTDOSE | ✓ | | Radiotherapy Dose
RTIMAGE | ✓ | | Radiotherapy Image
RTPLAN | ✓ | | Radiotherapy Plan
RTRECORD | ✓ | | RT Treatment Record
RTSTRUCT | ✓ | | Radiotherapy Structure Set
SM | ✓ | | Slide Microscopy
SMR | ✓ | | Stereometric Relationship
SNP | | ✓ | SNP genetic information
SR | ✓ | ✓ | Structured reporting document
ST | ✓ | | Single-photon emission computed tomography (SPECT)
SURGERY | | ✓ | Pre-surgical Mapping
TASK | | ✓ | Task
TG | ✓ | | Thermography
TMS | | ✓ | Transcranial magnetic stimulation
US | ✓ | ✓ | Ultrasound
VIDEO | | ✓ | Video
XA | ✓ | ✓ | X-Ray Angiography
XC | ✓ | | External-camera Photography
XRAY | | ✓ | X-ray
JSON object
The data-dictionary object stores information describing mappings or any other descriptive information about the data. This can also contain any information that doesn't fit elsewhere in the squirrel package, such as project descriptions.
Examples include mapping numeric values (1,2,3,...) to descriptions (right, left, ambi, ...)
data-dictionary
data-dictionary-item
Files associated with this section are stored in the following directory.
/data-dictionary
Details about how pipeline scripts are formatted for squirrel and NiDB
Pipeline scripts are meant to be run in bash. They are traditionally formatted to run on a RHEL-compatible distribution such as CentOS or Rocky Linux. The scripts are bash-compliant, but have some nuances that allow them to run more effectively under an NiDB pipeline setup.
The bash script is interpreted to run on a cluster. Some commands are added to your script to allow it to check in and give status to NiDB as it is running.
There is no need for a shebang line at the beginning (for example #!/bin/sh), because only the commands in the script are interpreted.
Example script...
Before being submitted to the cluster, the script is passed through the NiDB interpreter, and the actual bash script will look like the one below. This script runs on subject S2907GCS, study 8, under the freesurferUnified6 pipeline. This script will then be submitted to the cluster.
... script is submitted to the cluster
How to interpret the altered script
- Details for the grid engine are added at the beginning. This includes max wall time, output directories, run-as user, etc.
- Each command is changed to include logging and check-ins. nidb cluster -u pipelinecheckin checks the current step in to the database. This is displayed on the Pipelines --> Analysis webpage.
- Each command is also echoed to the grid engine log file, so you can check the log file for the status.
- The output of each command is echoed to a separate log file on the last line, using the >> operator.
There are a few pipeline variables that are interpreted by NiDB when running. The variable is replaced with the value before the final script is written out. Each study on which a pipeline runs will have a different script, with different paths, IDs, and other variables listed below.
Separate JSON file - params.json
Series collection parameters are stored in a separate JSON file called params.json in the series directory. The JSON object is an array of key-value pairs. This can be used for MRI sequence parameters.
All DICOM tags are acceptable parameters; see this list of available DICOM tags: https://exiftool.org/TagNames/DICOM.html. Variable keys can be either the hexadecimal format (ID) or the string format (Name), for example 0018:1030 or ProtocolName. The params object contains any number of key/value pairs.
Files associated with this section are stored in the following file, where subjectID, studyNum, and seriesNum are the actual subject ID, study number, and series number, for example /data/S1234ABC/1/1:

/data/<SubjectID>/<StudyNum>/<SeriesNum>/params.json
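A hypothetical params.json might mix string-format and hexadecimal-format keys, for example ProtocolName alongside 0018:0080 (RepetitionTime) and 0018:0081 (EchoTime); the values shown are illustrative:

```json
{
  "Modality": "MR",
  "ProtocolName": "T1w MPRAGE",
  "0018:0080": "2400",
  "0018:0081": "2.22"
}
```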
The following OS configurations have been tested to build squirrel with Qt 6.5

Compatible:
- RHEL-compatible Linux 8 (not 8.6)
- CentOS 8 (not CentOS 8 Stream)
- CentOS 7
- Windows 10/11
The squirrel library and utils cannot be built on CentOS Stream 8 or Rocky Linux 8.6. These releases have kernel bugs that do not interact correctly with Qt's QProcess library, which can lead to inconsistencies when running shell commands, and to qmake build errors.
Other OS configurations may work to build squirrel, but have not been tested.
Install the following as root

Install Qt
- Make the installer executable: chmod 777 qt-unified-linux-x64-x.x.x-online.run
- Run ./qt-unified-linux-x64-x.x.x-online.run
- The Qt Maintenance Tool will start. An account is required to download Qt open source.
- On the components screen, select the checkbox for Qt 6.5.3 → Desktop gcc 64-bit
Install build environment
Install Qt 6.4.2 for MSVC2019 x64
Install Qt
Run the setup program.
The Qt Maintenance Tool will start. An account is required to download Qt open source.
On the components screen, select the checkbox for Qt 6.5.3 → MSVC 2019 64-bit
Once the build environment is set up, the build process can be performed by script. The build.sh script will build the squirrel library files and the squirrel utils.
The first time building squirrel on this machine, perform the following
This will build gdcm (squirrel depends on GDCM for reading DICOM headers), squirrel lib, and squirrel-gui.
All subsequent builds on this machine can be done with the following
Using Github Desktop, clone the squirrel repository to C:\squirrel

Build GDCM
- Open CMake
- Set the source directory to C:\squirrel\src\gdcm
- Set the build directory to C:\squirrel\bin\gdcm
- Click Configure (click Yes to create the build directory)
- Select Visual Studio 16 2019. Click Finish
- After it's done generating, make sure GDCM_BUILD_SHARED_LIBS is checked
- Click Configure again
- Click Generate. This will create the Visual Studio solution and project files
- Open the C:\squirrel\bin\gdcm\GDCM.sln file in Visual Studio
- Change the build to Release
- Right-click ALL_BUILD and click Build

Build squirrel library
- Double-click C:\squirrel\src\squirrel\squirrellib.pro
- Configure the project for Qt 6.4.2 as necessary
- Switch the build to Release and build it
- squirrel.dll and squirrel.lib will now be in C:\squirrel\bin\squirrel

Build squirrel-gui
- Double-click C:\squirrel\src\squirrel-gui\squirrel-gui.pro
- Configure the project for Qt 6.4.2 as necessary
- Switch the build to Release and build it
Once you've been granted access to the squirrel project on github, you'll need to add your server's SSH key to your github account (github.com --> click your username --> Settings --> SSH and GPG keys). There are directions on the github site for how to do this. Then you can clone the current source code into your server.
This will create a git repository called squirrel in your home directory.
To keep your local copy of the repository up to date, you'll need to pull any changes from github.
This may happen if the build machine does not have enough RAM or processors. More likely this is happening inside of a VM in which the VM does not have enough RAM or processors allocated.
This error happens because of a kernel bug in RHEL 8.6. Downgrade to 8.5 or upgrade to 8.7.
This example is from nidb. If you get an error similar to the following, you'll need to install the missing library. You can check which libraries are missing by running ldd on the nidb executable. Copy the missing library file(s) to /lib as root, then run ldconfig to register any new libraries.
If you are using a virtual machine to build, there are a couple of strange bugs in VMWare Workstation Player (and possibly other VMWare products) where the network adapters on a Linux guest simply stop working: they are offline and cannot be activated. Alternatively, the network connection is present, but your VM is inaccessible from the outside.
Try these fixes to get the network back:
While the VM is running, suspend the guest OS. Wait for it to suspend and close itself. Then resume the guest OS. No idea why, but this should fix the lack of network adapter in Linux.
Open the VM settings. Go to network, and click the button to edit the bridged adapters. Uncheck the VM adapter. This is if you are using bridged networking only.
Switch to NAT networking. This may be better if you are connected to a public wifi.
Copy the squirrel library files to the lib directory. The libs will then be available for the whole system.
Download Qt open-source from
Install Visual Studio Community edition, available from Microsoft. Install the C++ extensions.
Install CMake
Install Github Desktop, or TortoiseGit, or another Git interface
Variable | Type | Description
---|---|---
MeasureName | string | Name of the measure. REQUIRED
DateStart | datetime | Start datetime of the measurement. REQUIRED
DateEnd | datetime | End datetime of the measurement.
DateRecordCreate | datetime | Date the record was created in the current database. The original record may have been imported from another database.
DateRecordEntry | datetime | Date the record was first entered into a database.
DateRecordModify | datetime | Date the record was modified in the current database.
InstrumentName | string | Name of the instrument associated with this measure.
Rater | string | Name of the rater.
Notes | string | Detailed notes.
Value | string | Value (string or number).
Description | string | Longer description of the measure.
Duration | number | Duration of the measure in seconds, if known.
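A hypothetical measure entry using the variables above might look like the following sketch (instrument, rater, and values are illustrative):

```json
{
  "MeasureName": "Vocabulary",
  "DateStart": "2022-12-03 10:00:00",
  "DateEnd": "2022-12-03 10:15:00",
  "InstrumentName": "WASI-II",
  "Rater": "J. Smith",
  "Value": "54",
  "Duration": 900
}
```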
Variable | Type | Description
---|---|---
NumFiles | number | Number of files contained in the experiment. REQUIRED
Size | number | Size, in bytes, of the experiment files. REQUIRED
VirtualPath | string | Path to the data-dictionary within the squirrel package.
data-dictionary-item | JSON array | Array of data dictionary items.
Variable | Type | Description
---|---|---
VariableType | string | Type of variable. REQUIRED
VariableName | string | Name of the variable. REQUIRED
Description | string | Description of the variable.
KeyValueMapping | string | List of possible key/value mappings in the format key1=value1, key2=value2. Example: 1=Female, 2=Male
ExpectedTimepoints | number | Number of expected timepoints. Example: the study is expected to have 5 records of a variable.
RangeLow | number | For numeric values, the lower limit.
RangeHigh | number | For numeric values, the upper limit.
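Following the numeric-to-description mapping example mentioned earlier (right, left, ambi), a hypothetical data-dictionary-item might look like:

```json
{
  "VariableType": "number",
  "VariableName": "Handedness",
  "Description": "Self-reported handedness",
  "KeyValueMapping": "1=right, 2=left, 3=ambi",
  "ExpectedTimepoints": 1,
  "RangeLow": 1,
  "RangeHigh": 3
}
```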
Variable | Type | Description
---|---|---
PipelineName | string | Name of the pipeline used to generate these results. REQUIRED
PipelineVersion | number | Version of the pipeline used. REQUIRED
DateStart | date | Datetime of the start of the analysis. REQUIRED
DateEnd | date | Datetime of the end of the analysis.
DateClusterStart | date | Datetime the job began running on the cluster.
DateClusterEnd | date | Datetime the job finished running on the cluster.
SetupTime | number | Elapsed wall time, in seconds, to copy data and set up the analysis.
RunTime | number | Elapsed wall time, in seconds, to run the analysis after setup.
SeriesCount | number | Number of series downloaded/used to perform the analysis.
Successful | bool | Whether the analysis ran to completion without error and the expected files were created.
Size | number | Size in bytes of the analysis.
Hostname | string | If run on a cluster, the hostname of the node on which the analysis ran.
Status | string | Status; should always be ‘complete’.
StatusMessage | string | Last running status message.
VirtualPath | string | Relative path to the data within the package.
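A hypothetical analysis entry, reusing the freesurferUnified6 pipeline named elsewhere in this document, might look like the following sketch:

```json
{
  "PipelineName": "freesurferUnified6",
  "PipelineVersion": 1,
  "DateStart": "2022-07-04 13:00:00",
  "DateEnd": "2022-07-05 01:12:43",
  "SetupTime": 120,
  "RunTime": 43800,
  "SeriesCount": 1,
  "Successful": true,
  "Status": "complete"
}
```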
Variable | Type | Description
---|---|---
AssociationType | string | study or subject. REQUIRED
BehavioralDirectory | string | If BehFormat writes data to a subdirectory, the directory should be named this.
BehavioralDirectoryFormat | string | nobeh, behroot, behseries, behseriesdir
DataFormat | string | native, dicom, nifti3d, nifti4d, analyze3d, analyze4d, bids. REQUIRED
Enabled | bool | Whether the step is enabled.
Gzip | bool | Whether to gzip data if converted to Nifti.
ImageType | string | Comma-separated list of image types, often derived from the DICOM ImageType tag (0008:0008).
DataLevel | string | nearestintime or samestudy. Where the data is coming from. REQUIRED
Location | string | Directory, relative to the analysisroot, where this data will be written.
Modality | string | Modality to search for. REQUIRED
NumberBOLDreps | string | If SeriesCriteria is set to usecriteria, then search based on this option.
NumberImagesCriteria | string |
Optional | bool | Whether this step is optional. If not optional, the analysis will not run if the data step is not found. REQUIRED
Order | number | The numerical order of this step. REQUIRED
PreserveSeries | bool | Whether to preserve series numbers or assign new ordinal numbers.
PrimaryProtocol | bool | Whether this data step determines the primary study, from which subsequent analyses are run.
Protocol | string | Comma-separated list of protocol name(s).
SeriesCriteria | string | Criteria for which series are downloaded if more than one matches: all, first, last, largest, smallest, usecriteria.
UsePhaseDirectory | bool | Write data to a subdirectory based on the phase encoding direction.
UseSeriesDirectory | bool | Write each series to an individually numbered directory.
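A hypothetical data step matching a T1-weighted MR series might look like the following sketch (the protocol name and format choices are illustrative):

```json
{
  "Order": 1,
  "Modality": "MR",
  "Protocol": "T1w MPRAGE",
  "DataFormat": "nifti4d",
  "AssociationType": "study",
  "DataLevel": "samestudy",
  "SeriesCriteria": "first",
  "Optional": false,
  "PrimaryProtocol": true,
  "Gzip": true,
  "Enabled": true
}
```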
Variable | Description
---|---
{NOLOG} | Does not append >> to the end of a command to log the output.
{NOCHECKIN} | Does not prepend a command with a check-in, and does not echo the command being run. This is useful (necessary) when running multi-line commands like for loops and if/then statements.
{PROFILE} | Prepends the command with a profiler to output information about CPU and memory usage.
{analysisrootdir} | The full path to the analysis root directory. ex: /home/user/thePipeline/S1234ABC/1/
{subjectuid} | The UID of the subject being analyzed. ex: S1234ABC
{studynum} | The study number of the study being analyzed. ex: 2
{uidstudynum} | UID and study number together. ex: S1234ABC2
{pipelinename} | The pipeline name.
{studydatetime} | The study datetime. ex: 2022-07-04 12:34:56
{first_ext_file} | Replaced with the first file (alphabetically) found with the ext extension.
{first_n_ext_files} | Replaced with the first N files (alphabetically) found with the ext extension.
{last_ext_file} | Replaced with the last file (alphabetically) found with the ext extension.
{all_ext_files} | Replaced with all files (alphabetically) found with the ext extension.
{command} | The command being run. ex: ls -l
{workingdir} | The current working directory.
{description} | The description of the command. This is anything following the #, also called a comment.
{analysisid} | The analysisID of the analysis. This is useful when inserting analysis results, as the analysisID is required to do that.
{subjectuids} | [Second-level analysis] List of subject UIDs.
{studydatetimes} | [Second-level analysis] List of study datetimes in the group.
{analysisgroupid} | [Second-level analysis] The analysisID.
{uidstudynums} | [Second-level analysis] List of UIDStudyNums.
{numsubjects} | [Second-level analysis] Total number of subjects in the group analysis.
{groups} | [Second-level analysis] List of group names contributing to the group analysis. Sometimes this can be used when comparing groups.
{numsubjects_groupname} | [Second-level analysis] Number of subjects within the specified group name.
{uidstudynums_groupname} | [Second-level analysis] Number of studies within the specified group name.
Variable | Description | Example
---|---|---
{Key:Value} | A unique key, sometimes derived from the DICOM header | Protocol, T1w; FieldStrength, 3.0