neurd package

Subpackages

Submodules

neurd.ais_utils module

neurd.ais_utils.ais_count_bins_dict(ais_distance_min=0, ais_distance_max=50000, interval_dist=None, n_intervals=20, verbose=False, name_method='upper', ais_distance_name='ais_distance', prefix='n_ais')[source]

Purpose: To generate a dictionary of projections used to discretely count which bins the AIS synapses fall into

Pseudocode: 1) Using the min, max, and step size, compute the intervals 2) Build a dictionary mapping a name to the dj query for that interval
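The two pseudocode steps can be sketched in plain Python. This is only an illustration: the bin-name scheme and the query-string format below are assumptions, not the module's actual output.

```python
# Sketch (not the actual implementation): build equally spaced distance
# intervals and map a readable name to a restriction string for each bin.
# Parameter names mirror the signature above; the query format is assumed.

def ais_count_bins_dict_sketch(
    ais_distance_min=0,
    ais_distance_max=50_000,
    n_intervals=20,
    ais_distance_name="ais_distance",
    prefix="n_ais",
):
    interval_dist = (ais_distance_max - ais_distance_min) / n_intervals
    bins = {}
    for i in range(n_intervals):
        lower = ais_distance_min + i * interval_dist
        upper = lower + interval_dist
        name = f"{prefix}_{int(lower)}_to_{int(upper)}"
        bins[name] = (
            f"({ais_distance_name} >= {lower}) AND ({ais_distance_name} < {upper})"
        )
    return bins

bins = ais_count_bins_dict_sketch(n_intervals=4, ais_distance_max=200)
# e.g. bins["n_ais_0_to_50"] == "(ais_distance >= 0.0) AND (ais_distance < 50.0)"
```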

neurd.ais_utils.filter_ais_df_cell_type_splits_by_n_ais_perc(df, column='total_ais_postsyn', percentile=99, category_columns='cell_type', verbose=True)[source]

Purpose: To filter the excitatory and inhibitory cells to a certain percentile
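A pure-Python sketch of the filtering idea (the real function operates on a pandas DataFrame): within each cell_type group, drop rows whose count exceeds that group's 99th percentile. The nearest-rank percentile helper and the row layout are assumptions for illustration.

```python
from collections import defaultdict

def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100]); an assumed helper."""
    s = sorted(values)
    idx = max(0, int(round(q / 100 * len(s))) - 1)
    return s[idx]

def filter_by_n_ais_percentile(rows, column="total_ais_postsyn",
                               pct=99, category_column="cell_type"):
    # Group the count column by category (e.g. excitatory vs inhibitory)
    groups = defaultdict(list)
    for r in rows:
        groups[r[category_column]].append(r[column])
    # Per-category cutoff, then keep only rows at or below it
    cutoffs = {k: percentile(v, pct) for k, v in groups.items()}
    return [r for r in rows if r[column] <= cutoffs[r[category_column]]]

rows = [{"cell_type": "exc", "total_ais_postsyn": v} for v in range(1, 101)]
kept = filter_by_n_ais_percentile(rows)  # drops the single row above the 99th percentile
```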

neurd.ais_utils.n_ais_columns(df, min_dist=0, max_dist=inf, verbose=False)[source]
neurd.ais_utils.n_ais_sum_from_min_max_dist(df, min_dist=10000, max_dist=40000, column=None, verbose=False)[source]

Purpose: Return the sum of the bin counts over a range from min dist to max dist

Pseudocode: 1) Get all of the columns in range
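A sketch of the column selection and sum, assuming bin-count columns named like `n_ais_<low>_to_<high>` (following the naming scheme of `ais_count_bins_dict` above); the real function works on a DataFrame.

```python
# Sum every bin column whose distance range lies inside [min_dist, max_dist].
def n_ais_sum_sketch(row, min_dist=10_000, max_dist=40_000, prefix="n_ais_"):
    total = 0
    for name, value in row.items():
        if not name.startswith(prefix):
            continue
        # Parse the '<low>_to_<high>' suffix of the column name
        low, high = (int(x) for x in name[len(prefix):].split("_to_"))
        if low >= min_dist and high <= max_dist:
            total += value
    return total

row = {"n_ais_0_to_10000": 5, "n_ais_10000_to_20000": 3,
       "n_ais_20000_to_30000": 2, "n_ais_40000_to_50000": 7}
n_ais_sum_sketch(row)  # -> 5 (the 10k-20k and 20k-30k bins)
```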

neurd.apical_utils module

Module for helping to classify the different compartments

Compartment List and Description

axon:

soma:

apical_shaft (only excitatory):

the first protrusion of the apical neurite from the soma (the shaft generally projects upward toward the top of the volume as a single-lined entity)

apical (only excitatory):

all branches downstream of the first apical shaft protrusion (includes apical_shaft, apical_tuft, oblique). This compartment includes offshoots of the apical shaft that were not close to a 90 degree protrusion angle with respect to the apical shaft trajectory

apical_tuft (only excitatory):

the branches downstream of the point at which the apical shaft terminates its straight upward trajectory and branches into 2 or more offshoots with a non-straight upward trajectory

oblique (only excitatory):

offshoots of the apical shaft (before the apical_tuft section) that were close to a 90 degree protrusion angle with respect to the apical shaft trajectory

dendrite (only inhibitory):

any non-axon neurites on an inhibitory cell

basal (only excitatory):

any non-apical neurites on an excitatory cell

Good neuron to show off for classification

old_seg_id = 864691135099943968
neuron_obj = du.decomposition_with_spine_recalculation(old_seg_id, 0)
from . import apical_utils as apu
apu.apical_classification(neuron_obj,
    plot_labels=True,
    verbose=True,
)

neurd.apical_utils.add_compartment_coarse_fine_to_df(df, compartment_column='compartment')[source]

Purpose: To add compartment coarse and fine to a dataframe with a compartment column

neurd.apical_utils.apical_classification(neuron_obj, soma_angle_max=None, plot_filtered_limbs=False, multi_apical_height=None, plot_shaft_like_limb_branch=False, plot_candidates=False, candidate_connected_component_radius=None, multi_apical_possible=None, plot_base_filtered_candidates=False, plot_base_filtered_df=False, verbose=False, plot_winning_candidate=False, print_df_for_filter_to_one=False, plot_winning_candidate_expanded=False, plot_apical_limb_branch=False, label_basal=True, label_apical_tuft=True, label_oblique=True, plot_labels=False, apply_synapse_compartment_labels=True, rotation_function=None, unrotation_function=None, plot_rotated_function=False)[source]

Purpose: Will identify the limb branch that represents the apical shaft

Pseudocode: 1) Filter the limbs that are being considered to have the shaft 2) Find the shaft-like limb branch 3) Divide the shaft-like limb branch into candidates 4) Filter the shaft candidates for bare minimum requirements 5) Filter shaft candidates to one winner 6) If a winner was found -> expand the shaft candidate to connect to the soma 7) Convert the winning candidate into a limb branch dict 8) Find the limb branch dict of all the apical 9) Add the apical_shaft and apical labels

Ex: apical_limb_branch_dict = apu.apical_classification(n_test,
    plot_labels=True,
    verbose=False)

neurd.apical_utils.apical_classification_high_soma_center(neuron_obj, possible_apical_limbs=None, soma_angle_max=None, plot_filtered_limbs=False, width_min=450, distance_from_soma=80000, plot_thick_apical_candidates=False, min_thick_near_soma_skeletal_length=10000, plot_final_apical=False, verbose=False)[source]

Purpose: To identify multiple possible apicals that are at the top of the soma mesh

Pseudocode: 1) If necessary, filter the limbs 2) On the limbs, find the number of thick limbs within a certain radius of the soma

Ex: apu.apical_classification_high_soma_center(n_obj_1,
    verbose=True,
    plot_final_apical=True)

neurd.apical_utils.apical_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.apical_shaft_classification_old(neuron_obj, candidate=None, limb_branch_dict=None, plot_shaft_branches=False, max_distance_from_soma_for_start_node=3000, plot_shaft_candidates=False, verbose=False, skip_distance=5000, plot_filtered_candidates=False, plot_final_shaft=False, return_limb_branch_dict=True, **kwargs)[source]

Purpose: Will find apical shaft on apical candidates

Pseudocode: 1) Find the apical shaft query 2) If there are shaft branches, divide them into candidates 3) For each candidate, filter into a non-branching branch list 4) Pick the largest candidate as the winner

Ex: from neurd import apical_utils as apu
apu.apical_shaft_classification(neuron_obj,
    candidate=apical_candidates[0],
    verbose=True,
    plot_shaft_branches=True,
    plot_shaft_candidates=True,
    plot_filtered_candidates=True,
    plot_final_shaft=True,
    skip_distance=100000,
)

neurd.apical_utils.apical_shaft_direct_downstream(neuron_obj, downstream_buffer_from_soma=3000, plot_limb_branch=False, verbose=False)[source]

Purpose: To find those branches that come directly off the apical shaft

Ex: apu.apical_shaft_direct_downstream(n_test,
    plot_limb_branch=True)

neurd.apical_utils.apical_shaft_like_limb_branch(neuron_obj, candidate=None, limb_branch_dict=None, limbs_to_process=None, max_upward_angle=None, min_upward_length=None, min_upward_per_match=None, min_upward_length_backup=None, min_upward_per_match_backup=None, width_min=None, plot_shaft_branches=False, verbose=False)[source]

Purpose: Will filter the limb branch for those that are apical shaft like

neurd.apical_utils.apical_shaft_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.apical_total_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.apical_tuft_classification(neuron_obj, plot_apical_tuft=False, add_labels=True, clear_prior_labels=True, add_low_degree_apicals_off_shaft=None, low_degree_apicals_min_angle=None, low_degree_apicals_max_angle=None, verbose=False, label='apical_tuft')[source]

Purpose: To classify the apical tuft branches based on previous apical shaft and apical classification

Pseudocode: 1) Get all of the nodes of the apical shaft that have no downstream apical shaft branches (assemble them into a limb branch) 2) Make all branches downstream of those the apical tuft

Ex: apu.apical_tuft_classification(neuron_obj,
    plot_apical_tuft=True,
    add_labels=True,
    clear_prior_labels=True,
    verbose=True)
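The two pseudocode steps can be illustrated on a toy tree, using a plain dict-of-children to stand in for the limb graph (the real function works on neuron_obj limbs): find apical-shaft nodes with no apical-shaft child, then collect everything downstream of them as apical tuft.

```python
# Toy sketch of apical-tuft labeling; the graph representation is assumed.
def apical_tuft_nodes(children, shaft_nodes):
    # 1) Shaft nodes with no downstream shaft branch (terminal shaft nodes)
    terminal_shaft = [n for n in shaft_nodes
                      if not any(c in shaft_nodes for c in children.get(n, []))]
    # 2) Everything downstream of a terminal shaft node is apical tuft
    tuft = set()
    stack = [c for n in terminal_shaft for c in children.get(n, [])]
    while stack:
        node = stack.pop()
        if node in tuft:
            continue
        tuft.add(node)
        stack.extend(children.get(node, []))
    return tuft

children = {0: [1], 1: [2, 3], 2: [4, 5], 3: []}
tuft = apical_tuft_nodes(children, shaft_nodes={0, 1, 2})  # -> {4, 5}
```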

neurd.apical_utils.apical_tuft_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.axon_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.basal_classfication(neuron_obj, plot_basal=False, add_labels=True, clear_prior_labels=True, verbose=False)[source]

Purpose: To identify and label the basal branches

neurd.apical_utils.basal_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.coarse_fine_compartment_from_label(label)[source]
neurd.apical_utils.colors_from_compartments(compartments)[source]
neurd.apical_utils.compartment_classification_by_cell_type(neuron_obj, cell_type, verbose=False, plot_compartments=False, apply_synapse_compartment_labels=True, **kwargs)[source]

Purpose: Will label a neuron by the compartments based on the cell type

neurd.apical_utils.compartment_feature_over_limb_branch_dict(neuron_obj, compartment_label, feature_for_sum=None, feature_func=None, verbose=False)[source]

To compute a certain feature over ONE compartment

neurd.apical_utils.compartment_features_from_skeleton_and_soma_center(neuron_obj, compartment_label, features_to_exclude=('length', 'n_branches'), soma_label='S0', soma_center=None, name_prefix=None, include_soma_starting_angles=True, neuron_obj_aligned=None, **kwargs)[source]

Purpose: Will compute features about a compartment from its skeleton and the skeleton in relation to the soma

Ex: apu.compartment_features_from_skeleton_and_soma_center(neuron_obj_proof,
    compartment_label="oblique")

neurd.apical_utils.compartment_from_branch(branch_obj)[source]
neurd.apical_utils.compartment_from_face_overlap_with_comp_faces_dict(mesh_face_idx, comp_faces_dict, default_value=None, verbose=False)[source]

Purpose: To find the compartment of a branch, given the face indices for a reference mesh and the compartments of that reference mesh

Pseudocode: Iterate through all compartments in the compartment dict 1) Find the overlap between the faces and the compartment 2) If the overlap is greater than 0 and greater than the current max, set it as the compartment

Ex: from neurd import neuron_utils as nru
neuron_obj = vdi.neuron_objs_from_cell_type_stage(segment_id)
decimated_mesh = vdi.fetch_segment_id_mesh(segment_id)
proofread_faces = vdi.fetch_proofread_neuron_faces(segment_id, split_index=split_index)
limb_branch_dict = None

limb_branch_face_dict = nru.limb_branch_face_idx_dict_from_neuron_obj_overlap_with_face_idx_on_reference_mesh(
    neuron_obj,
    faces_idx=proofread_faces,
    mesh_reference=decimated_mesh,
    limb_branch_dict=limb_branch_dict,
    verbose=False,
)

comp_faces_dict = vdi.compartment_faces_dict(segment_id, verbose=False)

apu.compartment_from_face_overlap_with_comp_faces_dict(
    mesh_face_idx=limb_branch_face_dict["L0"][2],
    comp_faces_dict=comp_faces_dict,
    verbose=True,
)
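The overlap rule in the pseudocode above reduces to an argmax over set intersections. A minimal sketch with plain face-index lists (the real function receives numpy index arrays):

```python
# Pick the compartment whose reference-mesh face set shares the most faces
# with the branch; ties and zero overlap fall back to the default value.
def compartment_from_overlap(mesh_face_idx, comp_faces_dict, default=None):
    faces = set(mesh_face_idx)
    best, best_overlap = default, 0
    for comp, comp_faces in comp_faces_dict.items():
        overlap = len(faces & set(comp_faces))
        if overlap > best_overlap:
            best, best_overlap = comp, overlap
    return best

comp_faces = {"apical": [0, 1, 2, 3], "basal": [4, 5, 6]}
comp = compartment_from_overlap([2, 3, 4], comp_faces)  # -> 'apical' (overlap 2 vs 1)
```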

neurd.apical_utils.compartment_label_from_branch_obj(branch_obj, label_order=['apical_tuft', 'apical_shaft', 'oblique', 'apical', 'basal', 'axon'], default_label='dendrite', verbose=False)[source]

Purpose: To add compartment labels to all the synapses of a branch based on the branch labels

Pseudocode: 0) Define the order of labels to check For each label: 1) Check whether the label is in the branch's labels 2a) If so, add the numbered label to all the synapses in the branch and break 2b) If not, continue to the next label

Ex: apu.compartment_label_from_branch_obj(branch_obj=neuron_obj[0][0],
    verbose=True)
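The label-order rule is a first-match lookup. A minimal sketch over a plain list of branch labels (the real function reads them off the branch object):

```python
# The first compartment label found on the branch wins; otherwise fall back
# to the default. label_order mirrors the signature's default above.
LABEL_ORDER = ["apical_tuft", "apical_shaft", "oblique", "apical", "basal", "axon"]

def compartment_label(branch_labels, label_order=LABEL_ORDER,
                      default_label="dendrite"):
    for label in label_order:
        if label in branch_labels:
            return label
    return default_label

compartment_label(["apical", "apical_shaft"])  # -> 'apical_shaft'
compartment_label([])                          # -> 'dendrite'
```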

neurd.apical_utils.compartment_label_to_all_labels(label)[source]
neurd.apical_utils.compartment_labels_for_externals()[source]
neurd.apical_utils.compartment_labels_for_stats()[source]
neurd.apical_utils.compartment_labels_for_synapses_stats()[source]
neurd.apical_utils.compartment_limb_branch_dict(neuron_obj, compartment_labels, not_matching_labels=None, match_type='any', **kwargs)[source]
neurd.apical_utils.compartment_mesh(neuron_obj, compartment_label)[source]
neurd.apical_utils.compartment_skeleton(neuron_obj, compartment_label)[source]
neurd.apical_utils.compartments_feature_over_limb_branch_dict(neuron_obj, compartment_labels=None, feature_for_sum=None, feature_func=None, feature_name=None, verbose=False)[source]

Purpose: To compute statistics for all the compartments of a neuron_obj

Pseudocode: For each compartment label: 1) Get the limb branch dict 2) Get the stats over that limb branch 3) Add to larger dict with modified names

Ex: apu.compartment_stats(neuron_obj_proof,
    compartment_labels=None,
    verbose=True)
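The aggregation loop in the pseudocode can be sketched with a stand-in feature function; the real version pulls a limb-branch dict per compartment from neuron_obj, and the key-naming scheme below is an assumption.

```python
# For each compartment: compute the feature over its branches and store it
# under a compartment-prefixed key in one combined stats dict.
def compartments_feature_sketch(limb_branch_by_compartment, feature_func,
                                feature_name="skeletal_length"):
    stats = {}
    for comp, branches in limb_branch_by_compartment.items():
        stats[f"{comp}_{feature_name}"] = feature_func(branches)
    return stats

lb = {"basal": [1.0, 2.0], "apical": [3.0]}
stats = compartments_feature_sketch(lb, feature_func=sum)
# -> {'basal_skeletal_length': 3.0, 'apical_skeletal_length': 3.0 + ...}
```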

neurd.apical_utils.compartments_from_cell_type(cell_type=None)
neurd.apical_utils.compartments_mesh(neuron_obj, compartment_labels=None, verbose=False)[source]
neurd.apical_utils.compartments_skeleton(neuron_obj, compartment_labels=None, verbose=False)[source]
neurd.apical_utils.compartments_stats(neuron_obj, compartment_labels=None, verbose=False)[source]

Purpose: To compute statistics for all the compartments of a neuron_obj

Pseudocode: For each compartment label: 1) Get the limb branch dict 2) Get the stats over that limb branch 3) Add to larger dict with modified names

Ex: apu.compartment_stats(neuron_obj_proof,
    compartment_labels=None,
    verbose=True)

neurd.apical_utils.compartments_to_plot(cell_type=None)[source]
neurd.apical_utils.dendrite_compartment_labels()[source]
neurd.apical_utils.dendrite_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.expand_candidate_branches_to_soma(neuron_obj, candidate, verbose=False, plot_candidate=False)[source]
neurd.apical_utils.filter_apical_candidates(neuron_obj, candidates, min_skeletal_length=None, min_distance_above_soma=None, verbose=False, print_node_attributes=False, plot_candidates=False)[source]
neurd.apical_utils.filter_apical_candidates_to_one(neuron_obj, candidates, non_upward_skeletal_distance_upstream_buffer=None, soma_diff_buffer=None, downstream_vector_diff_buffer=None, verbose=False, default_tie_breaker=None, plot_final_candidate=False, print_df_for_filter_to_one=False)[source]

Purpose: Will filter down the remaining candidates to just one optimal

neurd.apical_utils.limb_branch_compartment_dict_from_limb_branch_face_and_compartment_faces_dict(limb_branch_face_dict, compartment_faces_dict, verbose=False)[source]
neurd.apical_utils.limb_features_from_compartment(neuron_obj, compartment_label=None, compartment_limb_branch=None, verbose=False, rotation_function=None, apply_rotation=True, **kwargs)[source]

Purpose: To compute limb features that depend on alignment of neuron

Pseudocode: 1) Align neuron 2) Get the compartment limb branch dict

Ex: apu.limb_features_from_compartment(
    neuron_obj,
    compartment_limb_branch=neuron_obj.dendrite_limb_branch_dict,
    compartment_label="axon",
    apply_rotation=True,
)

neurd.apical_utils.limb_features_from_compartment_over_neuron(neuron_obj, compartments=('basal', 'apical_total', 'axon', 'dendrite'), rotation_function=None, verbose=False)[source]

Purpose: To run limb features for overview compartments

neurd.apical_utils.max_height_for_multi_soma()[source]
neurd.apical_utils.multi_soma_y()
neurd.apical_utils.non_upward_skeletal_distance_upstream(neuron_obj, candidate, max_angle=10000, min_angle=40, verbose=False, **kwargs)[source]

Purpose: Will find the amount of non-upward facing skeletal lengths upstream of a certain candidate

neurd.apical_utils.oblique_classification(neuron_obj, plot_apical_shaft_direct_downstream=False, min_angle=None, max_angle=None, per_match_ref_vector_min=None, dist_match_ref_vector_min=None, plot_oblique_start=False, plot_oblique=False, add_labels=True, clear_prior_labels=True, verbose=False, label='oblique')[source]

Purpose: To find the branches that come off the apical shaft at a certain angle

Pseudocode: 1) Get the apical_shaft_direct_downstream 2) Filter those branches for those with a certain angle 3) Find all branches downstream of those with a certain angle as the oblique branches
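The three pseudocode steps can be sketched with branch angles supplied directly (the real function measures them against the apical-shaft trajectory; the angle window below is illustrative, not the module defaults):

```python
# 1) take the direct-downstream branches, 2) keep those near 90 degrees,
# 3) everything downstream of a kept branch is oblique.
def oblique_nodes(direct_downstream, angle_of, children,
                  min_angle=70, max_angle=110):
    starts = [n for n in direct_downstream
              if min_angle <= angle_of[n] <= max_angle]
    oblique = set()
    stack = list(starts)
    while stack:
        node = stack.pop()
        if node in oblique:
            continue
        oblique.add(node)
        stack.extend(children.get(node, []))
    return oblique

children = {1: [3], 2: [], 3: []}
oblique = oblique_nodes([1, 2], angle_of={1: 85, 2: 30}, children=children)  # -> {1, 3}
```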

neurd.apical_utils.oblique_limb_branch_dict(neuron_obj)[source]
neurd.apical_utils.plot_compartment_mesh_and_skeleton(neuron_obj, compartment_label)[source]

Ex: apu.plot_compartment_mesh_and_skeleton(neuron_obj_proof, "basal")

neurd.apical_utils.print_compartment_features_dict_for_dj_table(comp_feature_dict)[source]
neurd.apical_utils.set_neuron_synapses_compartment(neuron_obj, **kwargs)[source]

Purpose: Will set the compartment labels of all synapses based on the compartment label of the branch

neurd.apical_utils.soma_angle_extrema_from_compartment(neuron_obj, compartment_label=None, compartment_limb_branch=None, default_value=None, extrema_type='min', verbose=False)[source]

Purpose: Find the max or min soma starting angle for all limbs with that compartment

neurd.apical_utils.soma_angle_max_from_compartment(neuron_obj, compartment_label=None, compartment_limb_branch=None, default_value=None, verbose=False)[source]
neurd.apical_utils.soma_angle_min_from_compartment(neuron_obj, compartment_label=None, compartment_limb_branch=None, default_value=None, verbose=False)[source]
neurd.apical_utils.spine_labels_from_compartment(label)[source]
neurd.apical_utils.syn_type_from_compartment(label)[source]

neurd.axon_utils module

neurd.axon_utils.axon_angles(neuron_obj, verbose=False)[source]

Purpose: To compute the axon angles for a neuron object that already has the axon identified

neurd.axon_utils.axon_angles_from_neuron(neuron_obj, return_max_min=True, return_n_angles=True, downstream_distance_for_axon_angle=None, verbose=False, return_dict=True, **kwargs)[source]

Purpose: to find the axon starting angle given an axon has already been classified

Pseudocode: 1) Get the name of the axon limb 2) If it has a name, get the axon limb branch dict and extract the branches 3) Send the branches to the trajectory angle function to get the axon angles

neurd.axon_utils.axon_branching_attributes(neuron_obj, limb_idx, branch_idx, verbose=False)[source]

Purpose: Will compute a lot of statistics about the branching behavior in an axon branching point

neurd.axon_utils.axon_classification_excitatory(neuron_obj, axon_soma_angle_threshold=None, ais_max_distance_from_soma=None, axon_classification_without_synapses_if_no_candidate=None, axon_classification_without_synapses=None, min_distance_from_soma_dendr_on_axon=None, ais_syn_density_max=None, ais_syn_density_max_backup=None, **kwargs)[source]

Purpose: To label the axon on an excitatory neuron

Example:

segment_id = 864691136333776819
neuron_obj = du.decomposition_with_spine_recalculation(segment_id, 0,
    ignore_DecompositionCellType=True)

validation = True
n_obj_exc_1 = syu.add_synapses_to_neuron_obj(neuron_obj,
    validation=validation,
    verbose=True,
    original_mesh=None,
    plot_valid_error_synapses=True,
    calculate_synapse_soma_distance=False,
    add_valid_synapses=True,
    add_error_synapses=False)

au.axon_classification_excitatory(neuron_obj=n_obj_exc_1)
nviz.plot_axon(n_obj_exc_1)

neurd.axon_utils.axon_classification_inhibitory(neuron_obj, min_distance_from_soma_dendr_on_axon=None, ais_new_width_min=None, **kwargs)[source]
neurd.axon_utils.axon_classification_using_synapses(neuron_obj, axon_soma_angle_threshold=None, ais_syn_density_max=None, ais_syn_alternative_max=None, ais_n_syn_pre_max=None, ais_width_min=None, ais_width_max=None, max_search_distance=None, min_skeletal_length=None, plot_filt_branches_without_postsyn_req=False, n_postsyn_max=None, postsyn_distance=None, plot_low_postsyn_branches=False, ais_width_filter=None, ais_new_width_min=None, ais_new_width_downstream_skeletal_length=None, ais_max_distance_from_soma=None, n_synapses_spine_offset_endpoint_upstream_max=None, attempt_second_pass=None, ais_syn_density_max_backup=None, ais_n_syn_pre_max_backup=None, max_search_distance_addition_backup=None, return_best_candidate=None, best_candidate_method=None, max_skeletal_length_above_threshold_and_buffer_soma_ranges=[10000, 25000, 50000, 75000, inf], max_skeletal_length_min=None, max_skeletal_length_buffer=None, significant_lowest_density_min_skeletal_length=None, lowest_density_ratio=None, backup_best_candidate_method=['significant_lowest_density', 'max_skeletal_length'], plot_final_axon=False, clean_prior_axon_labels=True, set_axon_labels=True, label_merge_errors=True, min_distance_from_soma_dendr_on_axon=None, plot_axon_on_dendrite=False, plot_dendrite_on_axon=False, return_axon_angle_info=True, downstream_distance_for_axon_angle=None, verbose=False, axon_classification_without_synapses_if_no_candidate=None, axon_classification_without_synapses=None, candidate_downstream_postsyn_density_max=None)[source]

Purpose: To find the axon limb branch for a generic neuron

Pseudocode:

Phase 1: Filtering 0) Optionally restrict limbs by their connection to the soma 1) Do a query to find branches that have - low synapse density - a minimum width - a minimum distance from the soma 2) Restrict the branches to only those without many downstream postsyns in the near vicinity

Phase 2: Gathering into Candidates

Phase 3: Picking winning Candidate

Things to improve: Can think about looking at insignificant limbs for axon

Ex: from neurd import axon_utils as au
axon_limb_branch_dict, axon_angles_dict = au.axon_classification_using_synapses(
    neuron_obj_exc_syn_sp,
    plot_filt_branches_without_postsyn_req=False,
    plot_low_postsyn_branches=False,
    plot_final_axon=True,
    verbose=True)
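A minimal sketch of the Phase 1 filter over plain per-branch records; the record fields and the threshold values are illustrative assumptions, not the module's defaults.

```python
# Keep branches with low synapse density, a minimum width, and an endpoint
# close enough to the soma to be AIS candidates.
def ais_candidate_branches(branches, syn_density_max=0.00015,
                           width_min=150, soma_dist_max=14_000):
    return [b for b in branches
            if b["synapse_density"] <= syn_density_max
            and b["width"] >= width_min
            and b["soma_distance"] <= soma_dist_max]

branches = [
    {"name": "b0", "synapse_density": 0.0001, "width": 300, "soma_distance": 5_000},
    {"name": "b1", "synapse_density": 0.0009, "width": 300, "soma_distance": 5_000},
    {"name": "b2", "synapse_density": 0.0001, "width": 80, "soma_distance": 5_000},
]
kept_names = [b["name"] for b in ais_candidate_branches(branches)]  # -> ['b0']
```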

neurd.axon_utils.axon_classification_without_synapses(neuron_obj, plot_axon_like_segments=False, axon_soma_angle_threshold=None, ais_max_search_distance=None, plot_candidates=False, plot_final_axon=False, verbose=False, axon_verbose=True, return_candidate=False, **kwargs)[source]
neurd.axon_utils.axon_features_from_neuron_obj(neuron_obj, add_axon_prefix_to_all_keys=True, features_to_exclude=(), add_axon_start_features=True, **kwargs)[source]
neurd.axon_utils.axon_like_segments(neuron_obj, include_ais=False, filter_away_end_false_positives=True, visualize_at_end=False, width_to_use=None, verbose=False)[source]
neurd.axon_utils.axon_limb_branch_dict(neuron_obj, **kwargs)[source]
neurd.axon_utils.axon_on_dendrite_limb_branch_dict(neuron_obj, **kwargs)[source]
neurd.axon_utils.axon_spines_limb_branch_dict(neuron_obj, ray_trace_min=None, ray_trace_max=None, skeletal_length_min=None, skeletal_length_max=None, n_synapses_pre_min=None, n_synapses_pre_max=None, n_faces_min=None, downstream_upstream_dist_diff=None, downstream_dist_min_over_syn=None, plot_short_end_nodes_with_syn=False, plot_axon_spines_branch_dict=False, exclude_starting_nodes=None, verbose=False)[source]

Purpose: To identify all of the boutons that sprout off and should not count toward a high branching degree

Brainstorming: end node has one or two synapses, with skeletal length between 1000 and 5000

85th-percentile ray trace (ray_trace_perc): above 270

Ex: from neurd import axon_utils as au
au.axon_spines_limb_branch_dict(neuron_obj,
    ray_trace_min=270,
    ray_trace_max=1200,
    skeletal_length_min=1000,
    skeletal_length_max=10000,
    n_synapses_pre_min=1,
    n_synapses_pre_max=3,
    n_faces_min=150,
    plot_short_end_nodes_with_syn=False,
    plot_axon_spines_branch_dict=False,
    exclude_starting_nodes=True,
    verbose=False,
)

neurd.axon_utils.axon_start_distance_from_soma(neuron_obj, default_value=None, verbose=False)[source]

Purpose: To find the distance of the start of an axon from the soma

Pseudocode: 1) Find the most upstream branch of the axon limb branch 2) Find the distance of that branch from the soma
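The two steps collapse to a minimum over per-branch soma distances once those are known. A toy sketch, with the branch-name keys and distance values assumed for illustration:

```python
# Among the axon's branches, the most upstream one has the smallest
# distance from the soma; report that distance (or a default if no axon).
def axon_start_distance(branch_soma_distances, default_value=None):
    if not branch_soma_distances:
        return default_value
    return min(branch_soma_distances.values())

axon_start_distance({"L1_b4": 12_000.0, "L1_b7": 3_500.0})  # -> 3500.0
axon_start_distance({}, default_value=-1)                   # -> -1
```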

neurd.axon_utils.axon_width(branch_obj, width_name='no_bouton_median', width_name_backup='no_spine_median_mesh_center', width_name_backup_2='median_mesh_center')[source]

Computes the width of the branch (specifically in the axon case)

Ex: branch_obj = neuron_obj["L6"][0]
au.axon_width(branch_obj)

nviz.visualize_neuron(neuron_obj,
    limb_branch_dict=dict(L6=[0]),
    mesh_color="red",
    mesh_whole_neuron=True)

neurd.axon_utils.bouton_meshes(mesh, clusters=5, smoothness=0.1, plot_segmentation=False, filter_away_end_meshes=True, cdf_threshold=0.2, plot_boutons=False, verbose=False, skeleton=None, min_size_threshold=None, max_size_threshold=None, size_type='faces', plot_boutons_after_size_threshold=False, ray_trace_filter='ray_trace_percentile', ray_trace_percentile=70, ray_trace_threshold=None, return_non_boutons=False, exclude_end_meshes_from_non_boutons=True, return_cdf_widths=False, end_mesh_method='endpoint_radius', endpoint_radius_threshold=1000, skeletal_length_max=2200)[source]
neurd.axon_utils.calculate_axon_webbing(neuron_obj, n_downstream_targets_threshold=2, width_threshold=inf, width_name='no_bouton_median', width_name_backup='no_spine_median_mesh_center', idx_to_plot=None, plot_intersection_mesh=False, plot_intersection_mesh_without_boutons=False, split_significance_threshold=None, plot_split=False, plot_split_closest_mesh=False, plot_segmentation_before_web=False, plot_web=False, plot_webbing_on_neuron=False, verbose=False, upstream_node_color='red', downstream_node_color='aqua')[source]

Purpose: To compute the webbing meshes for a neuron object that stores these meshes in the upstream node

Pseudocode: 1) Identify all nodes that have a specific amount of downstream nodes or a minimum number and a certain width

2) For each node to check, generate the webbing mesh a. Find the downstream nodes and generate the mesh combining the upstream and downstream nodes b. Find skeleton points around the intersection c. Split the mesh to only include the central part (after filtering away boutons) d. Run mesh segmentation to find the webbing e. Save the webbing and webbing cdf in the branch object

neurd.axon_utils.calculate_axon_webbing_on_branch(neuron_obj, limb_idx, branch_idx, allow_plotting=True, plot_intersection_mesh=False, plot_intersection_mesh_without_boutons=False, split_significance_threshold=None, plot_split=False, plot_split_closest_mesh=False, plot_segmentation_before_web=False, plot_web=False, verbose=False, upstream_node_color='red', downstream_node_color='aqua', maximum_volume_threshold=None, minimum_volume_threshold=None, smoothness=0.08, clusters=7)[source]

Purpose: If branch has been designated to be searched for a webbing, then run the webbing finding algorithm

neurd.axon_utils.calculate_boutons(neuron_obj, max_bouton_width_to_check=None, plot_axon_branches_to_check=False, width_name='no_bouton_median', old_width_name='no_spine_median_mesh_center', plot_boutons=False, verbose=False, **kwargs)[source]

Purpose: To find boutons on axon branches and then to save off the meshes and the widths without the boutons

Pseudocode: 1) Restrict the axon to only those branches that should be checked for boutons based on their width 2) Compute the Boutons for the restricted axon branches 3) Plot the boutons if requested

neurd.axon_utils.calculate_boutons_over_limb_branch_dict(neuron_obj, limb_branch_dict, width_name='no_bouton_median', old_width_name='no_spine_median_mesh_center', calculate_bouton_cdfs=True, catch_bouton_errors=False, verbose=False)[source]

Pseudocode: Iterate through all of the branch objects, compute the bouton meshes, and store them as boutons

neurd.axon_utils.complete_axon_processing(neuron_obj, cell_type=None, add_synapses_and_head_neck_shaft_spines=True, validation=False, perform_axon_classification=True, plot_initial_axon=False, rotation_function=None, unrotation_function=None, label_merge_errors=True, plot_axon_on_dendrite=False, plot_dendrite_on_axon=False, plot_high_fidelity_axon=False, plot_boutons_web=False, add_synapses_after_high_fidelity_axon=True, verbose=False, add_axon_description=True, return_filtering_info=True, return_axon_angle_info=True, filter_dendrite_on_axon=True, neuron_simplification=True, return_G_axon_labeled=False, original_mesh=None, **kwargs)[source]

To run the following axon classification processes 1) Initial axon classification 2) Filtering away dendrite on axon merges 3) High fidelity axon skeletonization 4) Bouton Identification 5) Webbing Identification

neurd.axon_utils.complete_axon_processing_old(neuron_obj, perform_axon_classification=True, plot_high_fidelity_axon=False, plot_boutons_web=False, verbose=False, add_axon_description=True, return_filtering_info=True, **kwargs)[source]

To run the following axon classification processes 1) Initial axon classification 2) Filtering away dendrite on axon merges 3) High fidelity axon skeletonization 4) Bouton Identification 5) Webbing Identification

neurd.axon_utils.compute_axon_on_dendrite_limb_branch_dict(neuron_obj, width_max=None, n_spines_max=None, n_synapses_post_spine_max=None, n_synapses_pre_min=None, synapse_pre_perc_min=None, synapse_pre_perc_downstream_min=None, axon_skeletal_legnth_min=None, skeletal_length_downstream_min=None, filter_away_thin_branches=None, dendrite_width_min=None, thin_axon_skeletal_length_min=None, thin_axon_n_synapses_post_downstream_max=None, filter_away_myelination=None, mesh_area_min=None, closest_mesh_skeleton_dist_max=None, plot_axon_on_dendrite=False, set_axon_labels=True, clean_prior_labels=True, prevent_downstream_axon=True, verbose=False)[source]

Purpose: To find the dendritic branches that are axon-like

Pseudocode: 1) Query for dendritic branches that a. are thin b. have at least one presyn c. have a high pre_percentage

2) Restrict the query to only those with a high pre percentage

neurd.axon_utils.compute_dendrite_on_axon_limb_branch_dict(neuron_obj, min_distance_from_soma=None, n_synapses_pre_min=None, synapse_post_perc_min=None, dendrite_width_min=None, dendrite_skeletal_length_min=None, spine_density_min=None, plot_dendr_like_axon=False, coarse_dendrite_filter=None, coarse_dendrite_axon_width_min=None, coarse_dendrite_synapse_post_perc_min=None, coarse_dendrite_n_synapses_post_min=None, coarse_dendrite_n_spines_min=None, coarse_dendrtie_spine_density=None, add_low_branch_cluster_filter=False, plot_low_branch_cluster_filter=False, synapse_post_perc_downstream_min=None, n_synapses_pre_downstream_max=None, filter_away_spiney_branches=None, n_synapses_post_spine_max=None, spine_density_max=None, plot_spiney_branches=False, plot_final_dendrite_on_axon=False, set_axon_labels=True, clean_prior_labels=True, verbose=False)[source]

Purpose: To find the axon branches that are dendritic-like

Previous things used to find dendrites:

dendritic_merge_on_axon_query = None
dendrite_merge_skeletal_length_min = 20000
dendrite_merge_width_min = 100
dendritie_spine_density_min = 0.00015

dendritic_merge_on_axon_query = (f"labels_restriction == True and "
    f"(median_mesh_center > {dendrite_merge_width_min}) and "
    f"(skeletal_length > {dendrite_merge_skeletal_length_min}) and "
    f"(spine_density) > {dendritie_spine_density_min}")

Pseudocode: 1) Filter away the safe postsyn group - thick - has a postsyn

neurd.axon_utils.compute_dendrite_on_axon_limb_branch_dict_excitatory(neuron_obj, min_distance_from_soma=None, **kwargs)[source]
neurd.axon_utils.compute_dendrite_on_axon_limb_branch_dict_inhibitory(neuron_obj, min_distance_from_soma=None, **kwargs)[source]
neurd.axon_utils.dendrite_limb_branch_dict(neuron_obj, **kwargs)[source]
neurd.axon_utils.dendrite_on_axon_limb_branch_dict(neuron_obj, **kwargs)[source]
neurd.axon_utils.filter_axon_limb_false_positive_end_nodes(curr_limb, curr_limb_axon_like_nodes, verbose=False, skeleton_length_threshold=30000)[source]

Purpose: Will remove end nodes that were accidentally mistaken as axons

neurd.axon_utils.filter_axon_neuron_false_positive_end_nodes(neuron_obj, current_axons_dict)[source]
neurd.axon_utils.filter_candidate_branches_by_downstream_postsyn(neuron_obj, candidates, max_search_distance=80000, max_search_distance_downstream=50000, skeletal_length_min_downstream=15000, postsyn_density_max=0.00015, filter_away_empty_candidates=True, verbose=False)[source]

Purpose: Need to refine the candidates so that they don't extend too far up, because then the propagation down will be bad - want to apply this only to branches close to the soma

Pseudocode: For branches with an endpoint closer than the max split distance 1) Find the children 2) Find all branches within a certain downstream distance of the children 3) If one of them has a significant skeleton with any axon, then remove the branch from the candidate

neurd.axon_utils.filter_candidates_away_with_downstream_high_postsyn_branches_NOT_USED(neuron_obj, axon_candidates, plot_ais_skeleton_restriction=False, max_search_skeletal_length=100000, skeletal_length_min=15000, postsyn_density_max=0.0015, verbose=True)[source]

Purpose: To rule out a candidate: if the postsyn density is high far from the AIS, it is probably not the right candidate

Pseudocode: 1) get the branches farther than AIS distance For each candidate 2) Filter the candidates by the limb branch to get remaining branches 3) Decide if enough skeletal length to work with (if not then continue) 4) Compute the postsynaptic density 5) if postsynaptic density is too high then don’t add to final candidates

Ex: au.filter_candidates_away_with_downstream_high_postsyn_branches_NOT_USED(
    neuron_obj,
    axon_candidates=[{'limb_idx': 'L0', 'start_node': 5, 'branches': [5]},
                     {'limb_idx': 'L0', 'start_node': 14, 'branches': [14]}],
    verbose=True)

neurd.axon_utils.myelination_limb_branch_dict(neuron_obj, min_skeletal_length=None, max_synapse_density=None, max_synapse_density_pass_2=None, min_skeletal_length_pass_2=None, max_width=None, min_distance_from_soma=None, min_distance_from_soma_pass_2=None, limb_branch_dict_restriction=None, skeletal_length_downstream_min=None, n_synapses_post_downstream_max=None, verbose=False, plot=False)[source]

Purpose: To find the parts of the axon that are myelinated with low postsyn and low width

neurd.axon_utils.short_thick_branches_limb_branch_dict(neuron_obj, width_min_threshold=None, skeletal_length_max_threshold=None, ray_trace_threshold=None, parent_width_threshold=None, plot_limb_branch_dict=False, exclude_starting_nodes=None, add_zero_width_segments=None, width_min_threshold_parent=None, width_global_min_threshold_parent=None, verbose=False, only_axon=True)[source]

Purpose: Identify short thick branches in a neuron object (excluding the starting node)

Application: Can be filtered away from high_degree_coordinate resolution for error detection

Pseudocode: 1) Query the limb or neuron using the - width threshold - skeletal length threshold - end node threshold (can exclude the starting node)

neurd.axon_utils.valid_web_for_t(mesh, size_threshold=120, size_type='ray_trace_median', above_threshold=True, verbose=False)[source]

Will return whether the mesh is a valid web mesh

neurd.axon_utils.wide_angle_t_candidates(neuron_obj, axon_only=True, child_width_maximum=75, parent_width_maximum=75, plot_two_downstream_thin_axon_limb_branch=False, plot_wide_angled_children=False, child_skeletal_threshold=10000, verbose=True)[source]

Purpose: To find all of the nodes that are thin, wide-angle t-splits in the neuron

Application: Can be used to identify merge errors when there is not a valid web mesh at the location

neurd.branch_attr_utils module

Purpose: For help with manipulating and calculating qualities of objects stored on branches - spines - synapses

neurd.branch_attr_utils.calculate_branch_attr_soma_distances_on_limb(limb_obj, branch_attr, calculate_endpoints_dist_if_empty=True, verbose=False)[source]

Purpose: To store the distances to the soma for all of the synapses

Computing the upstream soma distance for each branch: 1) Calculate the upstream distance 2) Calculate the upstream endpoint

For each synapse: 3) Soma distance = endpoint_dist

Ex: calculate_branch_attr_soma_distances_on_limb(limb_obj=neuron_obj[2], branch_attr="synapses", verbose=True)
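
The soma-distance bookkeeping above reduces to a simple sum; the function below is an illustrative stand-in, not the NEURD implementation:

```python
# Minimal sketch of the soma-distance rule described above; the list of
# upstream branch lengths is an illustrative stand-in for limb traversal.

def soma_distance(upstream_branch_lengths, endpoint_dist):
    """Distance from soma to an attribute (synapse/spine) =
    summed skeletal length of all upstream branches
    + distance from the branch's upstream endpoint to the attribute."""
    return sum(upstream_branch_lengths) + endpoint_dist

# a synapse 2500 nm along a branch whose path to the soma
# traverses branches of length 10000 and 7500 nm
print(soma_distance([10000, 7500], 2500))  # 20000
```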

neurd.branch_attr_utils.calculate_endpoints_dist(branch_obj, attr_obj)[source]

Purpose: Will calculate the endpoint distance for an attribute

neurd.branch_attr_utils.calculate_neuron_soma_distance(neuron_obj, branch_attr, verbose=False, **kwargs)[source]

Purpose: To calculate all of the soma distances for all objects on branches in a neuron

Ex: calculate_neuron_soma_distance(neuron_obj, verbose=True)

neurd.branch_attr_utils.calculate_neuron_soma_distance_euclidean(neuron_obj, branch_attr, verbose=False)[source]

Purpose: To calculate all of the soma distances for all the valid synapses on limbs

Ex: calculate_neuron_soma_distance(neuron_obj, verbose=True)

neurd.branch_attr_utils.calculate_upstream_downstream_dist(limb_obj, branch_idx, attr_obj)[source]
neurd.branch_attr_utils.calculate_upstream_downstream_dist_from_down_idx(attr_obj, down_idx)[source]
neurd.branch_attr_utils.calculate_upstream_downstream_dist_from_up_idx(attr_obj, up_idx)[source]
neurd.branch_attr_utils.set_limb_branch_idx_to_attr(neuron_obj, branch_attr)[source]

Purpose: Will add limb and branch indexes for all synapses in a Neuron object

neurd.branch_utils module

neurd.branch_utils.add_jitter_to_endpoint(branch_obj, endpoint, jitter=2, verbose=False)[source]

Purpose: To add jitter to a branch's endpoint coordinate to move it by a certain amount (and pass back the moved branch)

Pseudocode: 1) Create jitter segment 2) Adjust the branch skeleton 3) Pass back the jitter segment
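
The three steps above can be sketched in pure Python; the segment-list skeleton format and the function body are hypothetical, not the NEURD branch API:

```python
import math
import random

# Illustrative sketch of the jitter operation: build a short "jitter
# segment" from the old endpoint to a point `jitter` units away in a
# random direction, and append it to the skeleton.

def add_jitter_to_endpoint(skeleton, endpoint, jitter=2, seed=0):
    """skeleton: list of (start, end) 3-D point pairs; endpoint: (x, y, z)."""
    rng = random.Random(seed)
    direction = [rng.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in direction))
    new_endpoint = tuple(e + jitter * c / norm
                         for e, c in zip(endpoint, direction))
    jitter_segment = (endpoint, new_endpoint)
    # step 2: adjust the skeleton; step 3: pass back the jitter segment
    return skeleton + [jitter_segment], jitter_segment

sk = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))]
new_sk, seg = add_jitter_to_endpoint(sk, (1.0, 0.0, 0.0), jitter=2)
moved = math.dist(seg[0], seg[1])
print(len(new_sk), round(moved, 6))  # 2 2.0
```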

neurd.branch_utils.branch_dynamics_attr_dict_dynamics_from_node(branch_obj, width_name='no_spine_median_mesh_center')[source]

Purpose: To save off all of the necessary information for branch dynamics (of spines, width, synapses)

neurd.branch_utils.closest_mesh_skeleton_dist(obj, verbose=False)[source]

Purpose: To find the closest distance between the mesh and the skeleton of a branch

neurd.branch_utils.combine_attr_lists(list_1, list_2, verbose=False)[source]
neurd.branch_utils.combine_branches(branch_upstream, branch_downstream, add_skeleton=True, add_labels=False, verbose=True, common_endpoint=None, return_jitter_segment=False)[source]

Purpose: To combine two branch objects together WHERE IT IS ASSUMED THEY SHARE ONE COMMON ENDPOINT

Ex:

from neurd import branch_utils as bu

branch_upstream = copy.deepcopy(neuron_obj[0][upstream_branch])
branch_downstream = copy.deepcopy(neuron_obj[0][downstream_branch])

branch_upstream.labels = ["hellow"]
branch_downstream.labels = ["my", "new", "labels"]

b_out = bu.combine_branches(branch_upstream, branch_downstream, verbose=True, add_skeleton=False, add_labels=False)

neurd.branch_utils.endpoint_downstream_idx(branch_obj, coordinate=None)[source]
neurd.branch_utils.endpoint_downstream_with_offset(branch_obj, offset=1000, plot=False, verbose=False)[source]
neurd.branch_utils.endpoint_type_with_offset(branch_obj, endpoint_type='upstream', offset=1000, plot=False, verbose=False)[source]

Purpose: To get the skeleton point a little offset from the current endpoint

neurd.branch_utils.endpoint_upstream_idx(branch_obj, coordinate=None)[source]
neurd.branch_utils.endpoint_upstream_with_offset(branch_obj, offset=1000, plot=False, verbose=False)[source]

Ex: bu.endpoint_upstream_with_offset(branch_obj=limb_obj[26], verbose=True, offset=200, plot=True)

neurd.branch_utils.is_skeleton_upstream_to_downstream(branch_obj, verbose=False)[source]
neurd.branch_utils.mesh_shaft(obj, plot=False, return_mesh=True)[source]

Purpose: To export the shaft mesh of the branch (aka the mesh without the spine meshes)

neurd.branch_utils.mesh_shaft_idx(obj, plot=False)[source]
neurd.branch_utils.min_dist_synapse_endpoint(branch_obj, synapse_type, endpoint_type, verbose=False, default_value=inf)[source]
neurd.branch_utils.min_dist_synapses_post_downstream(branch_obj, **kwargs)[source]
neurd.branch_utils.min_dist_synapses_post_upstream(branch_obj, **kwargs)[source]
neurd.branch_utils.min_dist_synapses_pre_downstream(branch_obj, **kwargs)[source]
neurd.branch_utils.min_dist_synapses_pre_upstream(branch_obj, **kwargs)[source]
neurd.branch_utils.refine_width_array_to_match_skeletal_coordinates(neuron_obj, verbose=False)[source]

Purpose: To update the widths of those that don’t match the skeletal coordinates

neurd.branch_utils.set_branch_attr_on_limb(limb_obj, func, attr_name, branch_idxs=None, **kwargs)[source]

Purpose: To set the upstream and downstream order of the endpoints of a branch in a limb

neurd.branch_utils.set_branch_attr_on_limb_on_neuron(neuron_obj, func, attr_name, verbose=False, **kwargs)[source]

Purpose: To set the upstream and downstream order of the endpoints of each branch across all limbs of a neuron

neurd.branch_utils.set_branches_endpoints_upstream_downstream_idx(neuron_obj, **kwargs)[source]
neurd.branch_utils.set_branches_endpoints_upstream_downstream_idx_on_limb(limb_obj, **kwargs)[source]
neurd.branch_utils.set_endpoints_upstream_downstream_idx_from_upstream_coordinate(branch_obj, upstream_coordinate=None, up_idx=None)[source]

Purpose: Set the branch upstream, downstream by a coordinate

neurd.branch_utils.set_endpoints_upstream_downstream_idx_on_branch(limb_obj, branch_idx)[source]
neurd.branch_utils.skeletal_coordinates_dist_upstream_to_downstream(branch_obj, verbose=False, cumsum=True, skeleton=None, **kwargs)[source]
neurd.branch_utils.skeletal_coordinates_upstream_to_downstream(branch_obj, verbose=False, skeleton=None, coordinate_dists=None, resize=True)[source]
neurd.branch_utils.skeleton_adjust(branch_obj, skeleton=None, skeleton_append=None)[source]

Purpose: To adjust the skeleton of a branch and then have the endpoints readjusted

Pseudocode: 1) Adjust the skeleton (by stacking or reassigning) 2) Recalculate the new endpoints 3) Pass back the branch

neurd.branch_utils.skeleton_angle_from_top(branch_obj, top_of_layer_vector=None)[source]
neurd.branch_utils.skeleton_vector_downstream(branch_obj, directional_flow='downstream', endpoint_coordinate=None, verbose=False, plot_restricted_skeleton=False, **kwargs)[source]
neurd.branch_utils.skeleton_vector_endpoint(branch_obj, endpoint_type, directional_flow='downstream', endpoint_coordinate=None, verbose=False, plot_restricted_skeleton=False, offset=500, comparison_distance=3000, **kwargs)[source]

Purpose: To restrict a skeleton to its upstream or downstream vector

The vector always points from the most upstream skeleton point to the most downstream skeleton point

Example: bu.skeleton_vector_endpoint(branch_obj, endpoint_type="downstream", plot_restricted_skeleton=True, verbose=True) (optionally pass endpoint_coordinate=np.array([2504610., 480431., 33741.2]))
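
The "most upstream to most downstream" rule above can be sketched with plain coordinate tuples; the function and its arguments are illustrative, not NEURD branch objects:

```python
# Minimal sketch of the endpoint-vector rule: the result is the unit
# vector from the first retained skeleton point to the last one.

def skeleton_vector(coords_up_to_down, offset_pts=0):
    """coords_up_to_down: skeleton points ordered upstream -> downstream;
    offset_pts: number of points to skip near the endpoint."""
    pts = coords_up_to_down[offset_pts:]
    dx = [b - a for a, b in zip(pts[0], pts[-1])]
    norm = sum(c * c for c in dx) ** 0.5
    return tuple(c / norm for c in dx)

coords = [(0, 0, 0), (0, 100, 0), (0, 250, 0), (0, 400, 0)]
print(skeleton_vector(coords))  # (0.0, 1.0, 0.0)
```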

neurd.branch_utils.skeleton_vector_upstream(branch_obj, directional_flow='downstream', endpoint_coordinate=None, verbose=False, plot_restricted_skeleton=False, **kwargs)[source]
neurd.branch_utils.width_array_skeletal_lengths_upstream_to_downstream(branch_obj, verbose=False)[source]
neurd.branch_utils.width_array_upstream_to_downstream(branch_obj, verbose=False)[source]
neurd.branch_utils.width_array_upstream_to_dowstream_with_skeletal_points(branch_obj, width_name='no_spine_median_mesh_center')[source]

Purpose: Want to get the width at a certain point on the branch, where that point is the closest discretization to another coordinate

neurd.branch_utils.width_array_value_closest_to_coordinate(branch_obj, coordinate, verbose=False)[source]

Purpose: To find the width closest to certain coordinates on a branch obj

neurd.branch_utils.width_downstream(branch_obj, **kwargs)[source]
neurd.branch_utils.width_endpoint(branch_obj, endpoint, offset=0, comparison_distance=2000, skeleton_segment_size=1000, verbose=False)[source]

Purpose: To compute the width of a branch around a comparison distance and offset of an endpoint on its skeleton
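
The offset + comparison-distance idea can be sketched as follows; the flat distance/width lists and the function name are illustrative assumptions, not the NEURD branch API:

```python
# Sketch: skip the first `offset` nm near the endpoint, then average the
# width over the next `comparison_distance` nm of skeleton.

def width_near_endpoint(seg_dists, seg_widths, offset=0,
                        comparison_distance=2000):
    """seg_dists: distance of each skeleton segment from the endpoint;
    seg_widths: width measured at each segment."""
    kept = [w for d, w in zip(seg_dists, seg_widths)
            if offset <= d <= offset + comparison_distance]
    return sum(kept) / len(kept)

dists  = [0, 500, 1000, 1500, 2000, 2500]
widths = [400, 380, 300, 280, 260, 240]
print(width_near_endpoint(dists, widths, offset=500,
                          comparison_distance=1000))  # mean of 380, 300, 280
```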

neurd.branch_utils.width_upstream(branch_obj, **kwargs)[source]

neurd.cave_client_utils module

class neurd.cave_client_utils.CaveInterface(release_name=None, env_filepath=None, cave_token=None, client=None, release=None)[source]

Bases: CaveInterface

get_table(client)
get_tables()
load_cave_token()
mesh_from_seg_id(client, use_https=True, progress=False, return_trimesh=True, verbose=False)

Purpose: To fetch a mesh from the cave table using cloudvolume

example_cell_id = 864691137197197121

neuron_nucleus_df(neuron_non_neuron_table_name=None, verbose=False)
postsyn_df_from_seg_id(client, verbose=False)
pre_post_df_from_seg_id(client, concat=True, verbose=False)

Example: seg_id = 864691137197197121

presyn_df_from_seg_id(client, verbose=False)
segment_ids_with_nucleus(neuron_non_neuron_table_name=None, verbose=False)
set_cave_auth(cave_token=None, env_filepath=None, set_global_token=True, **kwargs)
synapse_df_from_seg_id(client, verbose=False, voxel_to_nm_scaling=None)
table_size(client, table_name)
neurd.cave_client_utils.get_table(table_name, client)[source]
neurd.cave_client_utils.get_tables(client)[source]
neurd.cave_client_utils.init_cave_client(release_name=None, env_filepath=None, cave_token=None, release=None)[source]
neurd.cave_client_utils.load_cave_token(env_filepath=None)[source]
neurd.cave_client_utils.mesh_from_seg_id(seg_id, client, use_https=True, progress=False, return_trimesh=True, verbose=False)[source]

Purpose: To fetch a mesh from the cave table using cloudvolume

example_cell_id = 864691137197197121

neurd.cave_client_utils.neuron_nucleus_df(client, neuron_non_neuron_table_name=None, verbose=False)[source]
neurd.cave_client_utils.postsyn_df_from_seg_id(seg_id, client, verbose=False)[source]
neurd.cave_client_utils.pre_post_df_from_seg_id(seg_id, client, concat=True, verbose=False)[source]

Example: seg_id = 864691137197197121

neurd.cave_client_utils.prepost_syn_df_from_cave_syn_df(syn_df, seg_id, columns=('segment_id', 'segment_id_secondary', 'synapse_id', 'prepost', 'synapse_x', 'synapse_y', 'synapse_z', 'synapse_size'), voxel_to_nm_scaling=None)[source]

Purpose: Want to reformat the synapse dataframe from the CAVE table to the standard synapse format

— old columns — "pre_pt_root_id", "post_pt_root_id", "size", "id", "prepost"

— new columns — segment_id, segment_id_secondary, synapse_id, prepost, synapse_x, synapse_y, synapse_z, synapse_size, ctr_pt_position

Pseudocode: For presyn/postsyn: 1) Restrict the dataframe to the current segment_id

Example:

from neurd import cave_client_utils as ccu
ccu.prepost_syn_df_from_cave_syn_df(syn_df, seg_id=seg_id)
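
The renaming above can be illustrated on a single row using plain dicts instead of a DataFrame; the column names come from the docstring, but the helper itself is hypothetical:

```python
# Illustrative sketch of the CAVE -> standard-format mapping for one row.

def reformat_row(row, seg_id):
    """Map one CAVE synapse row to the standard synapse format for seg_id."""
    prepost = "presyn" if row["pre_pt_root_id"] == seg_id else "postsyn"
    partner = (row["post_pt_root_id"] if prepost == "presyn"
               else row["pre_pt_root_id"])
    return {
        "segment_id": seg_id,
        "segment_id_secondary": partner,
        "synapse_id": row["id"],
        "prepost": prepost,
        "synapse_size": row["size"],
    }

row = {"pre_pt_root_id": 864691137197197121, "post_pt_root_id": 123,
       "id": 42, "size": 800}
out = reformat_row(row, 864691137197197121)
print(out["prepost"], out["segment_id_secondary"])  # presyn 123
```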

neurd.cave_client_utils.presyn_df_from_seg_id(seg_id, client, verbose=False)[source]
neurd.cave_client_utils.release_from_release_name(release_name)[source]
neurd.cave_client_utils.release_name_from_client(client)[source]
neurd.cave_client_utils.release_name_from_release(release, prefix='minnie65_public')[source]
neurd.cave_client_utils.save_cave_token_to_cloudvolume_secrets(cave_token, cloudvolume_secrets_path=None, **kwargs)[source]
neurd.cave_client_utils.segment_ids_with_nucleus(client, neuron_non_neuron_table_name=None, verbose=False)[source]
neurd.cave_client_utils.set_cave_auth(client=None, cave_token=None, env_filepath=None, set_global_token=True, **kwargs)[source]
neurd.cave_client_utils.set_global_table_names(release_name=None, verbose=False)[source]
neurd.cave_client_utils.synapse_df_from_seg_id(seg_id, client, verbose=False, voxel_to_nm_scaling=None)[source]
neurd.cave_client_utils.table_dict_from_release_name(release_name, return_default=True)[source]
neurd.cave_client_utils.table_name_from_table_str(table_str)[source]
neurd.cave_client_utils.table_size(self, client, table_name)[source]

neurd.cave_interface module

neurd.cell_type_utils module

Interesting website for cell types: http://celltypes.brain-map.org/experiment/morphology/474626527

neurd.cell_type_utils.accuracy_df_by_cell_type_fine(df, verbose=False, cell_type_fine_label='cell_type_fine', cell_type_coarse_label='cell_type_coarse', add_overall=True, e_i_methods=['allen', 'bcm', 'bcm_gnn'])[source]
neurd.cell_type_utils.all_training_df(plot=False)[source]
neurd.cell_type_utils.border_training_df()[source]
neurd.cell_type_utils.cell_type_fine_classifier_map_derived(cell_type_dict=None, e_i_type=None, e_i_labels=False, default_value=None, cell_type_dict_extended=None)[source]
neurd.cell_type_utils.cell_type_fine_for_clustering_inh = ['bc', 'Martinotti', 'sst', 'VIP', 'ndnf+npy-', 'Pvalb', 'bpc', 'ngfc']

Location where the allen labels are stored: ManualCellTypesAllen() & "table_name != 'allen_v1_column_types_slanted'"

neurd.cell_type_utils.cell_type_fine_mapping_publishable(df, column='gnn_cell_type_fine', dict_map=None)[source]
neurd.cell_type_utils.cell_type_fine_name_cleaner(row)[source]
neurd.cell_type_utils.cell_type_from_feature_df(segment_id)[source]
neurd.cell_type_utils.cell_type_high_probability_df_from_df(df, cell_type, baylor_exc_prob_threshold=0.65, gnn_prob_threshold=0.65, verbose=False, return_df=False)[source]

Purpose: Table, after proofreading, of cells that are highly likely to be the given cell type (e.g. excitatory)

neurd.cell_type_utils.cell_type_names_str(print_ei=True, verbose=False, separator=' : ')[source]
neurd.cell_type_utils.classes_from_cell_type_name(cell_type_name)[source]

Purpose: Returns a list to iterate over depending on the name of the cell type

cell_type_predicted ==>

neurd.cell_type_utils.clean_cell_type_fine(df)[source]
neurd.cell_type_utils.coarse_cell_type_from_coarse(cell_type)[source]
neurd.cell_type_utils.coarse_cell_type_from_fine(cell_type)[source]
neurd.cell_type_utils.dendrite_branch_stats_near_soma(neuron_obj, limb_branch_dict=None, plot_spines_and_sk_filter=False, verbose=False, **kwargs)[source]

Purpose: To get features of the dendrite branches from an unclassified neuron

Application: Can be used to help with E/I cell typing

Pseudocode: 1) Get the branches near the soma up to certain distance

Ex: ctu.dendrite_branch_stats_near_soma(neuron_obj)

neurd.cell_type_utils.df_cell_type_fine(df)[source]
neurd.cell_type_utils.e_i_classification_from_neuron_obj(neuron_obj, features=['syn_density_shaft', 'spine_density'], verbose=False, return_cell_type_info=False, return_dendrite_branch_stats=False, plot_on_model_map=False, apply_hand_made_low_rules=None, skeletal_length_processed_syn_min=None, skeletal_length_processed_spine_min=None, inhibitory_syn_density_shaft_min=None, plot_spines_and_sk_filter_for_syn=False, plot_spines_and_sk_filter_for_spine=False, e_i_classification_single=False, return_probability=True, **kwargs)[source]

Purpose: To take a neuron object and classify it as excitatory or inhibitory

The hand written rules moves the y intercept of the classifier from 0.372 to 0.4

neurd.cell_type_utils.e_i_classification_from_neuron_obj_old(neuron_obj, features=['syn_density_shaft', 'spine_density'], verbose=False, return_cell_type_info=False, return_dendrite_branch_stats=False, plot_on_model_map=False, apply_hand_made_low_rules=True, skeletal_length_processed_syn_min=15000, skeletal_length_processed_spine_min=15000, excitatory_spine_density_min=0.1, plot_spines_and_sk_filter_for_syn=False, plot_spines_and_sk_filter_for_spine=False, special_syn_parameters=True)[source]

Purpose: To take a neuron object and classify it as excitatory or inhibitory

neurd.cell_type_utils.e_i_classification_single(data, features=None, model=None, verbose=False, return_label_name=True, plot_on_model_map=False, return_probability=False)[source]

Right now usually done with ‘syn_density_shaft’, ‘spine_density’ but can specify other features

Ex: ctu.e_i_classification_single([0.5, 0.6], features=["spine_density", "syn_density_shaft"], verbose=True)

neurd.cell_type_utils.e_i_color_dict(excitatory_color='blue', inhibitory_color='red', other_color='black')[source]
neurd.cell_type_utils.e_i_label_from_cell_type_fine(cell_type, verbose=False, default_value='other')[source]
neurd.cell_type_utils.e_i_model_as_logistic_reg_on_border_df(label='cell_type_manual', features=None, class_weight={'excitatory': 1, 'inhibitory': 1.5}, plot_decision_map=False, plot_type='probability', use_handmade_params=True, **kwargs)[source]
neurd.cell_type_utils.excitatory_high_probability_df_from_df(df, baylor_exc_prob_threshold=0.65, gnn_prob_threshold=0.65, verbose=False)[source]
neurd.cell_type_utils.export_cell_type_abbreviations_csv(filepath='cell_type_table.csv', return_df=False)[source]
neurd.cell_type_utils.filter_cell_type_df_for_most_complete_duplicates(df, segment_id_name='pt_root_id')[source]

Purpose: To filter so that only one row per segment id remains, keeping the most filled-out one

Pseudocode: For each unique pt_root_id 1) Find all of the rows 2) If more than 1, filter for those not None in cell_type 3a) If the filtered set is empty, add the first of the initial rows 3b) If not empty, add the first of the filtered rows
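
The pseudocode above can be sketched without pandas, as a pure-Python pass over rows; the function is an illustrative stand-in, not the NEURD implementation:

```python
# Keep one row per segment id, preferring rows whose cell_type is filled in.

def filter_most_complete(rows, id_key="pt_root_id"):
    best = {}
    for row in rows:
        seg = row[id_key]
        current = best.get(seg)
        # first row wins unless a later row is more filled-out
        if current is None or (current["cell_type"] is None
                               and row["cell_type"] is not None):
            best[seg] = row
    return list(best.values())

rows = [
    {"pt_root_id": 1, "cell_type": None},
    {"pt_root_id": 1, "cell_type": "23P"},
    {"pt_root_id": 2, "cell_type": None},
]
print(filter_most_complete(rows))
# [{'pt_root_id': 1, 'cell_type': '23P'}, {'pt_root_id': 2, 'cell_type': None}]
```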

neurd.cell_type_utils.load_border_exc_inh_df(path='/home/runner/work/NEURD/NEURD/neurd/model_data/border_df_for_e_i_improved.csv', path_backup='/neurd_packages/meshAfterParty/meshAfterParty/border_df_for_e_i_improved.pbz2')[source]
neurd.cell_type_utils.load_manual_exc_inh_df(path='/home/runner/work/NEURD/NEURD/neurd/model_data/man_proof_stats_df_for_e_i.csv')[source]
neurd.cell_type_utils.map_cell_type_fine_publishable(df, column='gnn_cell_type_fine', dict_map=None)
neurd.cell_type_utils.plot_cell_type_gnn_embedding(df, column='cell_type', color_map=None, trans_cols=['umap0', 'umap1'], nucleus_ids=None, plot_unlabeld=True, unlabeled_color='grey', use_labels_as_text_to_plot=False, figure_width=7, figure_height=10, size=20, size_labeled=23, alpha=0.05, alpha_labeled=1, plot_legend=False, title='GNN Classifier - Whole Neuron\nUMAP Embedding', axes_fontsize=35, title_fontsize=35, title_append=None, xlabel=None, ylabel=None)[source]

Purpose: to plot certain embeddings

neurd.cell_type_utils.plot_classifier_map(clf=None, plot_type='probability', X=None, y=None, df=None, df_class_name=None, df_feature_names=None)[source]
neurd.cell_type_utils.plot_e_i_model_classifier_map(data_to_plot=None, **kwargs)[source]
neurd.cell_type_utils.postsyn_branches_near_soma(neuron_obj, perform_axon_classification=False, n_synapses_post_min=2, synapse_post_perc_min=0.8, plot_syn_post_filt=False, lower_width_bound=140, upper_width_bound=520, spine_threshold=2, skeletal_distance_threshold=110000, skeletal_length_threshold=15000, plot_spines_and_sk_filter=False, verbose=False)[source]

Pseudocode: 1) Do axon classification without best candidate to eliminate possible axons (filters away) 2) filter away only branches with a majority postsyns 3) apply spine and width restrictions

neurd.cell_type_utils.postsyn_branches_near_soma_for_syn_post_density(neuron_obj, plot_spines_and_sk_filter=False, spine_threshold=None, skeletal_length_threshold=None, upper_width_bound=None, **kwargs)[source]

Purpose: To restrict the branches close to the soma that will be used for postsynaptic density

Ex:

from neurd import cell_type_utils as ctu
output_limb_branch = ctu.postsyn_branches_near_soma_for_syn_post_density(neuron_obj=neuron_obj_exc_syn_sp, verbose=True)

from neurd import neuron_visualizations as nviz
nviz.plot_limb_branch_dict(neuron_obj_exc_syn_sp, output_limb_branch)

neurd.cell_type_utils.predict_class_single_datapoint(clf, data, verbose=False, return_probability=False)[source]

Purpose: To predict the class of a single datapoint

Ex: data = [1, 1]; mlu.predict_class_single_datapoint(clf, data, verbose=True)

neurd.cell_type_utils.rename_cell_type_fine_column(df, column='gnn_cell_type_fine', keep_classes_exc=None, keep_classes_inh=None, in_place=False)[source]
neurd.cell_type_utils.rename_dict_for_cell_type_fine(keep_classes_inh=None, keep_classes_exc=None, default_name_inh='Other Inh', default_name_exc='Other Exc', verbose=False)[source]

Purpose: Generate a renaming dictionary based on which exc and inhibitory classes you want to retain their name and which you want to group into a default name

neurd.cell_type_utils.set_e_i_model(features=['syn_density_shaft', 'spine_density'], label='cell_type', add_features_to_model_obj=True, model_type='logistic_reg', plot_map=False, return_new_model=False, **kwargs)[source]

Purpose: To set the module e/i classifier

Ex: How to specify different features for the classification: ctu.set_e_i_model(plot_map=True, features=["spine_density", "syn_density_shaft"])

neurd.cell_type_utils.set_e_i_model_as_kNN(X, y, n_neighbors=5, plot_map=False, **kwargs)[source]

To create the kNN

neurd.cell_type_utils.soma_stats_for_cell_type(neuron_obj)[source]

Stats we want to include about the soma that may help with cell typing:

surface_area, volume, sa_to_volume, ray_trace_percentile_70, n_syn_soma

neurd.cell_type_utils.spine_density_near_soma(neuron_obj, limb_branch_dict=None, verbose=True, multiplier=1000, return_skeletal_length=True, lower_width_bound=None, upper_width_bound=None, **kwargs)[source]

Purpose: To compute the spine density over branches near the soma

Application: To be used for cell classification

Ex: ctu.spine_density_near_soma(neuron_obj=neuron_obj_exc_syn_sp, verbose=True, multiplier=1000)

neurd.cell_type_utils.synapse_density_near_soma(neuron_obj, limb_branch_dict=None, synapse_type='synapses', verbose=False, multiplier=1000, return_skeletal_length=False, plot_spines_and_sk_filter=False, **kwargs)[source]

Application: To be used for cell type (E/I) classification

neurd.cell_type_utils.synapse_density_stats(neuron_obj, verbose=False, return_skeletal_length=True, **kwargs)[source]

Purpose To compute synapse densities that could be used for E/I classification

neurd.classification_utils module

Utils for helping with the classification of a neuron for compartments like axon, apical, basal…

neurd.classification_utils.apical_branch_candidates_on_limb(limb_obj, apical_check_distance_max=90000, apical_check_distance_min=25000, plot_restricted_skeleton=False, plot_restricted_skeleton_with_endnodes=False, angle_threshold=30, top_volume_vector=array([0, -1, 0]), spine_density_threshold=1e-05, total_skeleton_distance_threshold_multiplier=0.5, apical_width_threshold=240, upward_distance_to_skeletal_distance_ratio_threshold=0.85, verbose=False, **kwargs)[source]

Purpose: To identify the branches on the limb that are most likely part of a large upward apical branch

Pseudocode: 0a) Get the subskeleton region to analyze 0b) Divide the restricted skeleton into components to analyze

For each connected component: 1) Get all the end nodes of the subgraph 2) Subtract off the closest subgraph node to the limb start

For each end node: 3) Look at the vector between the end node and the closest node (continue if not approximately straight up or not long enough) 4) Find the branches that contain the two ends of the path

For all combinations of branches: 5) Find the shortest path between the two branches on the concept network 6) Get the subskeleton: analyze for width and spine density (if too thin or not spiny enough then continue) 7) If passed all tests then add the branch path as a possible candidate

neurd.classification_utils.apical_classification(neuron_obj, skip_splitting=True, apical_soma_angle_threshold=40, plot_viable_limbs=False, label_neuron_branches=True, plot_apical=True, verbose=False, **kwargs)[source]

Will compute a limb branch dict of all the branches that are likely part of a long upward-reaching apical branch

Pseudocode: 1) Split the neuron and take the first neuron obj (assumes only one neuron) 2) Check only 1 soma 3) Filter the limbs for viable apical limbs based on the soma angle 4) Iterate through the viable limbs to find the apical branches on each limb

Ex: apical_classification(neuron_obj, apical_soma_angle_threshold=40, plot_viable_limbs=False, label_neuron_branches=True, plot_apical=True, verbose=False)

neurd.classification_utils.axon_candidates(neuron_obj, possible_axon_limbs=None, ais_threshold=20000, plot_close_branches=False, plot_candidats_after_elimination=False, plot_candidates_after_adding_back=False, verbose=False, **kwargs)[source]

Purpose: To return with a list of the possible axon subgraphs of the limbs of a neuron object

Pseudocode: 1) Find all the branches in the possible AIS range and delete them from the concept networks 2) Collect all the leftover branch subgraphs as candidates 3) Add back the candidates that were deleted 4) Combine all the candidates into one list
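
Steps 1-2 above amount to deleting nodes from a graph and taking the remaining connected components; the sketch below uses a plain edge-list graph, not NEURD concept networks:

```python
# Pure-Python sketch: remove the AIS-range branches, then return the
# remaining connected components as candidate subgraphs.

def candidate_subgraphs(edges, nodes, deleted):
    """Connected components of `nodes - deleted` under `edges`."""
    keep = set(nodes) - set(deleted)
    adj = {n: set() for n in keep}
    for a, b in edges:
        if a in keep and b in keep:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for n in sorted(keep):
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

# branch 0 is in the AIS range; removing it splits the limb in two
print(candidate_subgraphs([(0, 1), (1, 2), (0, 3), (3, 4)],
                          range(5), deleted=[0]))  # [[1, 2], [3, 4]]
```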

neurd.classification_utils.axon_classification(neuron_obj, error_on_multi_soma=True, ais_threshold=14000, downstream_face_threshold=3000, width_match_threshold=50, plot_axon_like_segments=False, axon_soma_angle_threshold=70, plot_candidates=False, plot_axons=False, plot_axon_errors=False, axon_angle_threshold_relaxed=95, axon_angle_threshold=120, add_axon_labels=True, clean_prior_axon_labels=True, label_axon_errors=True, error_width_max=140, error_length_min=None, return_axon_labels=True, return_axon_angles=False, return_error_labels=True, best_axon=True, no_dendritic_branches_off_axon=True, verbose=False, **kwargs)[source]

Purpose: To put the whole axon classification pipeline together into one function that will label branches as axon-like, axon, and error

  1. Classify All Axon-Like Segments

  2. Filter Limbs By Starting Angle

  3. Get all of the Viable Candidates

  4. Filter Candidates

  5. Apply Labels

neurd.classification_utils.axon_faces_from_labels_on_original_mesh(neuron_obj, original_mesh=None, original_mesh_kdtree=None, plot_axon=False, verbose=False, **kwargs)[source]

Purpose: To get the axon face indices on the original mesh

Pseudocode: 1) Get the original mesh if not passed 2) Get the axon mesh of the neuron object 3) Map the axon mesh to the original mesh

Ex: clu.axon_faces_from_labels_on_original_mesh(neuron_obj, plot_axon=True, verbose=True, original_mesh=original_mesh, original_mesh_kdtree=original_mesh_kdtree)

neurd.classification_utils.axon_like_limb_branch_dict(neuron_obj, downstream_face_threshold=3000, width_match_threshold=50, downstream_non_axon_percentage_threshold=0.3, distance_for_downstream_check=40000, max_skeletal_length_can_flip=70000, include_ais=True, plot_axon_like=False, verbose=False)[source]
neurd.classification_utils.axon_limb_branch_dict(neuron_obj)[source]
neurd.classification_utils.axon_mesh_from_labels(neuron_obj, verbose=False, plot_axon=False)[source]

Will compile the axon mesh from the labels stored in the neuron object

Ex: clu.axon_mesh_from_labels(neuron_obj, plot_axon=False, verbose=True)

neurd.classification_utils.axon_starting_branch(neuron_obj, axon_limb_name=None, axon_branches=None, verbose=False)[source]

Purpose: Will find the branch that is starting the axon according to the concept network

neurd.classification_utils.axon_starting_coordinate(neuron_obj, axon_limb_name=None, axon_branches=None, plot_axon_starting_endpoint=False, verbose=False)[source]

Purpose: To find the skeleton endpoint that is closest to the starting node

Pseudocode: 1) Find the axon branch that is closest to the starting node on the concept network –> if it is the starting node then just return the current starting coordinate 2) Find the endpoints of the closest branch 3) Find the endpoint that is closest to the starting coordinate along the skeleton

neurd.classification_utils.axon_width_like_query_revised(width_to_use, spine_limit, spine_density=None)[source]
neurd.classification_utils.axon_width_like_segments(current_neuron, current_query=None, current_functions_list=None, include_ais=True, verbose=False, non_ais_width=None, ais_width=None, max_n_spines=None, max_spine_density=None, width_to_use=None, plot=False)[source]

Purpose: Will get all of the branches that look like axons based on width and spine properties

neurd.classification_utils.candidate_starting_skeletal_angle(limb_obj, candidate_nodes, offset=10000, axon_sk_direction_comparison_distance=10000, buffer_for_skeleton=5000, top_volume_vector=array([0, -1, 0]), plot_skeleton_paths_before_restriction=False, plot_skeleton_paths_after_restriction=False, return_restricted_skeletons=False, branches_not_to_consider_for_end_nodes=None, verbose=False)[source]

Purpose: To get the skeleton that represents the starting skeleton –> and then find the projection angle to filter it away or not

Pseudocode: 1) Convert the graph into a skeleton (this is when self-touches could be a problem) 2) Find all skeleton points that are within a certain distance of the starting coordinate 3) Find all end-degree nodes (except for the start) 4) Find the path back to start for all end nodes 5) Find paths that are long enough for the offset plus test –> if none then don't filter at all

For each valid path (make them ordered paths): 6) Get the offset + test subskeletons for all valid paths 7) Get the angle of the skeleton vectors
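
The angle test in steps 6-7 compares a skeleton vector against the "top of volume" direction [0, -1, 0]; the pure-Python helper below is an illustrative stand-in, not the NEURD implementation:

```python
import math

# Angle (degrees) between a skeleton vector and the top-of-volume vector.

def angle_from_top(vec, top=(0.0, -1.0, 0.0)):
    dot = sum(a * b for a, b in zip(vec, top))
    norm = math.sqrt(sum(a * a for a in vec))
    return math.degrees(math.acos(dot / norm))  # `top` is unit length

print(round(angle_from_top((0.0, -5.0, 0.0))))   # 0   (straight up)
print(round(angle_from_top((3.0, 0.0, 0.0))))    # 90  (horizontal)
```

A path whose offset-to-test vector yields a small angle projects straight up (apical-like); a large angle filters it away.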

neurd.classification_utils.clear_axon_labels_from_dendritic_paths_to_starter_node(limb_obj, axon_branches=None, dendritic_branches=None, verbose=False)[source]

Purpose: To make sure that no axon branches are on the path of dendritic branches back to the starting node of that limb

Pseudocode: 1a) If dendritic branches are None then use axon branches to figure them out 1b) If axon branches are None… 2) If dendritic branches or axon branches are empty then just return the original axon branches 3) Find the starting node of the limb 4) For all dendritic branches:

  a. Find the shortest path back to the starting node

  b. Add those nodes on the path to a list to make sure they are not included in the axons

5) Subtract the non-axon list from the axon branches

6) Return the new axon list

Ex: final_axon_branches = clu.clear_axon_labels_from_dendritic_paths_to_starter_node(limb_obj=neuron_obj["L4"], axon_branches=neuron_obj.axon_limb_branch_dict["L4"], dendritic_branches=None, verbose=True)

final_axon_branches
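The pseudocode above amounts to a breadth-first walk from each dendritic branch back to the limb's start, then a set subtraction. A self-contained sketch under the assumption that the concept network is given as a plain adjacency dict (the real function operates on a limb object):

```python
from collections import deque

def clear_axon_labels_on_dendritic_paths(adjacency, start_node,
                                         axon_branches, dendritic_branches):
    """Strip from the axon set every node lying on a dendritic branch's
    shortest path back to the limb's starting node (sketch of the pseudocode)."""
    def shortest_path(src, dst):
        # breadth-first search gives the shortest unweighted path
        prev, frontier = {src: None}, deque([src])
        while frontier:
            node = frontier.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path
            for nbr in adjacency.get(node, ()):
                if nbr not in prev:
                    prev[nbr] = node
                    frontier.append(nbr)
        return []

    non_axon = set()
    for b in dendritic_branches:
        non_axon.update(shortest_path(b, start_node))
    return sorted(set(axon_branches) - non_axon)

# branch 3 is dendritic and its path to the start runs through branch 1,
# so branch 1 loses its axon label
adjacency = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
kept = clear_axon_labels_on_dendritic_paths(adjacency, start_node=0,
                                            axon_branches=[1, 2],
                                            dendritic_branches=[3])
```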

neurd.classification_utils.contains_excitatory_apical(neuron_obj, plot_apical=False, return_n_apicals=False, **kwargs)[source]
neurd.classification_utils.contains_excitatory_axon(neuron_obj, plot_axons=False, return_axon_angles=True, return_n_axons=False, label_axon_errors=True, axon_limb_branch_dict=None, axon_angles=None, verbose=False, **kwargs)[source]
neurd.classification_utils.dendrite_branches_on_limb(neuron_obj, limb_name)[source]
neurd.classification_utils.dendrite_limb_branch_dict(neuron_obj)[source]
neurd.classification_utils.filter_axon_candiates(neuron_obj, axon_subgraph_candidates, axon_like_limb_branch_dict=None, axon_angle_threshold_relaxed=110, axon_angle_threshold=120, relaxation_percentage=0.85, relaxation_axon_length=inf, skeletal_angle_offset=10000, skeletal_angle_comparison_distance=10000, skeletal_angle_buffer=5000, min_ais_width=85, use_beginning_ais_for_width_filter=True, comparison_soma_angle_threshold=110, axon_angle_winning_buffer=15, axon_angle_winning_buffer_backup=5, soma_angle_winning_buffer_backup=5, skeletal_length_winning_buffer=30000, skeletal_length_winning_min=10000, tie_breaker_axon_attribute='soma_plus_axon_angle', best_axon=False, plot_winning_candidate=False, return_axon_angles=True, verbose=False, **kwargs)[source]
neurd.classification_utils.filter_axon_candiates_old(neuron_obj, axon_subgraph_candidates, axon_angle_threshold_relaxed=110, axon_angle_threshold=120, relaxation_percentage=0.85, relaxation_axon_length=inf, skeletal_angle_offset=10000, skeletal_angle_comparison_distance=10000, skeletal_angle_buffer=5000, axon_like_limb_branch_dict=None, min_ais_width=85, use_beginning_ais_for_width_filter=True, extra_ais_checks=False, extra_ais_width_threshold=650, extra_ais_spine_density_threshold=0.00015, extra_ais_angle_threshold=150, verbose=False, return_axon_angles=True, best_axon=False, best_axon_skeletal_legnth_ratio=20, **kwargs)[source]

Pseudocode:

For each candidate:

  1. If all Axon? (Have a more relaxed threshold for the skeleton angle)

  2. Find the starting direction, and if not downwards –> then not axon

  3. ————- Check if too thin at the start –> Not Axon (NOT GOING TO DO THIS) ————-

  4. If first branch is axon –> classify as axon

  5. Trace back to starting node and add all branches that are axon like

neurd.classification_utils.inhibitory_excitatory_classifier(neuron_obj, verbose=False, return_spine_classification=False, return_axon_angles=False, return_n_axons=False, return_n_apicals=False, return_spine_statistics=False, axon_inhibitory_angle=150, axon_inhibitory_width_threshold=inf, axon_limb_branch_dict_precomputed=None, axon_angles_precomputed=None, **kwargs)[source]
neurd.classification_utils.spine_level_classifier(neuron_obj, sparsely_spiney_threshold=0.0001, spine_density_threshold=0.0003, min_processed_skeletal_length=20000, return_spine_statistics=False, verbose=False, **kwargs)[source]

Purpose: To calculate the spine density of high-interest branches and use it to classify a neuron into one of the following categories

  1. no_spine

  2. sparsely_spine

  3. densely_spine

neurd.concept_network_utils module

neurd.concept_network_utils.G_weighted_from_limb(limb_obj, weight_name='weight', upstream_attribute_for_weight='skeletal_length', node_properties=None)[source]

Purpose: Convert the concept_network_directional to a graph with weighted edges being the length of the upstream edge

Pseudocode: 1) Copy the concept network directional 2) Add the edge weight property 3) Add any node properties requested

Ex: G = cnu.G_weighted_from_limb(limb_obj, weight_name="weight", upstream_attribute_for_weight="skeletal_length", node_properties=[nst.width_new])

from datasci_tools import numpy_utils as nu
nu.turn_off_scientific_notation()
xu.get_node_attributes(G, "width_new", 24)
xu.get_edges_with_weights(G)
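The conversion boils down to copying the directional graph and stamping each edge with its source (upstream) node's skeletal length. A sketch with networkx; the dict of skeletal lengths is an assumed stand-in for the limb object's per-branch attributes:

```python
import networkx as nx

def weighted_graph_from_directional(G_dir, skeletal_lengths, weight_name="weight"):
    """Copy a directional concept network and set each edge's weight to the
    skeletal length of its upstream (source) node."""
    G = G_dir.copy()
    for u, v in G.edges():
        G[u][v][weight_name] = skeletal_lengths[u]
    return G

# a three-branch chain: each edge inherits its source branch's length
G_dir = nx.DiGraph([(0, 1), (1, 2)])
G_w = weighted_graph_from_directional(G_dir, {0: 1500.0, 1: 3200.0, 2: 800.0})
```

With weights in place, `nx.shortest_path_length(G_w, weight="weight")` gives skeletal-length distances between branches.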

neurd.concept_network_utils.all_downstream_branches_from_branches(limb_obj, branches, include_original_branches=False, verbose=False)[source]
neurd.concept_network_utils.all_downstream_nodes(limb_obj, branch_idx)[source]
neurd.concept_network_utils.all_downtream_branches(limb_obj, branch_idx)[source]
neurd.concept_network_utils.all_downtream_branches_including_branch(limb_obj, branch_idx)[source]
neurd.concept_network_utils.all_upstream_branches(limb_obj, branch_idx)[source]
neurd.concept_network_utils.all_upstream_branches_from_branches(limb_obj, branches, include_original_branches=False, verbose=False)[source]
neurd.concept_network_utils.all_upstream_branches_including_branch(limb_obj, branch_idx)[source]
neurd.concept_network_utils.attribute_upstream_downstream(limb_obj, branch_idx, direction, attribute_name=None, attribute_func=None, concat_func=<function concatenate>, distance=inf, include_branch_in_dist=True, only_non_branching=True, include_branch_idx=True, verbose=False, nodes_to_exclude=None, return_nodes=False)[source]

Purpose: To retrieve and concatenate the attributes of a branch and all of the branches downstream of it, up to a branching point or within a certain distance

Pseudocode: 1) Get all of the branches that are downstream (either up to branch point or within certain distance) 2) Get the attributes of the branch and all those downstream 3) concatenate the attributes using the prescribed function
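A sketch of the downstream variant of this traversal, assuming dict stand-ins for the limb's child map, per-branch attributes, and skeletal lengths (illustrative names, not the real API):

```python
import numpy as np

def concat_downstream_attribute(children, attrs, branch_idx, max_dist, lengths,
                                concat_func=np.concatenate):
    """Walk downstream from branch_idx while the path does not branch and the
    accumulated skeletal length stays under max_dist, then concatenate the
    visited branches' attribute arrays."""
    visited, dist, node = [branch_idx], lengths[branch_idx], branch_idx
    while True:
        nxt = children.get(node, [])
        if len(nxt) != 1:          # stop at a branching point or an end node
            break
        node = nxt[0]
        if dist + lengths[node] > max_dist:
            break
        visited.append(node)
        dist += lengths[node]
    return concat_func([np.atleast_1d(attrs[n]) for n in visited]), visited

# node 2 branches into two children, so the walk stops after collecting it
children = {0: [1], 1: [2], 2: [3, 4]}
vals, nodes = concat_downstream_attribute(
    children, {0: [1.0], 1: [2.0], 2: [3.0]},
    branch_idx=0, max_dist=100, lengths={0: 10, 1: 10, 2: 10})
```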

neurd.concept_network_utils.branches_with_parent_branching(limb_obj)[source]
neurd.concept_network_utils.branches_with_parent_non_branching(limb_obj)[source]

Purpose: To see if a branch had a parent node that branched off into multiple branches

neurd.concept_network_utils.branches_within_distance(limb_obj, branch_idx, dist_func, distance_threshold, include_branch_idx=False)[source]

Purpose: To find all branches within a certain distance of a branch, as measured by dist_func

neurd.concept_network_utils.branches_within_distance_downstream(limb_obj, branch_idx, distance_threshold, include_branch_idx=False)[source]

Ex: nst.branches_within_distance_downstream(limb_obj,223,

neurd.concept_network_utils.branches_within_distance_upstream(limb_obj, branch_idx, distance_threshold, include_branch_idx=False)[source]
neurd.concept_network_utils.distance_between_nodes_di(limb_obj, start_idx, destination_idx, reverse_di_graph)[source]

Purpose: To determine the distance between two nodes along a path

neurd.concept_network_utils.distance_between_nodes_di_downstream(limb_obj, start_idx, destination_idx)[source]

Purpose: To determine the downstream distance from start_idx to destination_idx

neurd.concept_network_utils.distance_between_nodes_di_upstream(limb_obj, start_idx, destination_idx)[source]

Purpose: To determine the upstream distance from start_idx to destination_idx

neurd.concept_network_utils.downstream_nodes(limb_obj, branch_idx)[source]

Will give the downstream nodes excluding the nodes to be excluded

neurd.concept_network_utils.downstream_nodes_mesh_connected(limb_obj, branch_idx, n_points_of_contact=None, downstream_branches=None, verbose=False)[source]

Purpose: To determine whether there are at least N points of contact between the upstream and downstream meshes

Ex: nst.downstream_nodes_mesh_connected(limb_obj, 147, verbose=True)

neurd.concept_network_utils.downstream_nodes_without_branching(limb_obj, branch_idx, nodes_to_exclude=None)[source]

Purpose: To return all nodes that are downstream of a branch but not after a branching point

neurd.concept_network_utils.endnode_branches_of_branches_within_distance_downtream(limb_obj, branch_idx, skip_distance=2000, return_skipped_branches=False, **kwargs)[source]

Purpose: To get the branches that are a certain distance downstream of a branch, returning only the furthest (end-node) branches

Ex: limb_obj = neuron_obj[0]
branch_idx = 223

cnu.endnode_branches_of_branches_within_distance_downtream(limb_obj, branch_idx, 0)

neurd.concept_network_utils.feature_over_branches(limb_obj, branches, direction=None, include_original_branches_in_direction=False, feature_name=None, feature_function=None, combining_function=None, return_skeletal_length=False, verbose=False, **kwargs)[source]

Purpose: To find the average value over a list of branches

Pseudocode: 1) convert the branches list into the branches that will be used to compute the statistic 2) Compute the skeletal length for all the branches 3) Compute the statistic for all the nodes

Ex: feature_over_branches(limb_obj=n_obj_2[6], branches=[24, 2], direction="upstream", verbose=True, feature_function=ns.width)
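Steps 1–3 can be sketched with plain dicts standing in for the limb object (the names here are illustrative, not NEURD's real interface):

```python
import numpy as np

def feature_over_branches(branch_features, branch_lengths, branches,
                          combining_function=None):
    """Collect a per-branch feature (and the skeletal lengths) for a list of
    branches, optionally combining the values into one statistic."""
    values = np.array([branch_features[b] for b in branches], dtype=float)
    lengths = np.array([branch_lengths[b] for b in branches], dtype=float)
    if combining_function is not None:
        return combining_function(values), lengths
    return values, lengths

# raw per-branch values, then the same values combined with a mean
vals, lens = feature_over_branches({24: 300.0, 2: 500.0},
                                   {24: 1000.0, 2: 3000.0}, [24, 2])
mean_val, _ = feature_over_branches({24: 300.0, 2: 500.0},
                                    {24: 1000.0, 2: 3000.0}, [24, 2],
                                    combining_function=np.mean)
```

Returning the skeletal lengths alongside the values is what lets the weighted variants below reuse this as their first step.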

neurd.concept_network_utils.nodes_downstream(limb_obj, branch_idx, distance=inf, include_branch_in_dist=False, only_non_branching=False, include_branch_idx=False, verbose=False, nodes_to_exclude=None, nodes_to_include=None)[source]
neurd.concept_network_utils.nodes_upstream(limb_obj, branch_idx, distance=inf, include_branch_in_dist=False, only_non_branching=False, include_branch_idx=False, verbose=False, nodes_to_exclude=None, nodes_to_include=None)[source]
neurd.concept_network_utils.nodes_upstream_downstream(limb_obj, branch_idx, direction, distance=inf, include_branch_in_dist=True, only_non_branching=True, include_branch_idx=True, verbose=False, nodes_to_exclude=None, nodes_to_include=None)[source]

Will return nodes that are upstream or downstream by a certain dist

neurd.concept_network_utils.other_direction(direction)[source]
neurd.concept_network_utils.skeletal_length_downstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, return_nodes=False, nodes_to_exclude=None, **kwargs)[source]
neurd.concept_network_utils.skeletal_length_upstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, return_nodes=False, nodes_to_exclude=None, **kwargs)[source]
neurd.concept_network_utils.skeletal_length_upstream_downstream(limb_obj, branch_idx, direction, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, return_nodes=False, nodes_to_exclude=None, **kwargs)[source]

Purpose: To find the upstream or downstream skeletal length

Pseudocode: 1) Get all up/down sk lengths 2) Get all up/down widths 3) Filter away zero widths if the argument is set 4) If the arrays are non-empty, compute the weighted average

neurd.concept_network_utils.skeleton_downstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_idx=True, include_branch_in_dist=True, plot_skeleton=False, verbose=False, **kwargs)[source]
neurd.concept_network_utils.skeleton_downstream_restricted(limb_obj, branch_idx, downstream_skeletal_length, downstream_nodes=None, nodes_to_exclude=None, plot_downstream_skeleton=False, plot_restricted_skeleton=False, verbose=False)[source]

Purpose: To get restricted downstream skeleton starting from the upstream node and going a certain distance

Application: will help select a part of skeleton that we want to find the width around (for axon identification purposes)

Pseudocode: 1) Get the downstream skeleton 2) Get the upstream coordinate and restrict the skeleton to a certain distance away from the upstream coordinate 3) Calculate the new width based on the skeleton and the meshes
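Step 2 (restricting a skeleton to a skeletal-length budget from its upstream end) can be sketched on an ordered coordinate path; real NEURD skeletons are arrays of segments rather than a single ordered path, so this is a simplification:

```python
import numpy as np

def restrict_skeleton_by_distance(skeleton_coords, max_skeletal_length):
    """Keep the leading portion of an ordered skeleton (N x 3 coordinates,
    upstream end first) whose cumulative path length stays within the budget."""
    pts = np.asarray(skeleton_coords, dtype=float)
    # per-segment lengths, then cumulative distance from the upstream end
    seg_lens = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_lens)])
    return pts[cum <= max_skeletal_length]

# points at 0, 1000, 2000, 3000 nm: a 1500 nm budget keeps the first two
restricted = restrict_skeleton_by_distance(
    [[0, 0, 0], [1000, 0, 0], [2000, 0, 0], [3000, 0, 0]], 1500.0)
```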

neurd.concept_network_utils.skeleton_upstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_idx=True, plot_skeleton=False, verbose=False, **kwargs)[source]
neurd.concept_network_utils.skeleton_upstream_downstream(limb_obj, branch_idx, direction, distance=inf, only_non_branching=True, include_branch_idx=True, include_branch_in_dist=True, plot_skeleton=False, verbose=False, **kwargs)[source]

Purpose: To get the upstream or downstream skeleton of a branch

Ex: skel = downstream_skeleton(limb_obj, 96, only_non_branching_downstream=False, downstream_distance=30000)

neurd.concept_network_utils.subgraph_around_branch(limb_obj, branch_idx, upstream_distance=0, downstream_distance=0, distance=None, distance_attribute='skeletal_length', include_branch_in_upstream_dist=True, include_branch_in_downstream_dist=True, only_non_branching_downstream=True, only_non_branching_upstream=False, include_branch_idx=True, return_branch_idxs=True, plot_subgraph=False, nodes_to_exclude=None, nodes_to_include=None, verbose=False)[source]

Purpose: To return a subgraph around a certain branch to find all the nodes upstream and/or downstream

Pseudocode: 1) Find all the branches upstream of the branch (subtract the branch's skeletal length if it is included in the upstream dist) 2) Find all the branches downstream of the branch (subtract the branch's skeletal length if it is included in the downstream dist) 3) Find the upstream and downstream nodes a certain distance away

Ex: cnu.subgraph_around_branch(limb_obj, branch_idx=97, upstream_distance=1000000, downstream_distance=1000000, distance=None, distance_attribute="skeletal_length", include_branch_in_upstream_dist=True, include_branch_in_downstream_dist=True, only_non_branching_downstream=True, include_branch_idx=False, return_branch_idxs=True, plot_subgraph=True, verbose=False)
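The upstream/downstream gathering can be sketched with parent and child dicts standing in for the directional concept network; the only_non_branching flags of the real function are omitted here:

```python
def subgraph_branches(parents, children, branch_idx, up_dist, down_dist, lengths):
    """Gather branch indices within a skeletal-length budget upstream and
    downstream of branch_idx (sketch; dict inputs are stand-ins)."""
    def walk(step, budget):
        out, frontier = set(), [(branch_idx, 0.0)]
        while frontier:
            node, dist = frontier.pop()
            for nxt in step(node):
                d = dist + lengths[nxt]
                if d <= budget and nxt not in out:
                    out.add(nxt)
                    frontier.append((nxt, d))
        return out

    up = walk(lambda n: [parents[n]] if n in parents else [], up_dist)
    down = walk(lambda n: children.get(n, []), down_dist)
    return sorted(up | down | {branch_idx})

# branch 1 sits between root 0 and leaves 2, 3
parents = {1: 0, 2: 1, 3: 1}
children = {0: [1], 1: [2, 3]}
lengths = {0: 10.0, 1: 10.0, 2: 10.0, 3: 10.0}
both_ways = subgraph_branches(parents, children, 1, 100.0, 100.0, lengths)
down_only = subgraph_branches(parents, children, 1, 0.0, 100.0, lengths)
```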

neurd.concept_network_utils.sum_feature_over_branches(limb_obj, branches, direction=None, include_original_branches_in_direction=False, feature_name=None, feature_function=None, combining_function=None, default_value=0, verbose=False, **kwargs)[source]

Purpose: To find the sum of a feature over a list of branches

Pseudocode: 1) Find the features over the branches 2) Sum them

Ex: cnu.sum_feature_over_branches(limb_obj=n_obj_2[6], branches=[24, 2], direction="upstream", verbose=True, feature_function=ns.width)

neurd.concept_network_utils.synapse_density_downstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, synapse_density_type='synapse_density', nodes_to_exclude=None, **kwargs)[source]
neurd.concept_network_utils.synapse_density_upstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, synapse_density_type='synapse_density', nodes_to_exclude=None, filter_away_zero_widths=True, **kwargs)[source]

Ex: cnu.width_downstream(limb_obj, branch_idx=65, distance=np.inf, only_non_branching=False, include_branch_in_dist=True, include_branch_idx=True, verbose=False, width_func=au.axon_width, width_attribute=None, return_nodes=False, nodes_to_exclude=None, filter_away_zero_widths=True)

neurd.concept_network_utils.synapse_density_upstream_downstream(limb_obj, branch_idx, direction, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, synapse_density_type='synapse_density', nodes_to_exclude=None, **kwargs)[source]

Purpose: To find the upstream or downstream synapse density

Pseudocode: 1) Get all up/down sk lengths 2) Get all up/down widths 3) Filter away zero widths if the argument is set 4) If the arrays are non-empty, compute the weighted average

neurd.concept_network_utils.synapses_downstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, plot_synapses=False, verbose=False, synapse_type='synapses', return_nodes=False, nodes_to_exclude=None, **kwargs)[source]
neurd.concept_network_utils.synapses_upstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, plot_synapses=False, verbose=False, synapse_type='synapses', return_nodes=False, nodes_to_exclude=None, **kwargs)[source]
neurd.concept_network_utils.synapses_upstream_downstream(limb_obj, branch_idx, direction, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, plot_synapses=False, verbose=False, synapse_type='synapses', return_nodes=False, nodes_to_exclude=None, **kwargs)[source]

Purpose: To get the upstream or downstream synapses of a branch

Ex: syns = downstream_synapses(limb_obj, 16, downstream_distance=0, include_branch_in_downstream_dist=False, only_non_branching_downstream=False, plot_synapses=True)

neurd.concept_network_utils.upstream_branches_in_branches_list(limb_obj, branches)[source]

Purpose: To return the branch idxs in the list that have other listed branch idxs among their downstream nodes

Pseudocode: For each branch idx 1) Get all the downstream nodes 2) Add it to the upstream list if an intersection exists
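The pseudocode is a per-branch set intersection; a sketch assuming the downstream node sets have already been computed into a dict (an assumed precomputation, since the real function derives them from the limb object):

```python
def upstream_branches_in_list(downstream_map, branches):
    """Return the branches from `branches` whose downstream set contains
    another branch from the same list."""
    branch_set = set(branches)
    return [b for b in branches
            # intersect the other listed branches with b's downstream nodes
            if branch_set & (set(downstream_map.get(b, ())) - {b})]

# branch 20 is downstream of 10, so only 10 counts as an upstream branch
result = upstream_branches_in_list({10: [11, 12, 20], 20: [21]}, [10, 20, 30])
```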

neurd.concept_network_utils.upstream_nodes_without_branching(limb_obj, branch_idx, nodes_to_exclude=None)[source]

Purpose: To return all nodes that are upstream of a branch but not past a branching point

neurd.concept_network_utils.weighted_attribute_upstream_downstream(limb_obj, branch_idx, direction, attribute_name, attribute_func=None, verbose=False, filter_away_zero_sk_lengths=True, **kwargs)[source]
neurd.concept_network_utils.weighted_feature_over_branches(limb_obj, branches, direction=None, include_original_branches_in_direction=False, feature_name=None, feature_function=None, combining_function=None, default_value=0, verbose=False, **kwargs)[source]

Purpose: To find the average value over a list of branches

Pseudocode: 1) Find the features over the branches with their skeletal lengths 2) Do a weighted average based on skeletal length

Ex: weighted_feature_over_branches(limb_obj=n_obj_2[6], branches=[24, 2], direction="upstream", verbose=True, feature_function=ns.width)
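The weighted average in step 2 is numpy.average with skeletal lengths as weights; a sketch on plain arrays (the function name mirrors the one above but the array interface is an assumption):

```python
import numpy as np

def weighted_feature_over_branches(values, skeletal_lengths, default_value=0.0):
    """Skeletal-length-weighted average of a per-branch feature; falls back to
    default_value when there is nothing to average."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(skeletal_lengths, dtype=float)
    if values.size == 0 or weights.sum() == 0:
        return default_value
    return float(np.average(values, weights=weights))

# the 3x-longer branch dominates: (100*1 + 200*3) / 4 = 175
avg = weighted_feature_over_branches([100.0, 200.0], [1.0, 3.0])
```

Weighting by skeletal length keeps short stub branches from skewing statistics such as width or synapse density.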

neurd.concept_network_utils.width_downstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, width_func=None, width_attribute=None, nodes_to_exclude=None, **kwargs)[source]
neurd.concept_network_utils.width_downstream_restricted(limb_obj, branch_idx, downstream_skeletal_length, downstream_nodes=None, plot_restricted_skeleton=False, remove_spines_from_mesh=True, verbose=False, **kwargs)[source]

Purpose: To find the width around a skeleton starting from a certain branch and upstream coordinate

Ex:

from neurd import concept_network_utils as cnu

cnu.width_downstream_restricted(limb_obj=neuron_obj_exc_syn_sp[0], branch_idx=21, downstream_skeletal_length=30_000, downstream_nodes=[21, 26, 30, 35], nodes_to_exclude=None, plot_restricted_skeleton=True, remove_spines_from_mesh=True, verbose=True)

neurd.concept_network_utils.width_upstream(limb_obj, branch_idx, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, width_func=None, width_attribute=None, nodes_to_exclude=None, **kwargs)[source]

Ex: cnu.width_downstream(limb_obj, branch_idx=65, distance=np.inf, only_non_branching=False, include_branch_in_dist=True, include_branch_idx=True, verbose=False, width_func=au.axon_width, width_attribute=None, return_nodes=False, nodes_to_exclude=None)

neurd.concept_network_utils.width_upstream_downstream(limb_obj, branch_idx, direction, distance=inf, only_non_branching=True, include_branch_in_dist=True, include_branch_idx=True, verbose=False, width_func=None, width_attribute=None, nodes_to_exclude=None, **kwargs)[source]

neurd.connectome_analysis_utils module

neurd.connectome_analysis_utils.cell_count(df, synapse_type, return_str=True)[source]
neurd.connectome_analysis_utils.plot_cell_type_edge_stat(edge_df, cell_type_feature='cell_type', presyn_cell_type_feature=None, postsyn_cell_type_feature=None, add_presyn_postsyn_to_name=True, verbose=True, stat_to_plot='postsyn_skeletal_distance_to_soma', density=True, filter_away_0=False, maximum_percentile=98, alpha=0.3, bins=100, figsize=None, axes_height=3, axes_width=8, title_suffix='')[source]
neurd.connectome_analysis_utils.plot_histogram_discrete_labels(edge_df, restrictions_dicts=None, compartment_labels=None, cell_type_fine_labels=None, synapse_type='postsyn', histogram_attribute='presyn_skeletal_distance_to_soma', twin_color='blue', normalize=True, cumulative=True, verbose=True, labels=None, y_label=None, x_label=None, title=None, add_cell_counts_to_title=True, fontsize_title=None, figsize=(8, 5), fontsize_axes=16, fontsize_tick=20, nbins=100, legend=False, **kwargs)[source]
neurd.connectome_analysis_utils.restrict_edge_df_by_types_compartment(edge_df, synapse_type='postsyn', cell_type=None, cell_type_attribute='gnn_cell_type', cell_type_fine=None, cell_type_fine_attribute='gnn_cell_type_fine', compartment=None, restriction_dict=None, presyn_skeletal_distance_to_soma_max=None, presyn_skeletal_distance_to_soma_min=None, postsyn_skeletal_distance_to_soma_max=None, postsyn_skeletal_distance_to_soma_min=None, presyn_soma_euclid_dist_max=None, presyn_soma_euclid_dist_min=None, postsyn_soma_euclid_dist_max=None, postsyn_soma_euclid_dist_min=None, verbose=True, return_name=False, add_number_of_cells_to_name=True)[source]

neurd.connectome_query_utils module

To help query the graph object and do visualizations

neurd.connectome_query_utils.excitatory_cells_node_df(G=None, node_df=None, **kwargs)[source]
neurd.connectome_query_utils.inhibitory_cells_node_df(G=None, node_df=None, **kwargs)[source]
neurd.connectome_query_utils.n_excitatory_n_inhibitory_nodes(G=None, node_df=None, verbose=False)[source]
neurd.connectome_query_utils.node_df_from_attribute_value(attribute_type=None, attribute_value=None, query=None, G=None, node_df=None, **kwargs)[source]
neurd.connectome_query_utils.node_df_from_query(query, G=None, node_df=None, verbose=False, **kwargs)[source]

Purpose: Will return the node df restricted by a query

neurd.connectome_query_utils.soma_centers_from_node_df(node_df)[source]
neurd.connectome_query_utils.soma_centers_from_node_query(query, G=None, node_df=None, verbose=False, return_query_df=False)[source]

Purpose: To query the nodes of the graph and return the soma centers

Pseudocode: 1) apply query to the node df 2) export the soma centers of the query 3) return the queried table if requested

Ex: conq.soma_centers_from_node_query(query="cell_type == 'inhibitory'", node_df=node_df, verbose=True, return_query_df=False)
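The three pseudocode steps map directly onto pandas: query the node table, then slice out the centroid columns. A sketch assuming centroid_x/y/z column names (the real node_df schema may differ):

```python
import pandas as pd

def soma_centers_from_node_query(node_df, query, return_query_df=False):
    """Apply a pandas query string to the node table and return the soma
    center coordinates, optionally also returning the queried table."""
    query_df = node_df.query(query)
    centers = query_df[["centroid_x", "centroid_y", "centroid_z"]].to_numpy()
    return (centers, query_df) if return_query_df else centers

# toy node table with one inhibitory and one excitatory cell
node_df = pd.DataFrame({
    "cell_type": ["inhibitory", "excitatory"],
    "centroid_x": [1.0, 2.0],
    "centroid_y": [1.0, 2.0],
    "centroid_z": [1.0, 2.0],
})
centers = soma_centers_from_node_query(node_df, "cell_type == 'inhibitory'")
```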

neurd.connectome_utils module

Purpose: To provide helpful functions for analyzing the MICrONS graph

neurd.connectome_utils.add_axes_subset_soma_to_syn_euclidean_dist_to_edge_df(edge_df, syn_type=('presyn', 'postsyn'), axes='xz')[source]

Purpose: To add the distance measure from synapse to presyn or postsyn

neurd.connectome_utils.add_compartment_syn_flag_columns_to_edge_df(df, return_columns=False)[source]
neurd.connectome_utils.add_delta_ori_edge_features(edge_df)[source]
neurd.connectome_utils.add_presyn_postsyn_syn_dist_signed_to_edge_df(df, centroid_name='centroid')[source]
neurd.connectome_utils.add_spine_syn_flag_columns_to_edge_df(df, return_columns=False)[source]
neurd.connectome_utils.add_synapse_xyz_to_edge_df(edge_df, node_df=None, G=None)[source]

Purpose: To add the synapse center coordinates onto an edge df

neurd.connectome_utils.attribute_from_edge_dict(edge_dict, attribute)[source]
neurd.connectome_utils.basic_connectivity_axon_dendrite_stats_from_G(G, G_lite=None, verbose=True, verbose_time=True, n_samples=300, n_samples_exc=300, graph_functions_kwargs=None, graph_functions_G=None, graph_functions=None)[source]

Purpose: To compute connectivity statistics on the graph for the excitatory and inhibitory populations

Pseudocode: 1) Get the excitatory and inhibitory nodes 2) Apply the graph statistic functions to samples from each group

neurd.connectome_utils.compute_edge_statistic(G, edge_func, verbose=False, verbose_loop=False)[source]
neurd.connectome_utils.compute_presyn_postsyn_walk_euclidean_skeletal_dist(G, verbose=False, verbose_loop=False)[source]
neurd.connectome_utils.compute_presyn_soma_postsyn_soma_euclid_dist_axis(G, verbose=False, verbose_loop=False)[source]
neurd.connectome_utils.computed_presyn_postsyn_soma_relative_synapse_coordinate(G, verbose=False, verbose_loop=False)[source]
neurd.connectome_utils.exc_to_exc_edge_df(G, min_skeletal_length=100000, verbose=False, filter_presyns_with_soma_postsyn=True, keep_manual_proofread_nodes=True, presyn_name_in_G='u')[source]

Purpose: Produce a filtered edge df for excitatory to excitatory connections

neurd.connectome_utils.excitatory_nodes(G, attriubte='cell_type', verbose=False)[source]
neurd.connectome_utils.filter_away_presyns_with_soma_postsyns(edge_df, keep_manual_proofread_nodes=True, man_proofread_nodes=None)[source]
neurd.connectome_utils.inhibitory_nodes(G, attriubte='cell_type', verbose=False)[source]
neurd.connectome_utils.mean_axon_skeletal_length(G, nodes=None)[source]
neurd.connectome_utils.mean_dendrite_skeletal_length(G, nodes=None)[source]
neurd.connectome_utils.n_compartment_syn_from_edge_df(df)[source]

Purpose: to get a dataframe that maps the source,target edges to the number of compartment synapses

Application: Can be used to append to another dataframe

neurd.connectome_utils.n_spine_syn_from_edge_df(df)[source]

Purpose: to get a dataframe that maps the source,target edges to the number of spine synapses

Application: Can be used to append to another dataframe

neurd.connectome_utils.neuroglancer_df_from_edge_df(G, df, columns_at_front=('presyn_segment_id', 'presyn_gnn_cell_type_fine', 'presyn_external_layer', 'presyn_external_visual_area', 'postsyn_segment_id', 'postsyn_gnn_cell_type_fine', 'postsyn_external_layer', 'postsyn_external_visual_area', 'postsyn_spine_bouton', 'synapse_id', 'synapse_x', 'synapse_y', 'synapse_z'), neuroglancer_column='neuroglancer', verbose=False, verbose_cell_type=False, suppress_errors=True)[source]

Purpose: From a dataframe that is an edge df want to generate a spreadsheet with all of the edge features and a neuroglancer link for the connection

Pseudocode: 1) Turn the dataframe into dictionaries, and for each dictionary: a) generate the neuroglancer link b) add it to a list of dictionaries 2) Convert the list of dictionaries to a dataframe

neurd.connectome_utils.plot_3D_distribution_attribute(attribute, df=None, G=None, discrete=False, density=False, n_bins_attribute=20, n_intervals=10, n_bins_intervals=20, hue='gnn_cell_type_fine', color_dict=None, verbose=False, scatter_size=0.4, axis_box_off=False)[source]

Purpose: To plot a discrete or continuous value in 3D

Pseudocode: 1) Get the soma centers 2) Get the current value for all of the nodes –> decide if this is discrete or continuous

2a) If continuous: –> send to the heat map 3D

2b) If discrete: i) Generate a color list for all the unique values ii) Generate a color list for all of the points iii) Plot a legend of that for color scale iv) Plot the y coordinate of each v) Plot the 3D values of each

Ex: conu.plot_3D_distribution_attribute("axon_soma_angle_max", df=df_to_plot, verbose=True) # or pass "gnn_cell_type_fine" for a discrete attribute

neurd.connectome_utils.plot_3d_attribute(df, attribute, G=None, discrete=False, density=False, n_bins_attribute=20, n_intervals=10, n_bins_intervals=20, hue='gnn_cell_type_fine', color_dict=None, verbose=False, scatter_size=0.4, axis_box_off=False, plot_visual_area=False)[source]
neurd.connectome_utils.plot_cell_type_pre_post_attribute_hist(df, cell_type_pairs, attribute='delta_ori_rad', n_synthetic_control=5, n_samples=None, seed=None, bins=40, verbose=False, return_dfs=False)[source]

Purpose: To look at the histogram of attributes for different presyn-postsyn cell type pairs

neurd.connectome_utils.plot_functional_connection_from_df(df, G, idx=0, method='meshafterparty', ori_min=None, ori_max=None, pre_ori_min=None, pre_ori_max=None, post_ori_min=None, post_ori_max=None, delta_ori_min=None, delta_ori_max=None, features_to_print=['synapse_id', 'postsyn_spine_bouton', 'presyn_soma_postsyn_soma_euclid_dist_xz', 'presyn_soma_postsyn_soma_euclid_dist_y_signed', 'presyn_skeletal_distance_to_soma', 'postsyn_skeletal_distance_to_soma', 'presyn_external_layer', 'postsyn_external_layer', 'presyn_external_visual_area', 'postsyn_external_visual_area', 'presyn_gnn_cell_type_fine', 'postsyn_gnn_cell_type_fine'], functional_features_to_print=['presyn_ori_rad', 'postsyn_ori_rad', 'delta_ori_rad'], verbose=True)[source]

Purpose: To visualize the connections and their delta orientation

Pseudocode: 0) Restrict to the orientation range 1) Restrict to the delta range 2) Use the current idx to get the current row 3) Plot the connection

neurd.connectome_utils.plot_restricted_edge_df_delta_ori(edge_df, postsyn_compartments=None, spine_categories=None, cell_types=None, presyn_cell_types=None, postsyn_cell_types=None, layers=None, presyn_layers=None, postsyn_layers=None, functional_methods=None, proofreading_methods=None, presyn_proofreading_methods=None, postsyn_proofreading_methods=None, visual_areas=None, presyn_visual_areas=None, postsyn_visual_areas=None, plot_soma_dist=True, attribute='delta_ori_rad', interval_attribute='postsyn_skeletal_distance_to_soma', n_bins=10, plot_scatter=True, plot_histograms_over_intervals=True, verbose=True, **kwargs)[source]

Purpose: To do the delta ori analysis given certain requirements

neurd.connectome_utils.plot_soma_dist_distr(df)[source]
neurd.connectome_utils.plotting_edge_df_delta_ori(edge_df, title_suffix, plot_soma_dist=False, attribute='delta_ori_rad', interval_attribute='postsyn_skeletal_distance_to_soma', n_bins=10, plot_scatter=True, plot_histograms_over_intervals=True, verbose=True)[source]
neurd.connectome_utils.postsyn_skeletal_distance_to_soma_from_edge_dict(edge_dict)[source]
neurd.connectome_utils.pre_post_node_names_from_synapse_id(G, synapse_id, node_names=None, return_one=True)[source]

Purpose: To go from synapse ids to the presyn postsyn segments associated with them by a graph lookup

Example: conu.pre_post_node_names_from_synapse_id(G, node_names=segment_ids, synapse_id=649684, return_one=True)

neurd.connectome_utils.presyn_postsyn_skeletal_path_from_synapse_id(G, synapse_id, synapse_coordinate=None, segment_ids=None, return_nm=False, plot_skeletal_paths=False, synapse_color='red', synapse_scatter_size=0.2, path_presyn_color='yellow', path_postsyn_color='blue', path_scatter_size=0.05, plot_meshes=True, mesh_presyn_color='orange', mesh_postsyn_color='aqua', verbose=False, debug_time=False, segment_length=2000, remove_soma_synanpse_nodes=True)[source]

Purpose: To develop skeletal path coordinates between two segment ids, with each path passing through its soma

Application: Can then be sent to a plotting function

Pseudocode: 1) Get the segment ids paired with that synapse id (get synapse coordinate if not precomputed) 2) Get the proofread skeletons associated with the segment_ids 3) Get the soma center coordinates to determine paths 4) Get the closest skeleton node to the coordinate 5) Find the skeletal path in coordinates 6) Plot the skeletal paths

Ex: G["864691136830770542_0"]["864691136881594990_0"]
synapse_id = 299949435
segment_ids = ["864691136830770542_0", "864691136881594990_0"]

conu.presyn_postsyn_skeletal_path_from_synapse_id(G, synapse_id=synapse_id, synapse_coordinate=None, segment_ids=segment_ids, return_nm=True, verbose=True, plot_skeletal_paths=True, path_scatter_size=0.04)

neurd.connectome_utils.presyn_postsyn_soma_relative_synapse_coordinate(G, segment_id_1, segment_id_2, verbose=False)[source]
neurd.connectome_utils.presyn_postsyn_walk_euclidean_skeletal_dist(G, segment_id_1, segment_id_2, verbose=False)[source]
neurd.connectome_utils.presyn_soma_postsyn_soma_euclid_dist_axis(G, segment_id_1, segment_id_2, verbose=False)[source]
neurd.connectome_utils.presyns_with_soma_postsyns(edge_df, keep_manual_proofread_nodes=True, man_proofread_nodes=None, filter_away_from_df=False)[source]
neurd.connectome_utils.radius_cell_type_sampling(df=None, center=None, radius=None, cell_type_coarse=None, cell_type_fine=None, randomize=True, verbose=False, **kwargs)[source]

Purpose: To find cells satisfying a query, such as a radius around a center point and/or a cell type

queryable: 1) Radius (center) 2) Cell Type

Pseudocode: 1) Get the seg_split_centroid table 2) Apply the radius restriction
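The radius restriction in step 2 is a euclidean-distance mask over the centroid table; a sketch on plain arrays (the column layout and label arrays stand in for the seg_split_centroid table):

```python
import numpy as np

def radius_cell_type_restriction(centroids, center, radius,
                                 cell_types=None, type_labels=None):
    """Boolean mask for cells within `radius` of `center`, optionally also
    requiring membership in a set of cell-type labels."""
    centroids = np.asarray(centroids, dtype=float)
    mask = np.linalg.norm(centroids - np.asarray(center, dtype=float),
                          axis=1) <= radius
    if cell_types is not None:
        # AND the spatial mask with a cell-type membership mask
        mask &= np.isin(np.asarray(type_labels), cell_types)
    return mask

# only the first centroid lies within 5 units of the origin
spatial = radius_cell_type_restriction([[0, 0, 0], [10, 0, 0]], [0, 0, 0], 5.0)
# both are in range, but only the first matches the requested type
typed = radius_cell_type_restriction([[0, 0, 0], [1, 0, 0]], [0, 0, 0], 5.0,
                                     cell_types=["excitatory"],
                                     type_labels=["excitatory", "inhibitory"])
```

A `randomize` step would then just shuffle (or sample from) the indices where the mask is True.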

neurd.connectome_utils.restrict_edge_df(df, postsyn_compartments=None, spine_categories=None, cell_types=None, presyn_cell_types=None, postsyn_cell_types=None, layers=None, presyn_layers=None, postsyn_layers=None, functional_methods=None, proofreading_methods=None, presyn_proofreading_methods=None, postsyn_proofreading_methods=None, visual_areas=None, presyn_visual_areas=None, postsyn_visual_areas=None, return_title_suffix=True, title_suffix_from_non_None=False, verbose=False)[source]
neurd.connectome_utils.restrict_edge_df_by_cell_type_and_layer(df, cell_types=None, presyn_cell_types=None, postsyn_cell_types=None, layers=None, presyn_layers=None, postsyn_layers=None, **kwargs)[source]

Purpose: To plot the delta ori histogram for a certain presyn/postsyn type in a dataframe

neurd.connectome_utils.segment_id_from_seg_split_id(seg_split_id)[source]
neurd.connectome_utils.segment_ids_from_synapse_ids(G, synapse_ids, verbose=False, **kwargs)[source]

Purpose: To get all segment ids involved with one synapse

Ex:
segment_ids = conu.segment_ids_from_synapse_ids(G, synapse_ids=16551759, verbose=verbose)

neurd.connectome_utils.set_edge_attribute_from_node_attribute(G, attribute, verbose=True)[source]
neurd.connectome_utils.set_edge_presyn_postsyn_centroid(G, verbose=True)[source]
neurd.connectome_utils.soma_center_from_segment_id(G, segment_id, return_nm=False)[source]

Purpose: To get the soma center of a segment id from the Graph

Ex:
conu.soma_center_from_segment_id(G, segment_id="864691136388279671_0", return_nm=True)

neurd.connectome_utils.soma_centers_from_node_df(node_df, return_nm=True)[source]
neurd.connectome_utils.soma_centers_from_segment_ids(G, segment_ids, return_nm=False)[source]
neurd.connectome_utils.syn_coordinate_from_edge_dict(edge_dict)[source]
neurd.connectome_utils.synapse_coordinate_from_seg_split_syn_id(G, presyn_id, postsyn_id, synapse_id, return_nm=False)[source]

Will return a synapse coordinate based on the presyn id, postsyn id, and synapse id

Ex:
conu.synapse_coordinate_from_seg_split_syn_id(G, pre_seg, post_seg, synapse_id, return_nm=True)

neurd.connectome_utils.synapse_ids_and_coord_from_segment_ids_edge(G, segment_id_1, segment_id_2, return_nm=False, verbose=False)[source]
neurd.connectome_utils.synapses_from_segment_id_edges(G, segment_id_edges=None, segment_ids=None, synapse_ids=None, return_synapse_coordinates=True, return_synapse_ids=False, return_nm=False, return_in_dict=False, verbose=False)[source]

Purpose: For all segment_ids get the synapses or synapse coordinates for the edges between them from the graph

Ex:
seg_split_ids = ["864691136388279671_0", "864691135403726574_0", "864691136194013910_0"]
conu.synapses_from_segment_id_edges(G, segment_ids=seg_split_ids, return_nm=True)

neurd.connectome_utils.visualize_graph_connections_by_method(G, segment_ids=None, method='meshafterparty', synapse_ids=None, segment_ids_colors=None, synapse_color='red', plot_soma_centers=True, verbose=False, verbose_cell_type=True, plot_synapse_skeletal_paths=True, plot_proofread_skeleton=False, synapse_path_presyn_color='aqua', synapse_path_postsyn_color='orange', synapse_path_donwsample_factor=None, transparency=0.9, output_type='server', plot_compartments=False, plot_error_mesh=False, synapse_scatter_size=0.2, synapse_path_scatter_size=0.1, debug_time=False, plot_gnn=True, gnn_embedding_df=None)[source]

Purpose: A generic function that will prepare the visualization information for either plotting in neuroglancer or meshAfterParty

Pseudocode: 0) Determine whether results should be returned in nm 1) Get the segment id colors 2) Get the synapses for all the segment pairs 3) Get the soma centers if requested 4) Get the regular int names for segment_ids (if plotting in neuroglancer)

Ex:
from neurd import connectome_utils as conu
conu.visualize_graph_connections_by_method(G, ["864691136023767609_0", "864691135617737103_0"], method="neuroglancer")

neurd.dandi_utils module

DANDI: a centralized place to share brain-related datasets (usually in NWB format) - supported file formats: NWB (Neurodata Without Borders), BIDS (Brain Imaging Data Structure, for MRI, fMRI)

Features: 1) Version control 2) Follows the FAIR principles 3) Handles massive datasets

FAIR: Findable, Accessible, Interoperable, Reusable

Typical process: 1) create an NWB file (using PyNWB) 2) upload the data to the DANDI archive using the command-line tool dandi-cli 3) share the datasets or cite them in publications 4) other labs can download the datasets using the web interface

neurd.documentation_utils module

utility and wrapper functions to help output separate documentation derived from the documentation in the code

neurd.documentation_utils.name(name: str)[source]
neurd.documentation_utils.tag(tags: str | List[str] = 'default')[source]

Adds an attribute called "tag" to a function; the attribute is a list built from the string or list of strings passed as the argument

Parameters:

tags (Union[str, List[str]], optional) – the tag or tags to attach to the function, by default "default"
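As a rough illustration of the behavior described above, a minimal decorator along these lines (a hypothetical re-implementation, not the neurd source) could look like:

```python
from typing import Callable, List, Union

def tag(tags: Union[str, List[str]] = "default") -> Callable:
    """Attach a `tag` attribute (always a list of strings) to a function."""
    def decorator(func: Callable) -> Callable:
        # normalize a single string into a one-element list
        func.tag = [tags] if isinstance(tags, str) else list(tags)
        return func
    return decorator

@tag(["plotting", "debug"])
def example_func():
    pass
```

Functions can then be grouped for documentation by walking a module and collecting everything whose `tag` list contains a given string.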

neurd.error_detection module

neurd.error_detection.attempt_width_matching_for_fork_divergence(neuron_obj, fork_div_limb_branch, width_match_threshold=10, width_match_buffer=10, verbose=False)[source]

Purpose: To see if there is a possible winner in the forking based on width matching, and if there is then remove it from the error branches

Pseudocode: 1) Divide the branches into sibling groups 2) For each sibling group:

  1. Get the upstream node and its width

  2. Get the widths of all of the sibling nodes

  3. Subtract the upstream node's width from them and take the absolute value

  4. Get the minimum difference and check 2 things:

    i) less than width_match_threshold ii) less than the maximum difference by width_match_buffer

e1. If yes -> then only add the argmax to the error branches e2. If no -> then add both to the error branches
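The sibling-group check above can be sketched in isolation; `width_match_winner` below is a hypothetical helper (not the neurd API) that applies steps 1-4 to one group of sibling widths:

```python
def width_match_winner(upstream_width, sibling_widths,
                       width_match_threshold=10, width_match_buffer=10):
    """Return the index of the sibling whose width best matches the upstream
    branch (the likely true continuation), or None if there is no clear winner."""
    # step 3: absolute width difference of each sibling vs the upstream branch
    diffs = [abs(w - upstream_width) for w in sibling_widths]
    best = min(range(len(diffs)), key=diffs.__getitem__)
    # step 4: the best match must be i) close to the upstream width and
    # ii) better than the worst match by at least width_match_buffer
    if diffs[best] < width_match_threshold and diffs[best] < max(diffs) - width_match_buffer:
        return best
    return None
```

When a winner exists, only its siblings stay in the error branches; otherwise every sibling is kept as an error.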

neurd.error_detection.axon_fork_divergence_errors_limb_branch_dict(neuron_obj, divergence_threshold_mean=160, width_threshold=90, upstream_width_max=90, verbose=False, plot_two_downstream_thick_axon_limb_branch=False, plot_fork_div_limb_branch=False, attempt_width_matching=True, width_match_threshold=10, width_match_buffer=10)[source]

Purpose: Will create a limb branch dict of all the skinny forking errors on an axon

Pseudocode: 1) Find the axon limb of the neuron (if none then return an empty dictionary) 2) Restrict the neuron to only axon pieces with a width below a certain threshold and having one sibling 3) Run the fork divergence function 4) Return the limb branch dict highlighting where the errors occurred

neurd.error_detection.calculate_skip_distance(limb_obj, branch_idx, calculate_skip_distance_including_downstream=True, verbose=False)[source]
neurd.error_detection.calculate_skip_distance_poly(x=None, y=None, degree=1)[source]
neurd.error_detection.cut_kissing_graph_edges(G, limb_obj, coordinate, kiss_check_bbox_longest_side_threshold=450, offset=1500, comparison_distance=2000, only_process_partitions_with_valid_edges=True, plot_offset_skeletons=False, plot_source_sink_vertices=False, plot_cut_vertices=False, plot_cut_bbox=False, verbose=False)[source]

Purpose: To remove edges in a connectivity graph that are between nodes that come from low mesh bridging (usually due to merge errors)

Pseudocode: 1) Get the mesh intersection 2) Get the offset skeletons and the endpoints 3) Find all possible partitions of the branch

neurd.error_detection.debug_branches_high_degree(neuron_obj, debug_branches=None)[source]
neurd.error_detection.debug_branches_low_degree(neuron_obj, debug_branches=[68])[source]
neurd.error_detection.dendrite_branch_restriction(neuron_obj, width_max=None, upstream_skeletal_length_min=None, plot=False, verbose=False)[source]
neurd.error_detection.double_back_axon_thick(neuron_obj, axon_width_threshold=None, axon_width_threshold_max=None, double_back_threshold=120, comparison_distance=1000, offset=0, branch_skeletal_length_min=4000, plot_starting_limb_branch=False, plot_double_back_errors=False, **kwargs)[source]

Purpose: To find all skeletal double back errors on the thick axon portion

neurd.error_detection.double_back_axon_thin(neuron_obj, axon_width_threshold=None, double_back_threshold=135, comparison_distance=1000, offset=0, branch_skeletal_length_min=4000, plot_starting_limb_branch=False, plot_double_back_errors=False, **kwargs)[source]

Purpose: To find all skeletal double back errors on the thin axon portion

neurd.error_detection.double_back_dendrite(neuron_obj, double_back_threshold=None, comparison_distance=None, offset=None, branch_skeletal_length_min=None, width_max=None, plot_starting_limb_branch=False, plot_double_back_errors=False, **kwargs)[source]

Purpose: To find all skeletal double back errors on the dendrite portion

neurd.error_detection.double_back_edges(limb, double_back_threshold=130, verbose=True, comparison_distance=3000, offset=0, path_to_check=None)[source]

Purpose: To get all of the edges where the skeleton doubles back on itself

Application: For error detection

neurd.error_detection.double_back_edges_path(limb, path_to_check, double_back_threshold=130, verbose=True, comparison_distance=3000, offset=0, return_all_edge_info=True, skip_nodes=[])[source]

Purpose: To get all of the edges where the skeleton doubles back on itself but only along a certain path

Application: For error detection

Example:
curr_limb.set_concept_network_directional(starting_node=2)
err_edges, edges, edges_width_jump = ed.double_back_edges_path(curr_limb, path_to_check=soma_to_soma_path)
err_edges, edges, edges_width_jump

neurd.error_detection.double_back_error_limb_branch_dict(neuron_obj, double_back_threshold=120, branch_skeletal_length_min=4000, limb_branch_dict_restriction=None, upstream_skeletal_length_min=5000, comparison_distance=3000, offset=0, plot_final_double_back=False, verbose=False, **kwargs)[source]

Purpose: To find all branches that have a skeleton that doubles back by a certain degree

Pseudocode: 0)

neurd.error_detection.double_back_threshold_axon_by_width(limb_obj=None, branch_idx=None, width=None, axon_thin_width_max=None, nodes_to_exclude=None, double_back_threshold_thin=None, double_back_threshold_thick=None)[source]

Purpose: Will compute the double back threshold to use based on the upstream width

neurd.error_detection.downstream_nodes_from_G(G)[source]
neurd.error_detection.error_branches_by_axons(neuron_obj, verbose=False, visualize_errors_at_end=False, min_skeletal_path_threshold=15000, sub_skeleton_length=20000, ais_angle_threshold=110, non_ais_angle_threshold=65)[source]
neurd.error_detection.error_faces_by_axons(neuron_obj, error_branches=None, verbose=False, visualize_errors_at_end=False, min_skeletal_path_threshold=15000, sub_skeleton_length=20000, ais_angle_threshold=110, non_ais_angle_threshold=65, return_axon_non_axon_faces=False)[source]

Purpose: Will return the faces that are errors after computing the branches that are errors

neurd.error_detection.high_degree_branch_errors_dendrite_limb_branch_dict(neuron_obj, skip_distance=None, plot_limb_branch_pre_filter=False, plot_limb_branch_post_filter=False, plot_limb_branch_errors=False, verbose=False, high_degree_order_verbose=False, filter_axon_spines=True, filter_short_thick_endnodes=False, debug_branches=None, width_max=None, upstream_width_max=None, offset=None, comparison_distance=None, width_diff_max=None, perform_synapse_filter=None, width_diff_perc_threshold=None, width_diff_perc_buffer=None, min_skeletal_length_endpoints=None, plot_endpoints_filtered=False, min_distance_from_soma_mesh=None, plot_soma_restr=False, use_high_degree_false_positive_filter=None, **kwargs)[source]
neurd.error_detection.high_degree_branch_errors_limb_branch_dict(neuron_obj, limb_branch_dict='axon', skip_distance=None, min_upstream_skeletal_distance=None, plot_limb_branch_pre_filter=False, plot_limb_branch_post_filter=False, plot_limb_branch_errors=False, verbose=False, high_degree_order_verbose=False, filter_axon_spines=True, axon_spines_limb_branch_dict=None, filter_short_thick_endnodes=True, debug_branches=None, **kwargs)[source]

Purpose: To resolve high degree nodes for a neuron

Pseudocode: 0) Get the limb branch dict to start from 1) Find all of the high degree coordinates on the axon limb

For each high degree coordinate: a. Send the coordinate to high_degree_upstream_match b. Get the error limbs back and, if non-empty, add them to the limb branch dict

Return the limb branch dict

neurd.error_detection.high_degree_false_positive_low_sibling_filter(limb_obj, branch_idx, downstream_idx, width_min=None, sibling_skeletal_angle_max=None, verbose=False)[source]

Purpose: To avoid erroring out high degree branches that have a degree of 4 where the error branches have a very low sibling angle

Pseudocode: 1) Check that there are 2 error branches 2) Check that the width is above a threshold 3) Find the skeletal angle between the two components 4) Return no errors if the angle is less than sibling_skeletal_angle_max

Ex:
high_degree_false_positive_low_sibling_filter(neuron_obj[2], 3, [1, 2], verbose=True, width_min=400)  # sibling_skeletal_angle_max=80
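Assuming the widths and the sibling angle have already been measured, the decision logic above reduces to a few comparisons; the helper below is a hypothetical sketch (not the neurd implementation), with `error_widths` standing in for the widths of the two candidate error branches:

```python
def low_sibling_angle_false_positive(error_widths, sibling_angle_degrees,
                                     width_min=400, sibling_skeletal_angle_max=80):
    """True when the candidate error branches should NOT be errored out:
    exactly 2 candidates, both sufficiently wide, nearly parallel skeletons."""
    if len(error_widths) != 2:          # step 1: exactly two error branches
        return False
    if min(error_widths) < width_min:   # step 2: both above the width threshold
        return False
    # steps 3-4: a small angle between the two branches suggests a real fork
    return sibling_angle_degrees < sibling_skeletal_angle_max
```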

neurd.error_detection.high_degree_upstream_match(limb_obj, branch_idx, skip_distance=None, min_upstream_skeletal_distance=None, remove_short_thick_endnodes=True, axon_spines=None, short_thick_endnodes_to_remove=None, min_degree_to_resolve=None, width_func=None, max_degree_to_resolve_absolute=None, max_degree_to_resolve=None, max_degree_to_resolve_wide=None, max_degree_to_resolve_width_threshold=None, width_max=None, upstream_width_max=None, axon_dependent=True, plot_starting_branches=False, offset=None, comparison_distance=None, plot_extracted_skeletons=False, worst_case_sk_angle_match_threshold=None, width_diff_max=None, width_diff_perc=None, perform_synapse_filter=None, synapse_density_diff_threshold=None, n_synapses_diff_threshold=None, plot_G_local_edge=False, sk_angle_match_threshold=None, sk_angle_buffer=None, width_diff_perc_threshold=None, width_diff_perc_buffer=None, plot_G_global_edge=False, plot_G_node_edge=False, kiss_check=None, kiss_check_bbox_longest_side_threshold=None, plot_final_branch_matches=False, match_method=None, use_exclusive_partner=None, use_high_degree_false_positive_filter=None, verbose=False)[source]

Purpose: To Determine if branches downstream from a certain branch should be errored out based on crossovers and high degree branching downstream

Pseudocode: Phase A: 1) Get all downstream branches (with an optional skip distance) 2) Remove short thick endnodes from the possible branches at the high degree point 3) Return if there are not enough branches at the intersection 4) If the branch being considered is thick enough then increase the max degree to resolve 5) Return all downstream branches as errors if the number of branches at the intersection is too large 6) Do not process the intersection if all the branches are thick or not all are axons (return no errors)

Phase B: 1) Compute features of a complete graph that connects all upstream and downstream edges (slightly different computation for upstream than downstream edges)

neurd.error_detection.high_low_degree_upstream_match_preprocessing(limb_obj, branch_idx, skip_distance=None, min_upstream_skeletal_distance=None, min_distance_from_soma_for_proof=None, short_thick_endnodes_to_remove=None, axon_spines=None, min_degree_to_resolve=None, width_func=None, max_degree_to_resolve_absolute=None, max_degree_to_resolve=None, max_degree_to_resolve_wide=None, max_degree_to_resolve_width_threshold=None, width_min=None, width_max=None, upstream_width_max=None, axon_dependent=None, return_skip_info=True, verbose=False)[source]

Purpose: To take a node on a limb and determine a) if the node should even be processed (and if it shouldn't, what the return value is) b) what the downstream nodes to be processed should be c) what the skip distance and skip nodes are

What we want to return: - return value - skip distance - skipped_nodes - downstream_branches

Pseudocode: 1) Calculate the skip distance

neurd.error_detection.low_degree_branch_errors_limb_branch_dict(neuron_obj, limb_branch_dict='axon', skip_distance=0, min_upstream_skeletal_distance=None, plot_limb_branch_pre_filter=False, plot_limb_branch_post_filter=False, plot_limb_branch_errors=False, verbose=False, low_degree_order_verbose=False, filter_axon_spines=True, filters_to_run=None, debug_branches=None, **kwargs)[source]

Purpose: To resolve low degree nodes for a neuron

Pseudocode: 0) Get the limb branch dict to start from 1) Find all of the low degree coordinates on the axon limb

For each coordinate: a. Send the coordinate to low_degree_upstream_match b. Get the error limbs back and, if non-empty, add them to the limb branch dict

Return the limb branch dict

Ex:
from neurd import error_detection as ed
ed.low_degree_branch_errors_limb_branch_dict(filt_neuron, verbose=True, low_degree_order_verbose=True, filters_to_run=[gf.axon_double_back_filter], plot_G_local_edge=True)

Ex on how to debug a certain filter on a certain branch:

neurd.error_detection.low_degree_upstream_match(limb_obj, branch_idx, skip_distance=None, min_upstream_skeletal_distance=None, remove_short_thick_endnodes=True, short_thick_endnodes_to_remove=None, axon_spines=None, min_degree_to_resolve=None, max_degree_to_resolve_wide=None, width_func=None, max_degree_to_resolve_absolute=None, max_degree_to_resolve=None, width_max=None, upstream_width_max=None, axon_dependent=True, plot_starting_branches=False, offset=None, comparison_distance=None, plot_extracted_skeletons=False, worst_case_sk_angle_match_threshold=None, width_diff_max=None, width_diff_perc=None, perform_synapse_filter=None, synapse_density_diff_threshold=None, n_synapses_diff_threshold=None, plot_G_local_edge=False, filters_to_run=None, verbose=False, **kwargs)[source]

Purpose: To Determine if branches downstream from a certain branch should be errored out based on forking rules

1) Determine if the branch should even be processed; if it should: 2) Calculate the edge attributes for this local graph 3) Iterate through all of the filters in filters_to_run

a. Send the limb and graph to the filter to run b.

neurd.error_detection.matched_branches_by_angle(limb_obj, branches, **kwargs)[source]
neurd.error_detection.matched_branches_by_angle_at_coordinate(limb_obj, coordinate, coordinate_branches=None, offset=1000, comparison_distance=1000, match_threshold=45, verbose=False, plot_intermediates=False, return_intermediates=False, plot_match_intermediates=False, less_than_threshold=True)[source]

Purpose: Given a list of branch indexes on a limb that all touch, a) find the skeleton angle between them all and b) apply a threshold on the angles to keep only those below/above it

Ex:
from neurd import error_detection as ed
ed.matched_branches_by_angle_at_coordinate(limb_obj, coordinate, offset=1500, comparison_distance=1000, match_threshold=40, verbose=True, plot_intermediates=False, plot_match_intermediates=False)

neurd.error_detection.path_to_edges(path, skip_nodes=[])[source]
neurd.error_detection.resolving_crossovers(limb_obj, coordinate, match_threshold=65, verbose=False, return_new_edges=True, return_subgraph=False, plot_intermediates=False, offset=1000, comparison_distance=1000, apply_width_filter=None, best_match_width_diff_max=None, best_match_width_diff_max_perc=None, best_match_width_diff_min=None, best_singular_match=None, lowest_angle_sum_for_pairs=None, return_existing_edges=True, edges_to_avoid=None, no_non_cut_disconnected_comps=None, branches_to_disconnect=None, **kwargs)[source]

Purpose: To determine the connectivity that should be at the location of a crossover (the cuts that should be made and the new connectivity)

Pseudocode: 1) Get all the branches that correspond to the coordinate 2) For each branch - get the boundary cosine angle between the other branches - if within a threshold then add an edge 3) Get the subgraph of all these branches: - find what edges you have to cut 4) Return the cuts/subgraph

Ex:
resolving_crossovers(limb_obj=copy.deepcopy(curr_limb), coordinate=high_degree_coordinates[0], match_threshold=40, verbose=False, return_new_edges=True, return_subgraph=True, plot_intermediates=False)
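The per-branch angle matching in step 2 can be sketched with plain vectors. `crossover_edges` below is a hypothetical helper (not the neurd API): it takes each branch's outgoing unit direction at the crossover coordinate and keeps the pairs whose skeletons line up within `match_threshold` degrees:

```python
import math
from itertools import combinations

def crossover_edges(branch_vectors, match_threshold=65):
    """branch_vectors: dict mapping branch index -> outgoing unit 3-vector.
    Returns pairs of branches whose skeletons continue into each other."""
    edges = []
    for (i, u), (j, v) in combinations(branch_vectors.items(), 2):
        # branch i continues into branch j when i's direction is roughly
        # opposite j's outgoing direction, so compare u against -v
        dot = -(u[0] * v[0] + u[1] * v[1] + u[2] * v[2])
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        if angle < match_threshold:
            edges.append((i, j))
    return edges
```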

neurd.error_detection.skip_distance_from_branch_width(width, max_skip=2300, skip_distance_poly=None)[source]

Purpose: To return the skip distance of the upstream branch based on its width

Pseudocode: 1) Evaluate the skip distance polynomial at the certain branch width
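A minimal sketch of that evaluation, assuming the polynomial is given as coefficients from highest to lowest degree (the default coefficients here are made up for illustration, not the fitted polynomial):

```python
def skip_distance_from_branch_width(width, max_skip=2300,
                                    skip_distance_poly=(4.0, 0.0)):
    """Evaluate the skip-distance polynomial at `width` (Horner's rule),
    then clamp the result to the range [0, max_skip]."""
    value = 0.0
    for coeff in skip_distance_poly:
        value = value * width + coeff
    return min(max(value, 0.0), max_skip)
```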

neurd.error_detection.thick_t_errors_limb_branch_dict(neuron_obj, axon_only=True, parent_width_maximum=70, min_child_width_max=78, child_skeletal_threshold=7000, plot_two_downstream_thin_axon_limb_branch=False, plot_wide_angled_children=False, plot_thick_t_crossing_limb_branch=False, plot_t_error_limb_branch=False, verbose=False)[source]

Purpose: To generate a limb branch dict of branches where a thick axon has probably crossed a smaller axon

Application: Will then be used to filter away in proofreading

Pseudocode: 1) Find all of the thin axon branches with 2 downstream nodes 2) Filter the list down to those with a) a high enough sibling angle b) a high enough minimum child skeletal length c) a high enough min child width max

** those branches that pass that filter are where the errors occur

For all error branches: i) find the downstream nodes ii) add the downstream nodes to the error branch list

Example:
ed.thick_t_errors_limb_branch_dict(filt_neuron, plot_two_downstream_thin_axon_limb_branch=False, plot_wide_angled_children=False, plot_thick_t_crossing_limb_branch=False, plot_t_error_limb_branch=True, verbose=True)

neurd.error_detection.upstream_node_from_G(G)[source]
neurd.error_detection.webbing_t_errors_limb_branch_dict(neuron_obj, axon_only=True, child_width_maximum=75, parent_width_maximum=75, plot_two_downstream_thin_axon_limb_branch=False, plot_wide_angled_children=False, error_if_web_is_none=True, verbose=True, web_size_threshold=120, web_size_type='ray_trace_median', web_above_threshold=True, plot_web_errors=False, child_skeletal_threshold=10000, ignore_if_child_mesh_not_touching=True)[source]

Purpose: Return all of the branches that are errors based on the rule that when the axon is small and forms a wide-angle t, there should be a characteristically wide webbing (if not, it is probably just a merge error)

Pseudocode: 1) Find all of the candidate branches in the axon 2) Find all those that have a webbing t error 3) Find all of the downstream nodes of those nodes and add them to a limb branch dict that gets returned

neurd.error_detection.webbing_t_errors_limb_branch_dict_old(neuron_obj, axon_only=True, child_width_maximum=75, parent_width_maximum=75, plot_two_downstream_thin_axon_limb_branch=False, plot_wide_angled_children=False, error_if_web_is_none=True, verbose=True, web_size_threshold=120, web_size_type='ray_trace_median', web_above_threshold=True, plot_web_errors=False, child_skeletal_threshold=10000, ignore_if_child_mesh_not_touching=True)[source]

Purpose: Return all of the branches that are errors based on the rule that when the axon is small and forms a wide-angle t, there should be a characteristically wide webbing (if not, it is probably just a merge error)

Pseudocode: 1) Find all of the candidate branches in the axon 2) Find all those that have a webbing t error 3) Find all of the downstream nodes of those nodes and add them to a limb branch dict that gets returned

neurd.error_detection.width_jump_double_back_edges_path(limb_obj, path, starting_coordinate=None, width_name='no_spine_median_mesh_center', width_name_backup='no_spine_median_mesh_center', skeletal_length_to_skip=5000, comparison_distance=4000, offset=2000, width_jump_threshold=200, width_jump_axon_like_threshold=250, running_width_jump_method=False, double_back_threshold=120, double_back_axon_like_threshold=None, perform_double_back_errors=True, perform_width_errors=True, perform_axon_width_errors=True, skip_double_back_errors_for_axon=True, allow_axon_double_back_angle_with_top=None, allow_axon_double_back_angle_with_top_width_min=110, verbose=True, return_all_edge_info=True, axon_comparison_distance=None)[source]

To get the double back and width jumps along a path of a limb (but only for those branches that are deemed significant by a long enough skeletal length)

– have options to set for both width and doubling back – option to set that will skip the doubling back if axon (or axon-like) or not – have option for axon width jump (so if want different than dendritic)

Pseudocode: 1) Get the order of coordinates on the path 2) Calculate the skeletal lengths of branches 3) Determine the branches that are too small skeletal-wise (deemed insignificant) and remove them from the path

– IF THERE IS AT LEAST 2 BRANCHES LEFT TO TEST –

  1. Revise the ordered coordinates by deleting the indexes that are too small

  2. Compute the new edges to test

  3. Get the pairs of endpoints for each edge

  4. Iterate through all of the edges to test
    • find if any of the branches are labeled as axon or axon-like

    1. get the skeleton and width boundary

    2. Get the width jump (and record)

    3. Get the skeleton angle (and record)

    4. Depending on the conditions set add the start node and then next node in the original path to the error edges if violates one of the rules

  5. Return the error edges and all of the skeleton angle, width jump data for the path analyzed
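The loop in steps 1-5 can be sketched on precomputed per-branch data. `path_error_edges` below is a hypothetical simplification (not the neurd API): it assumes branch widths, edge angles, and skeletal lengths have already been measured, and only reproduces the filtering and thresholding logic:

```python
def path_error_edges(path, widths, edge_angles, skeletal_lengths=None,
                     skeletal_length_to_skip=5000,
                     width_jump_threshold=200, double_back_threshold=120):
    """Flag consecutive edges along `path` where the width jumps up by more
    than width_jump_threshold or the skeleton doubles back by more than
    double_back_threshold degrees. Branches below the skeletal length
    cutoff are removed from the path first (deemed insignificant)."""
    if skeletal_lengths is not None:
        path = [b for b in path if skeletal_lengths[b] >= skeletal_length_to_skip]
    errors = []
    for up, down in zip(path, path[1:]):
        width_jump = widths[down] - widths[up]       # positive = widening
        doubles_back = edge_angles.get((up, down), 0) > double_back_threshold
        if width_jump > width_jump_threshold or doubles_back:
            errors.append((up, down))
    return errors
```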

neurd.error_detection.width_jump_edges(limb, width_name='no_spine_median_mesh_center', width_jump_threshold=100, verbose=False, path_to_check=None)[source]

Will only look to see if the width jumps up by a width_jump_threshold amount, and if it does, will save the edges according to that starting soma group

Example:
ed = reload(ed)
ed.width_jump_edges(neuron_obj[5], verbose=True)

neurd.error_detection.width_jump_edges_path(limb, path_to_check, width_name='no_spine_median_mesh_center', width_jump_threshold=100, verbose=False, return_all_edge_info=True, comparison_distance=3000, offset=1000, skip_nodes=[])[source]

Will only look to see if the width jumps up by a width_jump_threshold amount, but only along a certain path

Example:
curr_limb.set_concept_network_directional(starting_node=4)
err_edges, edges, edges_width_jump = ed.width_jump_edges_path(curr_limb, path_to_check=np.flip(soma_to_soma_path), width_jump_threshold=200)
err_edges, edges, edges_width_jump

neurd.error_detection.width_jump_from_upstream_min(limb_obj, branch_idx, skeletal_length_min=2000, verbose=False, **kwargs)[source]

Purpose: To find the width jump up of a current branch from those preceding it

Pseudocode: 1) Find the minimum preceding width 2) Find the current width 3) Subtract and return

Ex:
from neurd import error_detection as ed
ed.width_jump_from_upstream_min(limb_obj=neuron_obj[0], branch_idx=318, skeletal_length_min=2000, verbose=False)
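Given precomputed widths, the three steps reduce to one line; a hypothetical sketch (the real function extracts the widths from the limb object itself):

```python
def width_jump_from_upstream_min(upstream_widths, current_width):
    """Width jump of the current branch relative to the thinnest branch
    preceding it; a positive value means the branch got wider (suspicious)."""
    return current_width - min(upstream_widths)
```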

neurd.error_detection.width_jump_up_axon(neuron_obj, upstream_skeletal_length_min=None, branch_skeletal_length_min=None, upstream_skeletal_length_min_for_min=None, width_jump_max=None, axon_width_threshold_max=None, plot_width_errors=False, **kwargs)[source]

Purpose: To apply the width jump up check on the axon segments of neuron

Pseudocode: 0) Set the width parameters correctly for the axon 1) Find all of the axon branches 2) Run the width jump check

neurd.error_detection.width_jump_up_dendrite(neuron_obj, upstream_skeletal_length_min=None, branch_skeletal_length_min=None, upstream_skeletal_length_min_for_min=None, width_jump_max=None, plot_width_errors=False, **kwargs)[source]

Purpose: To apply the width jump up check on the dendrite segments of the neuron

Pseudocode: 0) Set the width parameters correctly for dendrites 1) Find all of the dendrite branches 2) Run the width jump check

neurd.error_detection.width_jump_up_error_limb_branch_dict(neuron_obj, limb_branch_dict_restriction=None, upstream_skeletal_length_min=10000, branch_skeletal_length_min=6000, upstream_skeletal_length_min_for_min=4000, width_jump_max=75, plot_final_width_jump=False, verbose=False, **kwargs)[source]

Purpose: To find all branches that have a jump up in width from the minimum of the upstream widths (indicative of an error)

Pseudocode: 0) Given a starting limb branch dict 1) Query the neuron for those branches that have a certain upstream width and a certain skeletal length 2) Query the neuron for those with a width jump above a certain amount 3) Graph the query

neurd.functional_tuning_utils module

neurd.functional_tuning_utils.add_on_delta_to_df(df, ori_name_1='ori_rad', ori_name_2='ori_rad_post', dir_name_1='dir_rad', dir_name_2='dir_rad_post')[source]
neurd.functional_tuning_utils.cdiff(alpha, beta, period=3.141592653589793, rad=True)[source]
neurd.functional_tuning_utils.cdist(alpha, beta, period=3.141592653589793, rad=True)[source]
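`cdiff` and `cdist` compute differences on a circle, e.g. between orientation tunings with period π. A common formulation is sketched below (hypothetical; edge-case conventions may differ from the neurd implementation):

```python
import math

def cdiff(alpha, beta, period=math.pi):
    """Signed circular difference alpha - beta, wrapped into [-period/2, period/2)."""
    return (alpha - beta + period / 2) % period - period / 2

def cdist(alpha, beta, period=math.pi):
    """Absolute circular distance, in [0, period/2]."""
    return abs(cdiff(alpha, beta, period))
```

With period π, orientations of 0.1 and π - 0.1 radians are only 0.2 apart, not π - 0.2, because orientation is undirected.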

neurd.gnn_cell_typing_utils module

neurd.gnn_embedding_utils module

neurd.graph_filters module

neurd.graph_filters.axon_double_back_filter(G, limb_obj, branch_skeletal_length_min=None, total_downstream_skeleton_length_threshold=None, upstream_skeletal_length_min=None, axon_width_threshold_thin=None, axon_width_threshold_thick=None, attempt_upstream_pair_singular=True, verbose=False, **kwargs)[source]

Purpose: Find errors if branches double back by too much

Pseudocode: 1)

neurd.graph_filters.axon_double_back_inh_filter(G, limb_obj, branch_skeletal_length_min=None, total_downstream_skeleton_length_threshold=None, upstream_skeletal_length_min=None, axon_width_threshold_thin=None, axon_width_threshold_thick=None, attempt_upstream_pair_singular=None, verbose=False, **kwargs)[source]

Purpose: Find errors if branches double back by too much

Pseudocode: 1)

neurd.graph_filters.axon_spine_at_intersection_filter(G, limb_obj, attempt_upstream_pair_singular=True, upstream_width_threshold=None, downstream_width_threshold=None, child_skeletal_threshold_total=None, verbose=False, **kwargs)[source]

Purpose: Find error branches if there is an axon spine that is downstream of the upstream branch

Pseudocode: 1) Get all downstream nodes of the upstream branch 2) Find the intersection with the limb axon spines 3) if an axon spine is detected

If attempt_upstream_pair_singular:

Run upstream_pair_singular and return error branches

else:

Return all downstream nodes as errors

Ex:
gf.axon_spine_at_intersection_filter(G, limb_obj=filt_neuron.axon_limb, attempt_upstream_pair_singular=True, verbose=True, **dict())

neurd.graph_filters.axon_webbing_filter(G, limb_obj, child_width_maximum=None, parent_width_maximum=None, child_skeletal_threshold=None, child_skeletal_threshold_total=None, child_angle_max=None, web_size_threshold=None, web_size_type='ray_trace_median', web_above_threshold=None, verbose=False, attempt_upstream_pair_singular=None, error_on_web_none=False, **kwargs)[source]

Purpose: To find the error branches from the axon webbing filter (no valid webbing if the children branches form a wide-angle t and the parent width is low)

Pseudocode: 1) Motif checking 2) Check that downstream branches are connected 3) Checking the webbing 4) If Invalid webbing return the error branches

Ex:
from neurd import graph_filters as gf
gf.axon_webbing_filter(G, limb_obj, verbose=True, child_angle_max=40, child_width_maximum=90, web_size_threshold=300)

neurd.graph_filters.fork_divergence_filter(G, limb_obj, downstream_width_max=None, upstream_width_max=None, total_downstream_skeleton_length_threshold=None, individual_branch_length_threshold=None, divergence_threshold_mean=None, attempt_upstream_pair_singular=False, comparison_distance=None, verbose=False, **kwargs)[source]

Purpose: Find error branches if there is a forking that is too close to each other

Pseudocode: 1) Get all downstream nodes of the upstream branch 3) if an axon spine is detected

If attempt_upstream_pair_singular:

Run upstream_pair_singular and return error branches

else:

Return all downstream nodes as errors

Ex:
gf.fork_divergence_filter(G, limb_obj=filt_neuron.axon_limb, attempt_upstream_pair_singular=True, verbose=True, **dict())

neurd.graph_filters.fork_min_skeletal_distance_filter(G, limb_obj, downstream_width_max=None, upstream_width_max=None, total_downstream_skeleton_length_threshold=None, individual_branch_length_threshold=None, min_distance_threshold=None, attempt_upstream_pair_singular=False, verbose=False, **kwargs)[source]

Purpose: Find error branches if there is a forking that is too close to each other

Pseudocode: 1) Get all downstream nodes of the upstream branch 2) If an axon spine is detected:

If attempt_upstream_pair_singular:

Run upstream_pair_singular and return error branches

else:

Return all downstream nodes as errors

Ex: gf.axon_spine_at_intersection_filter(
    G,
    limb_obj=filt_neuron.axon_limb,
    attempt_upstream_pair_singular=True,
    verbose=True,
    **dict(),
)

neurd.graph_filters.graph_filter_adapter(G, limb_obj, motif, graph_filter_func, attempt_upstream_pair_singular=False, verbose=False, **kwargs)[source]

Purpose: To apply a graph filter to a specific local graph by 1) Determining if the graph filter should be run on this local graph 2) If it should be applied, run the graph filter and determine if there are any branches that should be errored out

Pseudocode: 1) Take the motif 2) Run the motif query on the local graph 3) Send the graph and the limb to the filter function to get a True/False 4) If True, optionally run the upstream pair singular
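As a rough illustration of this adapter pattern, here is a standalone sketch (the toy graph, the predicate, and the `graph_filter_adapter_sketch` helper are all hypothetical stand-ins, not the NEURD API):

```python
# Generic sketch of the adapter described above: run a filter only when
# the local graph matches a motif predicate, else flag nothing.
def graph_filter_adapter_sketch(graph, motif_predicate, graph_filter_func):
    """Return error branches if the motif matches, else an empty list."""
    if not motif_predicate(graph):       # 1-3) does the motif apply here?
        return []
    return graph_filter_func(graph)      # 4) run the filter for error branches

# Hypothetical usage: flag all leaf nodes of graphs that contain a fork.
graph = {"root": ["a", "b"], "a": [], "b": []}
has_fork = lambda g: any(len(children) > 1 for children in g.values())
leaves = lambda g: [n for n, children in g.items() if not children]
print(graph_filter_adapter_sketch(graph, has_fork, leaves))  # ['a', 'b']
```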

neurd.graph_filters.min_synapse_dist_to_branch_point_filter(G, limb_obj, attempt_upstream_pair_singular=True, upstream_width_threshold=None, downstream_width_threshold=None, min_synape_dist=None, verbose=False, **kwargs)[source]

Purpose: Find error branches if there is a synapse at the intersection

Pseudocode: 1) Find the min distance of a synapse to the branching point 2) If the min distance of a synapse is less than the threshold:

If attempt_upstream_pair_singular:

Run upstream_pair_singular and return error branches

else:

Return all downstream nodes as errors

Ex: gf.min_synapse_dist_to_branch_point_filter(G, limb_obj, verbose=True)

neurd.graph_filters.thick_t_filter(G, limb_obj, parent_width_maximum=None, min_child_width_max=None, child_skeletal_threshold=None, child_skeletal_threshold_total=None, child_angle_max=None, attempt_upstream_pair_singular=False, verbose=False, **kwargs)[source]

Purpose: To find the error branches from the axon thick-T filter (wide-angle children)

Example: gf.thick_t_filter(
    G,
    limb_obj,
    verbose=True,
    parent_width_maximum=110,
    child_angle_max=150,
)

neurd.graph_filters.upstream_pair_singular(limb_obj, G=None, upstream_branch=None, downstream_branches=None, plot_starting_branches=False, offset=1000, comparison_distance=2000, plot_extracted_skeletons=False, worst_case_sk_angle_match_threshold=65, width_diff_max=75, width_diff_perc=0.6, perform_synapse_filter=True, synapse_density_diff_threshold=0.00015, n_synapses_diff_threshold=6, plot_G_local_edge=False, perform_global_edge_filter=True, sk_angle_match_threshold=45, sk_angle_buffer=27, width_diff_perc_threshold=0.15, width_diff_perc_buffer=0.3, plot_G_global_edge=False, perform_node_filter=False, use_exclusive_partner=True, plot_G_node_edge=False, kiss_check=False, kiss_check_bbox_longest_side_threshold=450, plot_final_branch_matches=False, match_method='all_error_if_not_one_match', verbose=False)[source]

Purpose: To pair the upstream branch with a possible match with a downstream branch

Pseudocode: 1) Use local edge filters 2) Use global edge filters 3) Perform node filters if requested 4) If the upstream and downstream node are alone in the same component then there is a pairing; if not, return no pairing

Ex: from datasci_tools import networkx_utils as xu
import matplotlib.pyplot as plt
import networkx as nx

ed.upstream_pair_singular(
    G=G_saved,
    limb_obj=filt_neuron.axon_limb,
    upstream_branch=65,
    downstream_branches=[30, 47],
)

neurd.graph_filters.wide_angle_t_motif(child_width_maximum=100000, child_width_minimum=0, parent_width_maximum=75, child_skeletal_threshold=10000, child_skeletal_threshold_total=0, child_angle_max=40)[source]

neurd.h01_volume_utils module

class neurd.h01_volume_utils.DataInterface(**kwargs)[source]

Bases: DataInterface

__init__(**kwargs)[source]
align_array(*args, **kwargs)[source]
align_mesh(*args, **kwargs)[source]
align_neuron_obj(*args, **kwargs)[source]
align_skeleton(*args, **kwargs)[source]
segment_id_to_synapse_dict(segment_id=None, synapse_filepath=None, **kwargs)[source]
unalign_neuron_obj(*args, **kwargs)[source]
neurd.h01_volume_utils.align_array(array, soma_center=None, rotation=None, align_matrix=None, verbose=False, **kwargs)[source]

Purpose: Will align a coordinate or skeleton (or any array) with the rotation matrix determined from the soma center

neurd.h01_volume_utils.align_attribute(obj, attribute_name, soma_center=None, rotation=None, align_matrix=None)[source]
neurd.h01_volume_utils.align_matrix_from_rotation(upward_vector=None, rotation=None, **kwargs)[source]
neurd.h01_volume_utils.align_matrix_from_soma_coordinate(soma_center, verbose=False, **kwargs)[source]

Purpose: To align a mesh by a soma coordinate

Ex: # rotating the mesh
nviz.plot_objects(align_mesh_from_soma_coordinate(mesh, soma_center=soma_mesh_center))

neurd.h01_volume_utils.align_mesh(mesh, soma_center=None, rotation=None, align_matrix=None, verbose=False, **kwargs)[source]

Purpose: To align a mesh by a soma coordinate

Ex: # rotating the mesh
nviz.plot_objects(align_mesh_from_soma_coordinate(mesh, soma_center=soma_mesh_center))

neurd.h01_volume_utils.align_mesh_from_rotation(mesh, align_mat=None, upward_vector=None, rotation=None, verbose=False, **kwargs)[source]

Need a better version of rotation

neurd.h01_volume_utils.align_neuron_obj(neuron_obj, mesh_center=None, rotation=None, align_matrix=None, in_place=False, verbose=False, plot_final_neuron=False, align_synapses=True, **kwargs)[source]

Purpose: To rotate all of the meshes and skeletons of a neuron object

Ex: neuron_obj_rot = copy.deepcopy(neuron_obj)
mesh_center = neuron_obj["S0"].mesh_center
for i in range(0, 10):
    neuron_obj_rot = align_neuron_obj(neuron_obj_rot, mesh_center=mesh_center, verbose=True)

nviz.visualize_neuron(neuron_obj_rot, limb_branch_dict="all")

neurd.h01_volume_utils.align_neuron_obj_from_align_matrix(neuron_obj, align_matrix)[source]
neurd.h01_volume_utils.align_skeleton(array, soma_center=None, rotation=None, align_matrix=None, verbose=False, **kwargs)[source]
neurd.h01_volume_utils.aligning_matrix_3D(upward_vector=array([0.85082648, -0.52544676, 0.]), target_vector=array([0, -1, 0]), rotation=None, verbose=False)[source]

Will come up with an alignment matrix

neurd.h01_volume_utils.radius_for_rotation_from_proj_magn(magn)[source]
neurd.h01_volume_utils.rotate_mesh_from_matrix(mesh, matrix)[source]
neurd.h01_volume_utils.rotation_from_proj_error_and_radius(proj_error, radius_for_rotation, max_rotation=-30, verbose=False)[source]

Purpose: To calculate the amount of rotation necessary based on the current radius of rotation and error magnitude of the projection

neurd.h01_volume_utils.rotation_from_soma_center(soma_center, verbose=False, **kwargs)[source]

Purpose: To get the amount of rotation necessary from the soma center of the neuron

neurd.h01_volume_utils.rotation_signed_from_middle_vector(coordinate, origin_coordinate=array([1041738.17659344, 1785911.29763922, 125032.57443884]), middle_vector=array([1808188.88892619, -654206.39541785, 0.]), zero_out_z_coord=True, verbose=False)[source]

Purpose: Determine the direction and amount of rotation needed for a neuron based on the location of the soma

Pseudocode: 1) Compute the new relative vector to starting vector 2) Find the magnitude of projection of new point onto upward middle vector non scaled 3) Use the magnitude of the projection to find the slope of the rotation function 4) Find the error distance between point and projection distance 5) Determine the amount of rotation needed based on radius and error projection magnitude 6) Determine the sign of the rotation

Ex: rotation_signed_from_middle_vector(
    coordinate=orienting_coords["bottom_far_right"],
    verbose=True,
)
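The projection arithmetic in the pseudocode above can be sketched standalone with NumPy. All values, the linear rotation scaling, and the `signed_rotation_sketch` helper are illustrative assumptions, not the calibrated H01 constants:

```python
import numpy as np

# Sketch of the steps: relative vector, projection onto the middle vector,
# error distance, rotation amount scaled by the error, sign from the cross product.
def signed_rotation_sketch(coordinate, origin, middle_vector, max_rotation=-30):
    rel = np.asarray(coordinate, float) - np.asarray(origin, float)  # 1) relative vector
    rel[2] = 0.0                                                     # zero_out_z_coord
    unit = middle_vector / np.linalg.norm(middle_vector)
    proj_magn = float(rel @ unit)                                    # 2) projection magnitude
    error = float(np.linalg.norm(rel - proj_magn * unit))            # 4) error distance
    radius = abs(proj_magn)                                          # 3) radius from projection
    rotation = max_rotation * min(error / radius, 1.0) if radius else 0.0  # 5) amount
    sign = np.sign(np.cross(unit, rel)[2])                           # 6) direction
    return sign * rotation

origin = np.zeros(3)
middle = np.array([0.0, -1.0, 0.0])
print(signed_rotation_sketch([1.0, -1.0, 0.0], origin, middle))  # -30.0
```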

neurd.h01_volume_utils.unalign_neuron_obj(neuron_obj, align_attribute='align_matrix', verbose=False, plot_final_neuron=False, **kwargs)[source]

neurd.limb_utils module

neurd.limb_utils.all_paths_to_leaf_nodes(limb_obj, verbose=False)[source]
neurd.limb_utils.best_feature_match_in_descendents(limb, branch_idx, feature, verbose=True)[source]
neurd.limb_utils.children_skeletal_angle(limb_obj, branch_idx, nodes_idx=None, default_value=None, verbose=False, **kwargs)[source]
neurd.limb_utils.children_skeletal_angle_max(limb_obj, branch_idx, **kwargs)[source]
neurd.limb_utils.children_skeletal_angle_min(limb_obj, branch_idx, **kwargs)[source]
neurd.limb_utils.most_upstream_endpoints_of_limb_branch(neuron_obj, limb_branch_dict, verbose=False, verbose_most_upstream=False, plot=False, return_array=False, group_by_conn_comp=True, include_downstream_endpoint=True)[source]

Pseudocode:

Ex: lu.most_upstream_endpoints_of_limb_branch_conn_comp(
    neuron_obj,
    limb_branch_dict=dict(L1=[1, 2], L2=[19, 16]),
    verbose=False,
    verbose_most_upstream=False,
    plot=False,
    return_array=True,
)

neurd.limb_utils.most_usptream_endpoints_of_branches_on_limb(limb_obj, branches_idx, verbose=False, plot=False, scatter_size=0.5, group_by_conn_comp=True, include_downstream_endpoint=True, **kwargs)[source]

Purpose: To get all of the upstream endpoints of the connected components of a list of branches

Pseudocode: 1) Get the connected components of the branches 2) For each connected component find:

  1. the most upstream branch

  2. the upstream coordinate for that branch (which could be offset slightly from the upstream branch to prevent overlap)
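The grouping step above can be sketched standalone; the adjacency and depth inputs and the `most_upstream_per_component` helper are hypothetical stand-ins for the limb object, not the NEURD implementation:

```python
# Split a set of branch indices into connected components of the limb
# graph, then pick the shallowest (most upstream) branch of each.
def most_upstream_per_component(branches, adjacency, depth):
    branches, seen, result = set(branches), set(), []
    for b in branches:
        if b in seen:
            continue
        stack, comp = [b], []                  # 1) flood-fill one component
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.append(n)
            stack += [m for m in adjacency.get(n, []) if m in branches]
        result.append(min(comp, key=depth.get))  # 2) most upstream branch
    return sorted(result)

# Hypothetical limb: branches {1,2} and {16,19} form two components.
adjacency = {1: [2], 2: [1], 16: [19], 19: [16]}
depth = {1: 0, 2: 1, 16: 3, 19: 2}
print(most_upstream_per_component([1, 2, 16, 19], adjacency, depth))  # [1, 19]
```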

neurd.limb_utils.parent_skeletal_angle(limb_obj, branch_idx, verbose=False, default_value=None, **kwargs)[source]

Purpose: to get the branching angle with parent from the skeleton vectors

Pseudocode: 1) Get parent branch 2) Get parent and child vector 3) Get the angle between the two

Ex: from neurd import limb_utils as lu
lu.parent_skeletal_angle(branch_idx=2, limb_obj=neuron_obj[1], verbose=True)
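Step 3 of the pseudocode is just the angle between two vectors; a minimal standalone sketch (not the NEURD implementation):

```python
import numpy as np

# Angle in degrees between a parent and child skeleton vector.
def skeletal_angle(parent_vec, child_vec):
    p = np.asarray(parent_vec, float)
    c = np.asarray(child_vec, float)
    # clip guards against floating-point values just outside [-1, 1]
    cos = np.clip(p @ c / (np.linalg.norm(p) * np.linalg.norm(c)), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

print(skeletal_angle([0, 1, 0], [1, 1, 0]))  # ~45 degrees
```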

neurd.limb_utils.relation_skeletal_angle(limb_obj, branch_idx, relation, nodes_idx=None, default_value=None, verbose=False, extrema_value=None, return_dict=True, **kwargs)[source]

Purpose: To find the sibling angles with all siblings

neurd.limb_utils.root_skeleton_vector_from_soma(neuron_obj, limb_idx, soma_name='S0', normalize=True)[source]
neurd.limb_utils.root_width(limb_obj)[source]
neurd.limb_utils.siblings_skeletal_angle(limb_obj, branch_idx, sibling_idx=None, default_value=None, verbose=False, **kwargs)[source]
neurd.limb_utils.siblings_skeletal_angle_max(limb_obj, branch_idx, **kwargs)[source]
neurd.limb_utils.siblings_skeletal_angle_min(limb_obj, branch_idx, **kwargs)[source]
neurd.limb_utils.skeletal_angles_df(neuron_obj, functions_list=(<function set_limb_functions_for_search.<locals>.make_func.<locals>.dummy_func>, <function set_limb_functions_for_search.<locals>.make_func.<locals>.dummy_func>, <function set_limb_functions_for_search.<locals>.make_func.<locals>.dummy_func>))[source]
neurd.limb_utils.width_upstream(limb_obj, branch_idx, verbose=False, min_skeletal_length=2000, skip_low_skeletal_length_upstream=True, default_value=10000000)[source]

Purpose: To get the width of the upstream segment

Pseudocode: 1) Get the parent node 2) Get the parent width

Ex: from neurd import limb_utils as lu
lu.width_upstream(neuron_obj[1], 5, verbose=True)

neurd.microns_graph_query_utils module

To help query the graph object and do visualizations

neurd.microns_graph_query_utils.excitatory_cells_node_df(G=None, node_df=None, **kwargs)[source]
neurd.microns_graph_query_utils.inhibitory_cells_node_df(G=None, node_df=None, **kwargs)[source]
neurd.microns_graph_query_utils.load_edge_df(filepath=None)[source]
neurd.microns_graph_query_utils.load_node_df(filepath=None)[source]
neurd.microns_graph_query_utils.n_excitatory_n_inhibitory_nodes(G=None, node_df=None, verbose=False)[source]
neurd.microns_graph_query_utils.node_df_from_attribute_value(attribute_type=None, attribute_value=None, query=None, G=None, node_df=None, **kwargs)[source]
neurd.microns_graph_query_utils.node_df_from_query(query, G=None, node_df=None, verbose=False, **kwargs)[source]

Purpose: Will return the number of

neurd.microns_graph_query_utils.soma_centers_from_node_df(node_df)[source]
neurd.microns_graph_query_utils.soma_centers_from_node_query(query, G=None, node_df=None, verbose=False, return_query_df=False)[source]

Purpose: To query the nodes of the graph and return the soma centers

Pseudocode: 1) apply query to the node df 2) export the soma centers of the query 3) return the queried table if requested

Ex: mqu.soma_centers_from_node_query(
    query="cell_type == 'inhibitory'",
    # G=G,
    node_df=node_df,
    verbose=True,
    return_query_df=False,
)

neurd.microns_volume_utils module

How this list was easily generated

class neurd.microns_volume_utils.DataInterface(**kwargs)[source]

Bases: DataInterface

__init__(**kwargs)[source]
align_array(*args, **kwargs)[source]
align_mesh(*args, **kwargs)[source]
align_neuron_obj(*args, **kwargs)[source]
align_skeleton(*args, **kwargs)[source]
segment_id_to_synapse_dict(segment_id=None, synapse_filepath=None, **kwargs)[source]
unalign_neuron_obj(*args, **kwargs)[source]
neurd.microns_volume_utils.EM_coordinates_to_layer(coordinates)[source]

Purpose: To convert the y value of the EM coordinate(s) to the layer in the volume it is located
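A minimal sketch of this kind of y-to-layer binning (the boundary values and layer names below are made up for illustration; the real ones are volume-specific and live inside NEURD):

```python
import numpy as np

# Hypothetical layer boundaries along y (nm), lowest to highest.
LAYER_BOUNDS_Y = np.array([100_000, 280_000, 400_000, 550_000, 700_000])
LAYER_NAMES = ["L1", "L2/3", "L4", "L5", "L6", "WM"]

def layer_from_y(y_coords):
    # searchsorted returns the bin index of each y value
    idx = np.searchsorted(LAYER_BOUNDS_Y, np.atleast_1d(y_coords))
    return [LAYER_NAMES[i] for i in idx]

print(layer_from_y([50_000, 300_000, 800_000]))  # ['L1', 'L4', 'WM']
```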

neurd.microns_volume_utils.EM_coordinates_to_visual_areas(coordinates)[source]

Purpose: To use the boundary points to classify a list of points (usually representing soma centroids) in visual area classification (V1,AL,RL)

Ex: centroid_x, centroid_y, centroid_z = minnie.AutoProofreadNeurons3.fetch("centroid_x", "centroid_y", "centroid_z")
soma_centers = np.vstack([centroid_x, centroid_y, centroid_z]).T
mru.EM_coordinates_to_visual_areas(soma_centers)

neurd.microns_volume_utils.add_node_attributes_to_proofread_graph(G, neuron_data_df, attributes=None, add_visual_area=True, debug=False)[source]

Pseudocode: 1) Download all of the attributes to store in the nodes 2) Create a dictionary mapping the nuclei to a dict of attribute values 3) Set the attributes of the original graph

neurd.microns_volume_utils.align_array(array, soma_center=None, verbose=False)[source]
neurd.microns_volume_utils.align_mesh(mesh, soma_center=None, verbose=False)[source]
neurd.microns_volume_utils.align_neuron_obj(neuron_obj, **kwargs)[source]
neurd.microns_volume_utils.align_skeleton(skeleton, soma_center=None, verbose=False)[source]
neurd.microns_volume_utils.coordinates_to_layer_height(coordinates, turn_negative=True)[source]
neurd.microns_volume_utils.distance_from_microns_volume_bbox_midpoint(coordinates)[source]
neurd.microns_volume_utils.em_alignment_coordinates_info(return_nm=True)[source]

Purpose: To get the center points and max points and all the labels associated

Pseudocode: 1) Get the center,max,min,min anat and max anat

neurd.microns_volume_utils.em_alignment_data_raw(return_dict=True)[source]
neurd.microns_volume_utils.em_voxels_to_nm(data)[source]
neurd.microns_volume_utils.layer_from_em_centroid_xyz(row)[source]
neurd.microns_volume_utils.microns_volume_bbox_corners(return_nm=True)[source]
neurd.microns_volume_utils.microns_volume_bbox_midpoint(return_nm=True)[source]
neurd.microns_volume_utils.neuron_soma_layer_height(neuron_obj, soma_name='S0')[source]
neurd.microns_volume_utils.nm_to_em_voxels(data)[source]
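The two voxel/nm conversions above are element-wise scalings; a standalone sketch (the per-axis resolution below is an assumption for illustration, not necessarily the actual MICrONS voxel size):

```python
import numpy as np

# Assumed nm-per-voxel resolution along x, y, z.
VOXEL_TO_NM = np.array([4, 4, 40])

def em_voxels_to_nm_sketch(data):
    # scale voxel coordinates up to nanometers, axis by axis
    return np.asarray(data) * VOXEL_TO_NM

def nm_to_em_voxels_sketch(data):
    # inverse: scale nanometer coordinates down to voxels
    return np.asarray(data) / VOXEL_TO_NM

pt = np.array([100, 200, 10])
print(em_voxels_to_nm_sketch(pt))  # [ 400  800  400]
```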
neurd.microns_volume_utils.plot_visual_area_xz_projection(region_names=['V1', 'RL', 'AL'], region_colors=['Blues', 'Greens', 'Reds'], verbose=False)[source]

Purpose: To plot the triangulation used for the regions of the visual areas

Example: import microns_utils as mru
mru.plot_visual_area_xz_projection(verbose=True)

neurd.microns_volume_utils.soma_distances_from_microns_volume_bbox_midpoint(neuron_obj, return_dict=True)[source]

Purpose: To return the distance of each soma from the middle of the volume

Ex: mru.soma_distances_from_microns_volume_bbox_midpoint(neuron_obj, return_dict=False)

neurd.microns_volume_utils.unalign_neuron_obj(neuron_obj, **kwargs)[source]
neurd.microns_volume_utils.visual_area_from_em_centroid_xyz(row)[source]

neurd.motif_null_utils module

neurd.motif_utils module

To help analyze the motifs found using the dotmotif package from a connectome dataset

neurd.motif_utils.annotated_motif_df(G, motif, node_attributes=('external_layer', 'external_visual_area', 'gnn_cell_type_fine', 'gnn_cell_type_fine_prob', 'gnn_cell_type', 'skeletal_length'), edge_attributes=('postsyn_compartment',), n_samples=None, verbose=False, filter_df=True, motif_reduction=True, add_counts=True, motif_dicts=None, matches=None, additional_node_attributes=None)[source]

Purpose: To add all of the features to the motifs

Ex: from neurd import motif_utils as mfu

G = vdi.G_auto_DiGraph

mfu.annotated_motif_df(
    motif="A->B;B->A",
    G=vdi.G_auto_DiGraph,
    n_samples=None,
    verbose=False,
)

neurd.motif_utils.counts_df_from_motif_df(motif_df, motif_column='motif_str')[source]
neurd.motif_utils.dotmotif_str_from_G_motif(G, node_attributes=None, edge_attributes=('postsyn_compartment',), verbose=False)[source]

Purpose: To convert a graph to a string representation to be used as an identifier

Pseudocode: 1) Gather the node attributes for each of the nodes (order by identifier and order the attributes) 2) Gather the edge attributes

Ex: mfu.set_compartment_flat(curr_G)
mfu.str_from_G_motif(
    curr_G,
    node_attributes=("gnn_cell_type_fine",),
    edge_attributes=["postsyn_compartment_flat"],
)

Ex: dotmotif_str_from_G_motif(curr_G, node_attributes=("gnn_cell_type_fine",))

neurd.motif_utils.edges_from_motif_dict(motif_dict, return_dict=False, return_node_mapping=False, verbose=True)[source]

Purpose: To get a list of the edges represented by the motif

Pseudocode: 1) Get a mapping of the nodes 2) Query the dotmotif for the edge definitions 3) For each of the groups found substitute in the node name

Ex: from datasci_tools import networkx_utils as xu
import networkx as nx

G = vdi.G_auto_DiGraph
motif_info = motif_dicts[20000]

edges = mfu.edges_from_motif_dict(motif_info, return_dict=False, verbose=True)

sub_G = xu.subgraph_from_edges(G, edges)
nx.draw(sub_G, with_labels=True)

motif_nodes_from_motif

neurd.motif_utils.edges_from_str(string, verbose=False, return_edge_str=False)[source]
neurd.motif_utils.filter_G_attributes(G, node_attributes=('gnn_cell_type_fine', 'cell_type', 'external_layer', 'external_visual_area', 'manual_cell_type_fine', 'identifier'), edge_attributes=('postsyn_compartment_coarse', 'postsyn_compartment_fine', 'presyn_skeletal_distance_to_soma', 'postsyn_skeletal_distance_to_soma'))[source]
neurd.motif_utils.filter_motif_df(df, node_filters=None, min_gnn_probability=None, edges_filters=None, single_edge_motif=False, cell_type_fine_exclude=None, verbose=False)[source]

Purpose: To restrict a motif with node and edge requirements

Ex: from neurd import motif_utils as mfu

G = vdi.G_auto_DiGraph

unique_df = mfu.annotated_motif_df(
    motif="A->B;B->A",
    G=vdi.G_auto_DiGraph,
    n_samples=None,
    verbose=False,
)

mfu.filter_motif_df(
    unique_df,
    min_gnn_probability=0.5,
    edges_filters=["edge_postsyn_compartment == 'soma'"],
)

neurd.motif_utils.motif_G(G, motif_dict, plot=False, verbose=False, **kwargs)[source]

Purpose: To form a graph data structure representing the motif

Pseudocode: 1) Restrict the graph to a subgraph based on the motif 2) Filter the node attributes and edge attributes to only those specified

Ex: curr_G = motif_G(G, motif_info, plot=True)

neurd.motif_utils.motif_column_mapping(df, mapping)[source]

Purpose: Want to rename certain columns to different characters so everything matches

Columns to rename are very constrained: [name]… or [name]->[name]…

Pseudocode:

neurd.motif_utils.motif_data(G, motif_dict, cell_type_kind='gnn_cell_type_fine', include_layer=True, include_visual_area=True, include_node_identifier=True, include_edges_in_name=True, include_compartment=True, edge_attributes=('presyn_soma_postsyn_soma_euclid_dist', 'presyn_soma_postsyn_soma_skeletal_dist', 'presyn_skeletal_distance_to_soma', 'presyn_soma_euclid_dist', 'postsyn_skeletal_distance_to_soma', 'postsyn_soma_euclid_dist', 'synapse_id'), node_attributes=('skeletal_length', 'external_manual_proofread', 'gnn_cell_type_fine_prob', 'gnn_cell_type'), node_attributes_additional=None, return_str=False, verbose=True)[source]

Purpose: Convert a graph into a string representation to be indexed (used as an identifier)

2 possible representations: 1) List all cell types, then all downstream compartments 2) List presyn cell type, downstream cell type, compartment

Pseudocode: 1) Get node mapping and presyns associated 2) Get all of the edges in the graph 3) Construct a list of a identifier, identifier_2, compartment 4) Make name cell type(id), cell type 2 (id2)….: id1id2(comp)….

neurd.motif_utils.motif_dicts_from_motif_from_database(motif)[source]
neurd.motif_utils.motif_nodes_from_motif(motif, only_upper=True, verbose=False, return_n_nodes=False)[source]

Purpose: Determine the number of nodes (and what their names are) from a motif string

Pseudocode: 1) Look for all upper case letters where there is other words before or after 2) Order the pairs found 3) Can return the length of the dictionary or just the number
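The node-extraction step described above can be sketched with a regular expression (a standalone illustration; `motif_nodes_sketch` is not the NEURD implementation):

```python
import re

# Find the single upper-case letters used as node names in a dotmotif
# string, ordered; optionally return just the count.
def motif_nodes_sketch(motif, return_n_nodes=False):
    nodes = sorted(set(re.findall(r"\b([A-Z])\b", motif)))  # 1-2) find + order
    return len(nodes) if return_n_nodes else nodes          # 3) names or count

print(motif_nodes_sketch("A->B;B->A"))                       # ['A', 'B']
print(motif_nodes_sketch("A->B;B->C", return_n_nodes=True))  # 3
```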

neurd.motif_utils.motif_segment_df_from_motifs(motifs, return_df=True, motif=None, graph_type='DiGraph')[source]

Purpose: Turn the motif results (where each motif result is in the form of a dictionary A: "segment_split", B: "segment_split") into a dataframe or dictionaries.

The returned dictionary or dataframe has keys like a_segment_id, a_split_index, b_segment_id, …

neurd.motif_utils.n_nodes_from_motif(motif, only_upper=True, verbose=False)[source]
neurd.motif_utils.node_attributes_from_G(G, features=None, features_to_ignore=None, features_order=('gnn_cell_type_fine', 'external_layer', 'external_visual_area'))[source]
neurd.motif_utils.node_attributes_strs(G, joining_str='/', node_attributes=None, verbose=False)[source]

Purpose: To get a list of strings representing the node attributes (that could then be used as a set for comparisons)

Pseudocode: 1) Get the node attributes you want to output

neurd.motif_utils.nodes_from_motif_dict(motif_dict, return_dict=True, verbose=False)[source]

Purpose: To extract the node names from the motif dict

Pseudocode: 1) get all of the keys with segment id in them 2) sort them 3) iterate and get the segment id and split index and put into dict
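A standalone sketch of the key parsing described above, on a hypothetical motif dict (the key schema is assumed for illustration and may differ from the real one):

```python
# 1) find keys ending in "_segment_id"  2) sort  3) build node -> (segment, split)
def nodes_from_motif_dict_sketch(motif_dict):
    names = sorted(k[: -len("_segment_id")] for k in motif_dict
                   if k.endswith("_segment_id"))
    return {n: (motif_dict[f"{n}_segment_id"],
                motif_dict.get(f"{n}_split_index", 0))
            for n in names}

motif = {"a_segment_id": 864691135, "a_split_index": 0,
         "b_segment_id": 864691999, "b_split_index": 1}
print(nodes_from_motif_dict_sketch(motif))
# {'a': (864691135, 0), 'b': (864691999, 1)}
```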

neurd.motif_utils.nodes_from_str(string)[source]
neurd.motif_utils.nodes_mapping_from_G(G)[source]

Purpose: Get the node mapping

neurd.motif_utils.query_with_edge_col(df, query, edge_delimiter='->')[source]

Purpose: To do an edge query that will 1) rename the column values and 2) rename the query so that both are valid for pandas querying
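A standalone sketch of the renaming described above (pandas `query` cannot parse `->` inside identifiers; `sanitize_edge_names` is a hypothetical helper, not the NEURD code):

```python
# Rewrite both the column names and the query string with a safe token
# in place of the edge delimiter, so pandas can parse them.
def sanitize_edge_names(columns, query, edge_delimiter="->"):
    mapping = {c: c.replace(edge_delimiter, "_to_") for c in columns}
    for old, new in mapping.items():
        query = query.replace(old, new)
    return list(mapping.values()), query

cols, q = sanitize_edge_names(["A->B_postsyn_compartment"],
                              "`A->B_postsyn_compartment` == 'soma'")
print(cols)  # ['A_to_B_postsyn_compartment']
print(q)     # `A_to_B_postsyn_compartment` == 'soma'
```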

neurd.motif_utils.set_compartment_flat(G)[source]
neurd.motif_utils.str_from_G_motif(G, node_attributes=None, edge_attributes=('postsyn_compartment_flat',), verbose=False, joining_str='/', include_edges_in_name=True)[source]

Purpose: To convert a graph to a string representation to be used as an identifier

Pseudocode: 1) Gather the node attributes for each of the nodes (order by identifier and order the attributes) 2) Gather the edge attributes

Ex: mfu.set_compartment_flat(curr_G)
mfu.str_from_G_motif(
    curr_G,
    node_attributes=("gnn_cell_type_fine",),
    edge_attributes=["postsyn_compartment_flat"],
)

neurd.motif_utils.subgraph_from_motif_dict(G, motif_dict, verbose=False, identifier_name=None, plot=False)[source]
neurd.motif_utils.unique_motif_reduction(G, df, column='motif_str', node_attributes=None, edge_attributes=None, verbose=False, debug_time=False, relabel_columns=True)[source]

Pseudocode:
1) Create a dictionary mapping the non-redundant str to dotmotif
2) Find all unique str options
3) For each str option:
  a. Find one occurrence of the str
  b. Convert it to a graph object
  c. Iterate through all non-redundant keys and do a dotmotif search:
    1. if not found, continue down the list
    2. if found, make this the non-redundant name and add to the dict
4) Use the non-redundant dict to create new columns
5) Find the count of all non-redundant motifs and sort from greatest to least
6) Plot the first x number of motifs

neurd.motif_utils.visualize_graph_connections(G, key, verbose=True, verbose_visualize=False, restrict_to_synapse_ids=True, method='neuroglancer', **kwargs)[source]

Purpose: To visualize the motif connection from an entry in a motif dataframe

Pseudocode: 1) Turn entry into dict if not 2) Get the node names for the motif 3) Get the synapse ids 4) Plot the connections

neurd.nature_paper_plotting module

neurd.nature_paper_plotting.example_histogram_nice(spine_df)[source]
neurd.nature_paper_plotting.example_kde_plot(spine_df)[source]
neurd.nature_paper_plotting.plot_edit_labels_subset(edits_df, edit_labels, x='x', y='y', fontsize_axes=40, fontsize_ticks=25, bins=100, color=(0.00392156862745098, 0.45098039215686275, 0.6980392156862745), isotropic=False)[source]

neurd.neuron module

class neurd.neuron.Branch(skeleton, width=None, mesh=None, mesh_face_idx=None, labels=[])[source]

Bases: object

Class that will hold one continuous skeleton piece that has no branching

__init__(skeleton, width=None, mesh=None, mesh_face_idx=None, labels=[])[source]
property area

dictionary mapping the index to the

property axon_compartment
calculate_endpoints()[source]
property compartment
compute_boutons_volume(max_hole_size=2000, self_itersect_faces=False)[source]
compute_spines_volume(max_hole_size=2000, self_itersect_faces=False)[source]
property endpoint_downstream
property endpoint_downstream_with_offset
property endpoint_downstream_x
property endpoint_downstream_y
property endpoint_downstream_z
property endpoint_upstream
property endpoint_upstream_with_offset
property endpoint_upstream_x
property endpoint_upstream_y
property endpoint_upstream_z
property endpoints_nodes
property mesh_center_x
property mesh_center_y
property mesh_center_z
property mesh_shaft
property mesh_shaft_idx
property mesh_volume
property min_dist_synapses_post_downstream
property min_dist_synapses_post_upstream
property min_dist_synapses_pre_downstream
property min_dist_synapses_pre_upstream
property n_boutons
property n_spines
property n_synapses
property n_synapses_head
property n_synapses_neck
property n_synapses_no_head
property n_synapses_post
property n_synapses_pre
property n_synapses_shaft
property n_synapses_spine
property n_web
order_skeleton_by_smallest_endpoint()[source]
property skeletal_coordinates_dist_upstream_to_downstream
property skeletal_coordinates_upstream_to_downstream
property skeletal_length
property skeletal_length_eligible
property skeleton_graph
property skeleton_vector_downstream

the skeleton vector near the downstream coordinate, where the vector is oriented in the skeletal-walk direction away from the soma

property skeleton_vector_upstream

the skeleton vector near the upstream coordinate, where the vector is oriented in the skeletal-walk direction away from the soma

property spine_density
property spine_volume_density
property spine_volume_median
property synapse_density
property synapse_density_post
property synapse_density_pre
property synapses_head
property synapses_neck
property synapses_no_head
property synapses_post
property synapses_pre
property synapses_shaft
property synapses_spine
property total_spine_volume
property width_array_skeletal_lengths_upstream_to_downstream
property width_array_upstream_to_downstream
property width_downstream
property width_overall
property width_upstream
class neurd.neuron.Limb(mesh, curr_limb_correspondence=None, concept_network_dict=None, mesh_face_idx=None, labels=None, branch_objects=None, deleted_edges=None, created_edges=None, verbose=False)[source]

Bases: object

Class that will hold one continuous skeleton piece (called a limb)

3) Limb Process: For each limb made:

  a. Build all the branches from the

    • mesh

    • skeleton

    • width

    • branch_face_idx

  b. Pick the top concept graph (will use to store the nodes)

  c. Put the branches as "data" in the network

  d. Get all of the starting coordinates and starting edges and put as member attributes in the limb

__init__(mesh, curr_limb_correspondence=None, concept_network_dict=None, mesh_face_idx=None, labels=None, branch_objects=None, deleted_edges=None, created_edges=None, verbose=False)[source]

Allow for an initialization of a limb with another limb object

Parts that need to be copied over: 'all_concept_network_data', 'concept_network', 'concept_network_directional', 'current_starting_coordinate', 'current_starting_endpoints', 'current_starting_node', 'current_starting_soma', 'label', 'mesh', 'mesh_center', 'mesh_face_idx'

property all_starting_coordinates

will generate the dictionary that is organized soma_idx -> soma_group_idx -> dict(touching_verts, endpoint)

that can be used to generate a concept network from

Type:

Purpose

property all_starting_nodes

will generate the dictionary that is organized soma_idx -> soma_group_idx -> dict(touching_verts, endpoint)

that can be used to generate a concept network from

Type:

Purpose

property area

dictionary mapping the index to the

property boutons
property boutons_volume
property branch_objects

dictionary mapping the index to the

property branches
compute_boutons_volume()[source]
compute_spines_volume()[source]
property concept_network_data_by_soma
property concept_network_data_by_starting_node
convert_concept_network_to_directional(no_cycles=True, width_source=None, print_flag=False, suppress_disconnected_errors=False, convert_concept_network_to_directional_verbose=False)[source]

Example on how it was developed:

from datasci_tools import numpy_dep as np
from datasci_tools import networkx_utils as xu
xu = reload(xu)
import matplotlib.pyplot as plt
from neurd import neuron_utils as nru

curr_limb_idx = 0
no_cycles = True
curr_limb_concept_network = my_neuron.concept_network.nodes[f"L{curr_limb_idx}"]["data"].concept_network
curr_neuron_mesh = my_neuron.mesh
curr_limb_mesh = my_neuron.concept_network.nodes[f"L{curr_limb_idx}"]["data"].mesh
nx.draw(curr_limb_concept_network, with_labels=True)
plt.show()

mesh_widths = dict([(k, curr_limb_concept_network.nodes[k]["data"].width) for k in curr_limb_concept_network.nodes()])

directional_concept_network = nru.convert_concept_network_to_directional(curr_limb_concept_network, no_cycles=True)

nx.draw(directional_concept_network, with_labels=True)
plt.show()

property current_starting_soma_vertices
property divided_skeletons
find_branch_by_skeleton_coordinate(target_coordinate)[source]

Purpose: To be able to find the branch where the skeleton point resides

Pseudocode: For each branch: 1) get the skeleton 2) ravel the skeleton into a numpy array 3) search for that coordinate: if a non-empty list is returned, add the branch to the list
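A standalone sketch of this search loop, with branch skeletons represented as arrays of edges (the data layout and the `branches_containing_coordinate` helper are hypothetical, not the NEURD implementation):

```python
import numpy as np

# Return the indices of all branches whose skeleton contains the target
# coordinate (within floating-point tolerance).
def branches_containing_coordinate(skeletons, target):
    target = np.asarray(target, float)
    hits = []
    for idx, sk in enumerate(skeletons):
        coords = np.asarray(sk, float).reshape(-1, 3)            # 2) ravel to points
        if np.any(np.all(np.isclose(coords, target), axis=1)):   # 3) search
            hits.append(idx)
    return hits

skeletons = [
    [[[0, 0, 0], [1, 0, 0]]],   # branch 0
    [[[1, 0, 0], [2, 0, 0]]],   # branch 1 shares an endpoint with branch 0
]
print(branches_containing_coordinate(skeletons, [1, 0, 0]))  # [0, 1]
```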

get_attribute_dict(attribute_name)[source]
get_branch_names(ordered=True, return_int=True)[source]
get_computed_attribute_data(attributes=['width_array', 'width_array_skeletal_lengths', 'width_new', 'spines_volume', 'boutons_volume', 'labels', 'boutons_cdfs', 'web_cdf', 'web', 'head_neck_shaft_idx', 'spines', 'boutons', 'synapses', 'spines_obj'], one_dict=True, print_flag=False)[source]
get_concept_network_data_by_soma(soma_idx=None)[source]
get_concept_network_data_by_soma_and_idx(soma_idx, soma_group_idx)[source]
get_skeleton(check_connected_component=True)[source]

Purpose: Will return the entire skeleton of all the branches stitched together

get_skeleton_soma_starting_node(soma, print_flag=False)[source]

Purpose: from the all

get_soma_by_starting_node(starting_node, print_flag=False)[source]

Purpose: from the all

get_soma_group_by_starting_node(starting_node, print_flag=False)[source]

Purpose: from the all

get_starting_branch_by_soma(soma, print_flag=False)[source]

Purpose: from the all

property limb_correspondence
property mesh_volume
property n_boutons
property n_branches

number of branches in the limb

property n_spines
property n_synapses
property n_synapses_post
property n_synapses_pre
property n_web
property network_starting_info

will generate the dictionary that is organized soma_idx –> soma_group_idx –> dict(touching_verts,endpoint) that can be used to generate a concept network
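The nested layout can be illustrated with toy data (coordinates are made up; only the key structure follows the docstring above):

```python
import numpy as np

# soma_idx -> soma_group_idx -> dict(touching_verts, endpoint)
network_starting_info = {
    0: {                                   # soma index
        0: {                               # soma group (touch) index
            "touching_verts": np.array([[0.0, 0.0, 0.0],
                                        [1.0, 0.0, 0.0]]),
            "endpoint": np.array([0.5, 0.0, 0.0]),
        },
    },
}

# a concept network could be seeded by walking the nested dict
for soma_idx, groups in network_starting_info.items():
    for group_idx, info in groups.items():
        start = info["endpoint"]  # starting coordinate for this touch
```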

property nodes_to_exclude
set_attribute_dict(attribute_name, attribute_dict, verbose=False)[source]
set_branches_endpoints_upstream_downstream_idx()[source]
set_computed_attribute_data(computed_attribute_data, print_flag=False)[source]
set_concept_network_directional(starting_soma=None, soma_group_idx=0, starting_node=None, print_flag=False, suppress_disconnected_errors=False, no_cycles=True, convert_concept_network_to_directional_verbose=False, **kwargs)[source]

Pseudocode: 1) Get the current concept_network 2) Delete the current starting coordinate 3) Use the all_concept_network_data to find the starting node and coordinate for the starting soma specified 4) set the starting coordinate of that node 5) rerun the convert_concept_network_to_directional and set the output to the self attribute Using: self.concept_network_directional = self.convert_concept_network_to_directional(no_cycles = True)

Example:

from neurd import neuron_visualizations as nviz

curr_limb_obj = recovered_neuron.concept_network.nodes["L1"]["data"]
print(xu.get_starting_node(curr_limb_obj.concept_network_directional))
print(curr_limb_obj.current_starting_coordinate)
print(curr_limb_obj.current_starting_node)
print(curr_limb_obj.current_starting_endpoints)
print(curr_limb_obj.current_starting_soma)

nviz.plot_concept_network(curr_limb_obj.concept_network_directional,
                          arrow_size=5, scatter_size=3)

curr_limb_obj.set_concept_network_directional(starting_soma=1,print_flag=False)

print(xu.get_starting_node(curr_limb_obj.concept_network_directional))
print(curr_limb_obj.current_starting_coordinate)
print(curr_limb_obj.current_starting_node)
print(curr_limb_obj.current_starting_endpoints)
print(curr_limb_obj.current_starting_soma)

nviz.plot_concept_network(curr_limb_obj.concept_network_directional,
                          arrow_size=5, scatter_size=3)

Example 8/4: uncompressed_neuron_revised.concept_network.nodes["L1"]["data"].set_concept_network_directional(starting_soma=0,width_source="width",print_flag=True)

set_concept_network_edges_from_current_starting_data(verbose=False)[source]
property skeletal_length

dictionary mapping the index to the

property skeleton

Will return the entire skeleton of all the branches stitched together

property spines
property spines_obj
property spines_volume
property synapse_density_post
property synapse_density_pre
property synapses
property synapses_post
property synapses_pre
touching_somas()[source]

The soma identifiers that the current limb is adjacent to (useful for finding paths to cut for multi-soma or multi-touch limbs)

property web
class neurd.neuron.Neuron(mesh, segment_id=None, description=None, nucleus_id=None, split_index=None, preprocessed_data=None, fill_hole_size=0, decomposition_type='meshafterparty', meshparty_adaptive_correspondence_after_creation=False, calculate_spines=True, widths_to_calculate=['no_spine_median_mesh_center'], suppress_preprocessing_print=True, computed_attribute_dict=None, somas=None, branch_skeleton_data=None, ignore_warnings=True, suppress_output=False, suppress_all_output=False, preprocessing_version=2, limb_to_branch_objects=None, glia_faces=None, nuclei_faces=None, glia_meshes=None, nuclei_meshes=None, original_mesh_idx=None, labels=[], preprocess_neuron_kwargs={}, spines_kwargs={}, pipeline_products=None)[source]

Bases: object

Neuron class docstring:

Purpose: An object oriented approach to housing the data about a single neuron mesh and the secondary data that can be gleaned from it. For instance: skeleton, compartment labels, soma centers, mesh subdivided into cable pieces

Pseudocode:

1) Create Neuron Object (through __init__) a. Add the small non_soma_list_meshes b. Add whole mesh c. Add soma_to_piece_connectivity as concept graph and it will be turned into a concept map

2) Create the soma meshes a. Create soma mesh objects b. Add the soma objects as the ["data"] attribute of all of the soma nodes

3) Limb Process: For each limb (use an index to iterate through limb_correspondence, current_mesh_data and limb_concept_network/labels) a. Build all the branches from the

  • mesh

  • skeleton

  • width

  • branch_face_idx

  1. Pick the top concept graph (will use to store the nodes)

  2. Put the branches as "data" in the network

  3. Get all of the starting coordinates and starting edges and put as member attributes in the limb

Example 1: How you could generate completely from mesh to help with debugging:

# from mesh_tools import trimesh_utils as tu
# mesh_file_path = Path("/notebooks/test_neurons/multi_soma_example.off")
# mesh_file_path.exists()
# current_neuron_mesh = tu.load_mesh_no_processing(str(mesh_file_path.absolute()))

# # picking a random segment id
# segment_id = 12345
# description = "double_soma_meshafterparty"

# # --------------- Processing the Neuron --------------- #
# from neurd import soma_extraction_utils as sm

# somas = sm.extract_soma_center(segment_id,
#                                current_neuron_mesh.vertices,
#                                current_neuron_mesh.faces)

# import time
# meshparty_time = time.time()
# from mesh_tools import compartment_utils as cu
# cu = reload(cu)

# from mesh_tools import meshparty_skeletonize as m_sk
# from neurd import preprocess_neuron as pn
# pn = reload(pn)
# m_sk = reload(m_sk)

# somas = somas

# nru = reload(nru)
# neuron = reload(neuron)
# current_neuron = neuron.Neuron(
#     mesh=current_neuron_mesh,
#     segment_id=segment_id,
#     description=description,
#     decomposition_type="meshafterparty",
#     somas=somas,
#     #branch_skeleton_data=branch_skeleton_data,
#     suppress_preprocessing_print=False,
# )
# print(f"Total time for processing: {time.time() - meshparty_time}")

# # --------------- Calculating the Spines and Width --------------- #
# current_neuron.calculate_spines(print_flag=True)
# #nviz.plot_spines(current_neuron)

# current_neuron.calculate_new_width(no_spines=False,
#                                    distance_by_mesh_center=True)

# current_neuron.calculate_new_width(no_spines=False,
#                                    distance_by_mesh_center=True,
#                                    summary_measure="median")

# current_neuron.calculate_new_width(no_spines=True,
#                                    distance_by_mesh_center=True,
#                                    summary_measure="mean")

# current_neuron.calculate_new_width(no_spines=True,
#                                    distance_by_mesh_center=True,
#                                    summary_measure="median")

# # --------------- Saving off the Neuron --------------- #
# current_neuron.save_compressed_neuron(output_folder=Path("/notebooks/test_neurons/meshafterparty_processed/"),
#                                       export_mesh=True)

__init__(mesh, segment_id=None, description=None, nucleus_id=None, split_index=None, preprocessed_data=None, fill_hole_size=0, decomposition_type='meshafterparty', meshparty_adaptive_correspondence_after_creation=False, calculate_spines=True, widths_to_calculate=['no_spine_median_mesh_center'], suppress_preprocessing_print=True, computed_attribute_dict=None, somas=None, branch_skeleton_data=None, ignore_warnings=True, suppress_output=False, suppress_all_output=False, preprocessing_version=2, limb_to_branch_objects=None, glia_faces=None, nuclei_faces=None, glia_meshes=None, nuclei_meshes=None, original_mesh_idx=None, labels=[], preprocess_neuron_kwargs={}, spines_kwargs={}, pipeline_products=None)[source]

Here we would call any superclass __init__ methods, e.g. Parent.__init__(self)

Class can act like a dictionary.

property apical_limb_branch_dict
property apical_shaft_limb_branch_dict
property apical_tuft_limb_branch_dict
property area
property area_with_somas
property axon_area
axon_classification(**kwargs)[source]
property axon_length
property axon_limb
property axon_limb_branch_dict
property axon_limb_idx
property axon_limb_name
property axon_mesh
property axon_on_dendrite_limb_branch_dict
property axon_skeleton
property axon_starting_branch
property axon_starting_coordinate
property basal_limb_branch_dict
property boutons
property boutons_volume
calculate_decomposition_products(store_in_obj=False)[source]
calculate_multi_soma_split_suggestions(store_in_obj=True, plot=True, **kwargs)[source]
calculate_new_width(**kwargs)[source]
calculate_spines_old(query='median_mesh_center > 115 and n_faces_branch>100', clusters_threshold=3, smoothness_threshold=0.12, shaft_threshold=300, cgal_path=PosixPath('cgal_temp'), print_flag=False, spine_n_face_threshold=25, filter_by_bounding_box_longest_side_length=True, side_length_threshold=5000, filter_out_border_spines=False, skeleton_endpoint_nullification=True, skeleton_endpoint_nullification_distance=2000, soma_vertex_nullification=True, border_percentage_threshold=0.3, check_spine_border_perc=0.4, calculate_spine_volume=True, filter_by_volume=True, filter_by_volume_threshold=19835293, limb_branch_dict=None)[source]
compute_boutons_volume()[source]
compute_spines_volume()[source]
property dendrite_limb_branch_dict
property dendrite_mesh
property dendrite_on_axon_limb_branch_dict
property dendrite_skeleton
property distance_errored_synapses_post
property distance_errored_synapses_pre
get_attribute_dict(attribute_name)[source]
get_branch_node_names(limb_idx)[source]
get_computed_attribute_data(attributes=['width_array', 'width_array_skeletal_lengths', 'width_new', 'spines_volume', 'boutons_volume', 'labels', 'boutons_cdfs', 'web_cdf', 'web', 'head_neck_shaft_idx', 'spines', 'boutons', 'synapses', 'spines_obj'], one_dict=True, print_flag=False)[source]
get_limb_names(return_int=False)[source]
get_limb_node_names(return_int=False)[source]
get_limbs_touching_soma(soma_idx)[source]

Purpose: To get all of the limb names contacting a certain soma

Example: current_neuron.get_limbs_touching_soma(0)

get_skeleton(check_connected_component=True)[source]
get_soma_indexes()[source]
get_soma_meshes()[source]

Gives the same output that running the soma identifier would

Returns: a list containing the following elements 1) list of soma meshes (N) 2) scalar value of the time it took to process (dummy 0) 3) list of soma sdf values (N)
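A sketch of unpacking that 3-element return (the stub below only mimics the documented shape; the real call is current_neuron.get_soma_meshes()):

```python
def get_soma_meshes_stub(n_somas=2):
    """Toy stand-in returning the documented 3-element list."""
    soma_meshes = [f"mesh_{i}" for i in range(n_somas)]  # 1) soma meshes (N)
    run_time = 0                                         # 2) dummy process time
    soma_sdfs = [0.4] * n_somas                          # 3) soma sdf values (N)
    return [soma_meshes, run_time, soma_sdfs]

meshes, run_time, sdfs = get_soma_meshes_stub(3)
```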

get_soma_node_names(int_label=False)[source]
get_somas()[source]

Gives the same output that running the soma identifier would

Returns: a list containing the following elements 1) list of soma meshes (N) 2) scalar value of the time it took to process (dummy 0) 3) list of soma sdf values (N)

get_somas_touching_limbs(limb_idx, return_int=True)[source]

Purpose: To get all of the soma names contacting a certain limb

Example: current_neuron.get_somas_touching_limbs(0)

get_total_n_branches()[source]
label_limb_branch_dict(label)[source]
property limb_area
property limb_branch_dict
property limb_mesh_volume
property limbs
property max_limb_n_branches
property max_limb_skeletal_length
property max_soma_area
property max_soma_n_faces
property max_soma_volume
property median_branch_length
property merge_filter_locations

a nested dictionary datastructure storing the information on where the merge error filters were triggered. The order in which the merge error filters were applied matters, because the branches that triggered a filter are only those that had not triggered an earlier applied filter and thus had not already been filtered away. Note: this product includes all branches that triggered the filter at this stage, regardless of whether they were downstream of another. The datastructure is organized in the following way:

merge error filter name –> limb name –> list of 2x3 arrays storing the coordinates of the skeleton endpoints of the branch that triggered the merge error filter (the 1st coordinate is from the upstream branch)
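A toy illustration of iterating that nested layout (the filter and limb names are hypothetical; only the nesting follows the description above):

```python
import numpy as np

# filter name -> limb name -> list of 2x3 endpoint arrays
merge_filter_locations = {
    "high_degree_branching": {
        "L0": [np.array([[0.0, 0.0, 0.0],      # upstream branch endpoint
                         [10.0, 0.0, 0.0]])],  # downstream endpoint
    },
}

def triggered_coordinates(locations):
    """Flatten the nested dict into (filter, limb, upstream, downstream) rows."""
    rows = []
    for filt, limbs in locations.items():
        for limb, endpoint_arrays in limbs.items():
            for arr in endpoint_arrays:
                rows.append((filt, limb, tuple(arr[0]), tuple(arr[1])))
    return rows
```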

property merge_filter_suggestions

a dictionary data structure that stores, for each merge error filter, a list of merge filter suggestions that would be the minimal cuts needed to eliminate all merge errors of that type. Each suggestion is a dictionary storing metadata, with the following as some of the keys:

  • valid points: coordinates that should belong to the existing neuronal process (a marker of where the valid mesh is).

  • error points: coordinates that should belong to the incorrect neuronal process resulting from merge errors (a marker of where the error mesh starts).

  • coordinate: locations of the split points used in the elimination of soma-to-soma paths.

The valid and error points can be used as inputs for automatic mesh splitting algorithms in other pipelines (ex: Neuroglancer)
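A minimal sketch of consuming one suggestion dictionary (key names follow the description above; the coordinates are toy data and the packaging function is hypothetical):

```python
suggestion = {
    "valid_points": [[0.0, 0.0, 0.0]],   # on the correct neuronal process
    "error_points": [[50.0, 0.0, 0.0]],  # on the merge-error process
    "coordinate": [[25.0, 0.0, 0.0]],    # split point on the skeleton path
}

def to_split_seed(s):
    """Package a suggestion as (keep_seeds, cut_seeds) for a mesh splitter."""
    return s["valid_points"], s["error_points"]

keep, cut = to_split_seed(suggestion)
```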

property mesh_errored_synapses_post
property mesh_errored_synapses_pre
property mesh_from_branches
property mesh_kdtree

A kdtree of the original mesh

property mesh_volume
property mesh_volume_with_somas
multi_soma_split_execution(verbose=False, store_in_obj=True, **kwargs)[source]
property multi_soma_touching_limbs
property n_boutons
property n_branches
property n_branches_per_limb
property n_distance_errored_synapses
property n_distance_errored_synapses_post
property n_distance_errored_synapses_pre
property n_error_limbs
property n_faces
property n_limbs
property n_mesh_errored_synapses
property n_mesh_errored_synapses_post
property n_mesh_errored_synapses_pre
property n_somas
property n_spine_eligible_branches
property n_spines
property n_synapses
property n_synapses_error
property n_synapses_post
property n_synapses_pre
property n_synapses_somas
property n_synapses_total
property n_synapses_valid
property n_vertices
neuron_stats(stats_to_ignore=None, include_skeletal_stats=False, include_centroids=False, voxel_adjustment_vector=None, cell_type_mode=False, **kwargs)[source]
neuron_stats_old(stats_to_ignore=None, include_skeletal_stats=False, include_centroids=False, voxel_adjustment_vector=None, cell_type_mode=False, **kwargs)[source]
property non_axon_like_limb_branch_on_dendrite
property oblique_limb_branch_dict
plot_limb_concept_network(limb_name='', limb_idx=-1, node_size=0.3, directional=True, append_figure=False, show_at_end=True, **kwargs)[source]
plot_soma_limb_concept_network(soma_color='red', limb_color='aqua', node_size=800, font_color='black', node_colors={}, **kwargs)[source]

Purpose: To plot the connectivity of the soma and the meshes in the neuron

How it was developed:

from datasci_tools import networkx_utils as xu
xu = reload(xu)
node_list = xu.get_node_list(my_neuron.concept_network)
node_list_colors = ["red" if "S" in n else "blue" for n in node_list]
nx.draw(my_neuron.concept_network,with_labels=True,node_color=node_list_colors,
        font_color="white",node_size=500)

property same_soma_multi_touching_limbs
save_compressed_neuron(output_folder='./', file_name='', return_file_path=False, export_mesh=False, suppress_output=True, file_name_append=None)[source]

Will save the neuron in a compressed format:

Ex: How to save a compressed neuron
double_neuron_preprocessed.save_compressed_neuron("/notebooks/test_neurons/preprocessed_neurons/meshafterparty/",
                                                  export_mesh=True,
                                                  file_name=f"{double_neuron_preprocessed.segment_id}_{double_neuron_preprocessed.description}_meshAfterParty",
                                                  return_file_path=True)

Ex: How to reload a compressed neuron
nru.decompress_neuron(filepath="/notebooks/test_neurons/preprocessed_neurons/meshafterparty/12345_double_soma_meshAfterParty",
                      original_mesh="/notebooks/test_neurons/preprocessed_neurons/meshafterparty/12345_double_soma_meshAfterParty")

save_neuron_object(filename='')[source]
set_attribute_dict(attribute_name, attribute_dict)[source]
set_computed_attribute_data(computed_attribute_data, print_flag=False)[source]
property skeletal_length
property skeletal_length_eligible
property skeleton
property skeleton_length_per_limb
property soma_area
property soma_mesh_volume
property spine_density
property spine_density_eligible
property spine_eligible_branch_lengths
property spine_volume_density
property spine_volume_density_eligible
property spine_volume_median
property spine_volume_per_branch_eligible
property spines
spines_already_computed()[source]

Pseudocode: 1) Iterate through all of limbs and branches 2) If find one instance where spines not None, return True 3) If none found, return False

property spines_obj
property spines_per_branch
property spines_per_branch_eligible
property spines_volume
su = <module 'datasci_tools.system_utils' from '/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasci_tools/system_utils.py'>
property synapse_density_post
property synapse_density_pre
property synapses
property synapses_error
property synapses_post
property synapses_pre
property synapses_somas
property synapses_total
property synapses_valid
property total_spine_volume
property valid_error_split_points_by_limb

dummy docstring

property web
property width_90_perc
property width_median
property width_no_spine_90_perc
property width_no_spine_median
class neurd.neuron.Soma(mesh, mesh_face_idx=None, sdf=None, volume_ratio=None, volume=None, synapses=None)[source]

Bases: object

Class that will hold one continuous skeleton piece that has no branching

Properties that are housed:

‘mesh’, ‘mesh_center’, ‘mesh_face_idx’, ‘sdf’, ‘side_length_ratios’, ‘volume_ratio’

__init__(mesh, mesh_face_idx=None, sdf=None, volume_ratio=None, volume=None, synapses=None)[source]
property area
property compartment
property mesh_volume
property n_synapses
property n_synapses_post
property n_synapses_pre
property synapses_post
property synapses_pre
property volume

Will compute the volume of the soma

neurd.neuron.convert_soma_to_piece_connectivity_to_graph(soma_to_piece_connectivity)[source]

Pseudocode: 1) Create the edges with the new names from the soma_to_piece_connectivity 2) Create a GraphOrderedEdges from the new edges

Ex:

concept_network = convert_soma_to_piece_connectivity_to_graph(current_mesh_data[0]["soma_to_piece_connectivity"])
nx.draw(concept_network,with_labels=True)

neurd.neuron.copy_concept_network(curr_network)[source]
neurd.neuron.dc_check(current_object, attribute, default_value=None)[source]
neurd.neuron.export_mesh_labels(self)[source]
neurd.neuron.export_skeleton(self, subgraph_nodes=None)[source]

neurd.neuron_geometry_utils module

Purpose: To look at the angles and projection angles of different compartments of neurons

neurd.neuron_geometry_utils.add_xz_angles_to_df(df, compartments=('axon', 'dendrite', 'basal', 'apical'))[source]

Purpose: To append the xz degree angle for all the vectors in all of the compartments

neurd.neuron_geometry_utils.plot_compartment_vector_distribution(df, n_limbs_min=4, compartment='basal', axes=array([0, 2]), normalize=True, plot_type='angle_360', bins=100, title_suffix=None, verbose=True)[source]

Purpose: To plot the 3D vectors or the 360 angle on a 1D histogram for a certain compartment over the dataframe

Pseudocode: 1) Restrict the dataframe to only those cells with a certain number of limbs in that compartment 2) Get the compartment dataframe For each vector type a) Gets the vectors (restricts them to only certain axes of the vectors) b) Normalizes the vector (because sometimes will be less than one if restricting to less than 3 axes) c)

neurd.neuron_geometry_utils.vec_df_from_compartment(df, compartment, verbose=True, align_array=True, centroid_df=None)[source]

To generate a smaller df that contains all of the vector information (and xz angles) for a given compartment. Also aligns the vectors correctly for the given dataset if requested

Pseudocode: 1) Filters away nan rows 2) Aligns the vectors 3) computes the xz angles

neurd.neuron_graph_lite_utils module

Purpose: functionality for converting a neuron object to a graph representation that can be converted to 2D/3D activation maps

Ex 1: How to change between ravel and index

from datasci_tools import numpy_utils as nu
curr_act_map[nu.ravel_index([5,4,9],array_size)]

neurd.neuron_graph_lite_utils.G_with_attrs_from_neuron_obj(neuron_obj, verbose=False, soma_attributes=['area', 'compartment', 'mesh_center', ['mesh_center', 'endpoint_upstream'], 'n_synapses', 'n_synapses_post', 'n_synapses_pre', 'sdf', 'side_length_ratios', 'volume_ratio', ['volume', 'mesh_volume']], branch_attributes=['area', 'compartment', 'axon_compartment', 'boutons_cdfs', 'boutons_volume', 'labels', 'mesh_center', 'endpoint_upstream', 'endpoint_downstream', 'mesh_volume', 'n_boutons', 'n_spines', 'n_synapses', 'n_synapses_head', 'n_synapses_neck', 'n_synapses_no_head', 'n_synapses_post', 'n_synapses_pre', 'n_synapses_shaft', 'n_synapses_spine', 'skeletal_length', 'spine_density', 'spine_volume_density', 'spine_volume_median', 'synapse_density', 'synapse_density_post', 'synapse_density_pre', 'total_spine_volume', 'width', 'width_new', 'soma_distance_euclidean', 'soma_distance_skeletal', 'skeleton_vector_upstream', 'skeleton_vector_downstream', 'width_upstream', 'width_downstream', 'min_dist_synapses_pre_upstream', 'min_dist_synapses_post_upstream', 'min_dist_synapses_pre_downstream', 'min_dist_synapses_post_downstream'], include_branch_dynamics=True, plot_G=False, neuron_obj_attributes_dict=None, recalculate_soma_volumes=True)[source]

To convert a neuron object to a graph object with attributes stored

Pseudocode: 1) Generate the total graph 2) Assign the node attributes 3) Assign the soma attributes

neurd.neuron_graph_lite_utils.array_shape_from_radius(radius)[source]
neurd.neuron_graph_lite_utils.attr_activation_map(df, attr, array_shape, return_vector=True, soma_at_end=False, exclude_soma_node=True, return_as_df=True, fill_zeros_with_closest_value=True, axes_limits=None)[source]

To generate an activation map and to export it (as a multidimensional array or as a vector)

Ex:
edge_length = radius*2
array_size = (edge_length,edge_length,edge_length)

attr = "mesh_volume"
ctcu.attr_activation_map(df_idx,attr,array_shape=array_size)

neurd.neuron_graph_lite_utils.attr_value_by_node(df, node_name, attr)[source]
neurd.neuron_graph_lite_utils.attr_value_soma(df, attr)[source]

Ex: ctcu.attr_value_soma(df_idx,"n_synapses")

neurd.neuron_graph_lite_utils.axes_limits_coordinates(axes_limits, array_shape=None, radius=None)[source]
neurd.neuron_graph_lite_utils.axes_limits_from_df(df, all_axes_same_scale=False, neg_positive_same_scale=True, min_absolute_value=5000, global_scale=None, verbose=False)[source]
neurd.neuron_graph_lite_utils.axon_compartment_extract(axon_comp)[source]
neurd.neuron_graph_lite_utils.boutons_cdfs_extract(bouton_cdfs)[source]
neurd.neuron_graph_lite_utils.boutons_volume_extract(bouton_volumes)[source]
neurd.neuron_graph_lite_utils.closest_node_idx_to_sample_idx(df, axes_limits, array_shape, verbose=False)[source]

Purpose: To get the index of the closest node point to a coordinate in the sampling

neurd.neuron_graph_lite_utils.feature_map_df(df_idx, array_shape, features_to_output=['axon', 'dendrite', 'n_boutons', 'n_synapses_head', 'n_synapses_shaft', 'skeletal_length', 'spine_volume_density', 'synapse_density', 'width_median_mesh_center'], segment_id=12345, split_index=0, axes_limits=None, exclude_soma_node=True, fill_zeros_with_closest_value=True)[source]

Will turn a dataframe with the indices of where to map the branch objects into a dataframe with the vector unraveled

neurd.neuron_graph_lite_utils.filter_df_by_axon_dendrite(df, dendrite=True, verbose=True, plot_xyz=False, cell_type='inhibitory')[source]

Purpose: Will filter the nodes by whether they belong to the axon or the dendrite

neurd.neuron_graph_lite_utils.filter_df_by_skeletal_length(df, min_skeletal_length=10000, dendrite=True, verbose=True, plot_xyz=False)[source]

Purpose: Will filter away nodes whose branches are below a minimum skeletal length

neurd.neuron_graph_lite_utils.filter_df_by_soma_distance(df, max_distance=50000, distance_type='soma_distance_skeletal', verbose=True, plot_xyz=False)[source]

Purpose: Will filter nodes that are only a maximum distance away from the soma
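The filter reduces to a single pandas comparison (a sketch under the default distance_type; the helper name and frame are illustrative, not the real node dataframe):

```python
import pandas as pd

def filter_by_soma_distance(df, max_distance=50000,
                            distance_type="soma_distance_skeletal"):
    # keep only nodes within max_distance of the soma
    return df[df[distance_type] <= max_distance]

df = pd.DataFrame({"soma_distance_skeletal": [10000, 60000, 45000]})
filter_by_soma_distance(df)  # keeps rows 0 and 2
```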

neurd.neuron_graph_lite_utils.filter_df_by_xyz_window(df, window={'x': [-inf, inf], 'y': [-inf, inf], 'z': [-inf, inf]}, verbose=True, plot_xyz=False)[source]

To restrict the rows to only those located at certain points:

ctcu.filter_df_by_xyz_window(df,window = 50000,plot_xyz=True)

neurd.neuron_graph_lite_utils.idx_for_col(val, col, axes_limits, nbins=20, verbose=False, no_soma_reservation=True)[source]

Purpose: To find out the adjusted idx for a mapping of a datapoint

Pseudocode: a) Figure out if positive or negative (assign -1 or 1 value) b) Get the right threshold (need axes_limits) c) Bin the value (need number of bins) d) Find the right index for the value

Ex:
col = "x"
verbose = True
val = df.loc[20,col]
nbins = 40

col = "y"
ctcu.idx_for_col(df.loc[100,col],col,
                 axes_limits=axes_limits,
                 verbose=True)
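Steps a-d of the pseudocode can be sketched as signed binning (edge handling and the soma-slot reservation of the real idx_for_col may differ, so treat this as an approximation):

```python
def idx_for_col_sketch(val, limit, nbins=20):
    sign = 1 if val >= 0 else -1        # a) positive or negative
    mag = min(abs(val), limit)          # b) bound by the axes limit
    bin_width = limit / nbins           # c) bin the magnitude
    idx = int(min(mag // bin_width, nbins - 1))
    return sign * idx                   # d) signed bin index

idx_for_col_sketch(25000, limit=100000, nbins=20)   # -> 5
idx_for_col_sketch(-25000, limit=100000, nbins=20)  # -> -5
```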

neurd.neuron_graph_lite_utils.idx_xyz_to_df(df, all_axes_same_scale=False, neg_positive_same_scale=True, global_scale=None, axes_limits=None, radius=10, verbose=True, plot_idx=False)[source]

Purpose: To find the index of the data point based on the relative mesh center

Pseudocode: 0) Determine the axes limits

For each x,y,z column: For each datapoint: a) Figure out if positive or negative (assign -1 or 1 value) b) Get the right threshold (need axes_limits) c) Bin the value (need number of bins) d) Find the right index for the value

neurd.neuron_graph_lite_utils.labels_extract(labels)[source]
neurd.neuron_graph_lite_utils.load_G_with_attrs(filepath)[source]
neurd.neuron_graph_lite_utils.mesh_center_xyz(center)[source]
neurd.neuron_graph_lite_utils.no_spatial_df_from_df_filtered(df)[source]
neurd.neuron_graph_lite_utils.plot_df_xyz(df, branch_size=1, soma_size=4, soma_color='blue', branch_color='red', col_suffix='', flip_y=True, **kwargs)[source]
neurd.neuron_graph_lite_utils.save_G_with_attrs(G, segment_id, split_index=0, file_append='', file_path=PosixPath('/mnt/dj-stor01/platinum/minnie65/02/graphs'), return_filepath=True)[source]

To save a Graph after processing

Ex: ctcu.save_G_with_attrs(G,segment_id=segment_id,split_index=split_index)

neurd.neuron_graph_lite_utils.soma_branch_df_split(df)[source]
neurd.neuron_graph_lite_utils.soma_center_from_df(df, col_suffix='')[source]
neurd.neuron_graph_lite_utils.stats_df_from_G(G, no_attribute_default=0, None_default=0, attr_to_skip=('side_length_ratios', 'sdf', 'mesh_center'), fix_presyns_on_dendrites=True, center_xyz_at_soma=True)[source]

Purpose: To convert the data stored in a graph into a dataframe where all columns are scalar values

Things to figure out: - Null value: 0 - How to 1 hot encode things

neurd.neuron_graph_lite_utils.symmetric_window(size=None, x=None, y=None, z=None)[source]

Purpose: To Create a dict that will act like a window:

Ex: ctcu.symmetric_window(x=100,y=200,z = 300)

neurd.neuron_graph_lite_utils.width_new_extract(width_new)[source]

neurd.neuron_pipeline_utils module

Functions that outline the pipeline of taking a split neuron all the way to the autoproofreading and compartment labeling stage

neurd.neuron_pipeline_utils.after_auto_proof_stats(neuron_obj, verbose=False, store_in_obj=True)[source]
neurd.neuron_pipeline_utils.auto_proof_stage(neuron_obj, mesh_decimated=None, calculate_after_proof_stats=True, store_in_obj=True, return_stage_products=False, verbose_outline=False, verbose_proofread=False, plot_head_neck_shaft_synapses=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_compartments=False, plot_valid_synapses=False, plot_error_synapses=False, debug_time=False, **kwargs)[source]
neurd.neuron_pipeline_utils.cell_type_ax_dendr_stage(neuron_obj, mesh_decimated, store_in_obj=True, return_stage_products=False, verbose=False, plot_initial_neuron=False, plot_floating_end_nodes_limb_branch_dict=False, plot_downstream_path_limb_branch=False, plot_after_simplification=False, filter_low_branch_cluster_dendrite=False, plot_limb_branch_filter_away_low_branch=False, plot_synapses=False, segment_id=None, synapse_filepath=None, plot_spines=False, plot_spines_and_sk_filter_for_syn=False, plot_spines_and_sk_filter_for_spine=False, inh_exc_class_to_use_for_axon='neurd', plot_aligned_neuron_with_syn_sp=False, filter_dendrite_on_axon=False, plot_initial_axon=False, plot_axon_on_dendrite=False, plot_high_fidelity_axon=False, plot_boutons_web=False, plot_axon=False)[source]

Purpose: to preprocess the split neuron object prior to autoproofreading by performing:

  1. Branch simplification (making sure all branches have at least 2 children)

  2. Filtering away large clusters of glia still present

  3. match neuron to nucleus

  4. Add synapses to neuron

  5. Divide the spines into head,neck compartments

  6. Perform cell typing based on spine and synapse statistics

  6b. Optional: download the cell type from a database, which may be the cell type you choose

  7. Label the axon

  8. Package up all of the products/statistics generated

neurd.neuron_searching module

Purpose: Module that provides tools for finding interesting branches and limbs according to queries and functions that you define

** To create a limb function ** Have it return either one singular value or a dictionary mapping each branch idx to a value

neurd.neuron_searching.apply_function_to_neuron(current_neuron, current_function, function_kwargs=None, verbose=False)[source]

Purpose: To retrieve a dictionary mapping every branch on every node to a certain value as defined by the function passed

Example:
curr_function = ns.width
curr_function_mapping = ns.apply_function_to_neuron(recovered_neuron,curr_function)

neurd.neuron_searching.area(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.average_branch_length(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.axon_label(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.axon_merge_error_width_like_query(width_to_use)[source]
neurd.neuron_searching.axon_segment(curr_limb, limb_branch_dict=None, limb_name=None, downstream_face_threshold=5000, downstream_non_axon_percentage_threshold=0.3, max_skeletal_length_can_flip=20000, distance_for_downstream_check=40000, print_flag=False, width_match_threshold=50, width_type='no_spine_median_mesh_center', must_have_spine=True, **kwargs)[source]

Function that will go through and hopefully label all of the axon pieces on a limb

neurd.neuron_searching.axon_segment_clean_false_positives(curr_limb, limb_branch_dict, limb_name=None, width_match_threshold=50, width_type='no_spine_average_mesh_center', must_have_spine=True, interest_nodes=[], false_positive_max_skeletal_length=35000, print_flag=False, **kwargs)[source]

Purpose: To help prevent false positives where small terminal dendritic segments are mistaken for axon pieces. If the mesh width transition is very constant between an upstream node (a non-axonal piece) and the downstream node that is an axonal piece, this will change the axonal piece to a non-axonal label.

Idea: Can look for where width transitions are fairly constant between the preceding dendrite and the axon candidate, and if very similar then keep as non-axon

*** only apply to those with 1 or more spines

Pseudocode: 1) given all of the axons

For each axon node: For each of the directional concept networks 1) If has an upstream node that is not an axon –> if not then continue 1b) (optional) Has to have at least one spine or continues 2) get the upstream nodes no_spine_average_mesh_center width array 2b) find the endpoints of the current node 3) Find which endpoints match from the node and the upstream node 4) get the tangent part of the no_spine_average_mesh_center width array from the endpoints matching (this is either the 2nd and 3rd from front or last depending on touching AND that it is long enough)

  1. get the tangent part of the node based on touching

  2. if the average of these is greater than upstream - 50

return an updated dictionary
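The width-constancy test at the heart of the pseudocode can be sketched as follows (the function name and array arguments are illustrative; the real code slices the branch width arrays at the matching endpoints):

```python
import numpy as np

def is_false_positive_axon(upstream_widths, axon_widths,
                           width_match_threshold=50, n_tangent=2):
    """True when the axon candidate's width blends into the upstream branch."""
    # tangent part: width entries adjacent to the shared endpoint
    up = np.mean(upstream_widths[-n_tangent:])
    ax = np.mean(axon_widths[:n_tangent])
    # widths transition smoothly -> likely a thin dendrite, not an axon
    return ax > up - width_match_threshold

is_false_positive_axon([300, 290, 280], [270, 265, 100])  # -> True
```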

neurd.neuron_searching.axon_segment_downstream_dendrites(curr_limb, limb_branch_dict, limb_name=None, downstream_face_threshold=5000, downstream_non_axon_percentage_threshold=0.5, max_skeletal_length_can_flip=20000, distance_for_downstream_check=50000, print_flag=False, limb_starting_angle_dict=None, limb_starting_angle_threshold=155, **kwargs)[source]

Purpose: To filter the axon-like segments (so that dendritic branches are not mistaken for axons) based on the criteria that an axon segment should not have many non-axon upstream branches

Example on how to run:

curr_limb_name = "L1"
curr_limb = uncompressed_neuron.concept_network.nodes[curr_limb_name]["data"]
ns = reload(ns)

return_value = ns.axon_segment(curr_limb,
                               limb_branch_dict=limb_branch_dict,
                               limb_name=curr_limb_name,
                               downstream_face_threshold=5000,
                               print_flag=False)
return_value

neurd.neuron_searching.axon_segments_after_checks(neuron_obj, include_ais=True, downstream_face_threshold=3000, width_match_threshold=50, plot_axon=False, **kwargs)[source]
neurd.neuron_searching.axon_width(curr_branch, name=None, branch_name=None, width_name='no_bouton_median', width_name_backup='no_spine_median_mesh_center', width_name_backup_2='median_mesh_center', **kwargs)[source]
neurd.neuron_searching.axon_width_like_query(width_to_use)[source]
neurd.neuron_searching.axon_width_like_segments_old(current_neuron, current_query=None, current_functions_list=None, include_ais=False, axon_merge_error=False, verbose=False, width_to_use=None)[source]

Will get all of

neurd.neuron_searching.children_axon_width_max(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.children_skeletal_lengths_min(curr_limb, limb_name=None, width_maximum=75, **kwargs)[source]
neurd.neuron_searching.closest_mesh_skeleton_dist(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.convert_limb_function_return_to_dict(function_return, curr_limb_concept_network)[source]

Purpose: to take the returned value of a limb function and convert it to a dictionary that maps all of the nodes to a certain value; capable of handling both a dictionary and a scalar return value

neurd.neuron_searching.convert_limb_function_return_to_limb_branch_dict(function_return, curr_limb_concept_network, limb_name)[source]

Purpose: returns a dictionary that maps a limb to its valid branches according to a function return of True/False values (only includes the branches for which the function return is True)

Result: returns a dictionary like dict(L1=[3,5,8,9,10])
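The scalar-vs-dictionary handling can be sketched as follows (an illustrative helper, not the package function):

```python
# A scalar return value is broadcast to every node on the limb; a dictionary
# return is kept, with missing nodes filled in with a default.
def function_return_to_dict(function_return, node_names, default=False):
    if isinstance(function_return, dict):
        return {n: function_return.get(n, default) for n in node_names}
    # scalar case: every node on the limb gets the same value
    return {n: function_return for n in node_names}
```

The limb_branch dict then falls out as the keys whose value is truthy, e.g. `[n for n, v in d.items() if v]`.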

neurd.neuron_searching.convert_neuron_to_branches_dataframe(current_neuron, limbs_to_process=None, limb_branch_dict_restriction=None)[source]

Purpose: How to turn a concept network into a pandas table with only the limb_idx and node_idx columns

Example: neuron_df = convert_neuron_to_branches_dataframe(current_neuron = recovered_neuron)

neurd.neuron_searching.distance_from_soma(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.downstream_nodes_mesh_connected(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.downstream_upstream_diff_of_most_downstream_syn(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.farthest_distance_from_skeleton_to_mesh(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.flip_dendrite_to_axon(curr_limb, limb_branch_dict, limb_name=None, max_skeletal_length_can_flip_dendrite=70000, downstream_axon_percentage_threshold=0.85, distance_for_downstream_check_dendrite=50000, downstream_face_threshold_dendrite=5000, axon_spine_density_max=0.00015, axon_width_max=600, significant_dendrite_downstream_density=0.0002, significant_dendrite_downstream_length=10000, print_flag=False, **kwargs)[source]

Pseudocode:
1) Flip the axon branch list into a dendrite list
2) Iterate through all the dendrite nodes:

  a. Run the following checks to exclude a dendrite from being flipped: max size, spine density, width

  b. Get all of the downstream nodes; if there are any downstream nodes:

    i. Get the number of axons and non-axons downstream

    ii. If no axons then skip

    iii. Iterate through all the downstream nodes: check for a significant spiny branch and if detected then skip

    iv. Get the downstream axon percentage and total numbers

    v. If it passes the percentage and total-number thresholds, add to the list

3) Generate a new limb branch dict

Ex:
from neurd import neuron_searching as ns
curr_limb_idx = 3
curr_limb = test_neuron[curr_limb_idx]
limb_name = f"L{curr_limb_idx}"
try:
    limb_branch_dict = {limb_name: current_axon_limb_branch_dict[limb_name]}
except:
    limb_branch_dict = {limb_name: []}

ns.flip_dendrite_to_axon(curr_limb, limb_branch_dict, limb_name)

neurd.neuron_searching.fork_divergence(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]

The winning threshold appears to be 165

neurd.neuron_searching.functions_list_from_query(query, verbose=False)[source]

Purpose: To turn a query into a list of functions

Ex: ns.functions_list_from_query(query = "(n_synapses_pre >= 1) and (synapse_pre_perc >= 0.6) and (axon_width <= 270) and (n_spines <= 10) and (n_synapses_post_spine <= 3) and (skeletal_length > 2500) and (area > 1) and (closest_mesh_skeleton_dist < 500)", verbose = True)

neurd.neuron_searching.generate_neuron_dataframe(current_neuron, functions_list, check_nans=True, function_kwargs={})[source]

Purpose: With a neuron and a specified set of functions generate a dataframe with the values computed

Arguments: current_neuron: either a neuron object or the concept network of a neuron; functions_list: list of functions to process the limbs and branches of the concept network; check_nans: whether to check for NaNs and raise an Exception if any occur in the run

Application: We will then later restrict using df.eval()

Pseudocode: 1) convert the functions_list to a list 2) Create a dataframe for the neuron 3) For each function: a. get the dictionary mapping of limbs/branches to values b. apply the values to the dataframe 4) return the dataframe

Example:
returned_df = ns.generate_neuron_dataframe(recovered_neuron,
    functions_list=[ns.n_faces_branch, ns.width, ns.skeleton_distance_branch,
                    ns.skeleton_distance_limb, ns.n_faces_limb, ns.merge_limbs,
                    ns.limb_error_branches])

returned_df[returned_df["merge_limbs"] == True]
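The four pseudocode steps can be sketched without the neuron object, with plain dicts standing in for the DataFrame (all names here are illustrative assumptions, not the package API):

```python
# Build one row per (limb, branch), then apply each function's output as a
# new column; the real implementation fills a pandas DataFrame instead.
def generate_rows(limb_branches, functions):
    """limb_branches: {limb_name: [branch_idx, ...]};
    functions: {column_name: callable(limb_name, branch_idx) -> value}."""
    rows = []
    for limb, branches in limb_branches.items():
        for b in branches:
            row = {"limb": limb, "node": b}
            for col, fn in functions.items():
                row[col] = fn(limb, b)  # one computed column per function
            rows.append(row)
    return rows
```

A list of such rows converts directly to a DataFrame for the later df.query()/df.eval() restriction step.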

neurd.neuron_searching.get_run_type(f)[source]

Purpose: To decide whether a function is a limb or branch function

Pseudocode: 1) Try to get the run_type attribute 2) Extract the name of the first parameter of the function 3) if "branch" is in the name then it is a branch function, elif "limb" is in the name then a limb function
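A minimal sketch of this heuristic, assuming the convention that a branch function's first parameter name contains "branch" and a limb function's contains "limb" (an explicit run_type attribute, when present, wins):

```python
import inspect

# Not the package implementation: infer whether f is a limb or branch
# function from the name of its first parameter.
def get_run_type(f):
    run_type = getattr(f, "run_type", None)
    if run_type is not None:
        return run_type  # explicit override wins
    first_param = next(iter(inspect.signature(f).parameters))
    if "branch" in first_param:
        return "Branch"
    if "limb" in first_param:
        return "Limb"
    raise ValueError(f"cannot infer run type from parameter {first_param!r}")
```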

neurd.neuron_searching.is_apical_shaft_in_downstream_branches(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.is_axon(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.is_axon_in_downstream_branches(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.is_axon_like(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.is_branch_mesh_connected_to_neighborhood(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.labels(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.labels_restriction(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.limb_error_branches(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.map_new_limb_node_value(current_df, mapping_dict, value_name)[source]

To apply a dictionary to a neuron dataframe table

Ex:
mapping_dict = dict()
for x, y in zip(neuron_df["limb"].to_numpy(), neuron_df["node"].to_numpy()):
    if x not in mapping_dict.keys():
        mapping_dict[x] = dict()
    mapping_dict[x][y] = np.random.randint(10)

map_new_limb_node_value(neuron_df, mapping_dict, value_name="random_number")
neuron_df
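A plain-Python stand-in for the same operation, with the table represented as a list of row dicts instead of a DataFrame (names are illustrative):

```python
# Apply a {limb: {node: value}} mapping as a new column on a branches table.
def map_new_limb_node_value(rows, mapping_dict, value_name, default=None):
    for row in rows:
        row[value_name] = mapping_dict.get(row["limb"], {}).get(row["node"], default)
    return rows
```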

neurd.neuron_searching.matching_label(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.mean_mesh_center(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.median_mesh_center(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.merge_limbs(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.min_synapse_dist_to_branch_point(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.n_boutons(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_boutons_above_thresholds(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_downstream_nodes(curr_limb, limb_name=None, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_searching.n_downstream_nodes_with_skip(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.n_faces_branch(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_faces_limb(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.n_siblings(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.n_small_children(curr_limb, limb_name=None, width_maximum=75, **kwargs)[source]
neurd.neuron_searching.n_spines(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_downstream(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_downstream_within_dist(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_offset_endpoint_upstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_post(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_post_downstream(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_post_downstream_within_dist(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_post_head(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_post_spine(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_pre(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_pre_downstream(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_pre_offset_endpoint_upstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_spine(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_spine_offset_endpoint_upstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.n_synapses_spine_within_distance_of_endpoint_downstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.no_spine_average_mesh_center(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.no_spine_mean_mesh_center(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.no_spine_median_mesh_center(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.no_spine_width(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.parent_angle(curr_limb, limb_name=None, comparison_distance=1000, **kwargs)[source]

Will return the angle between the current node and the parent

neurd.neuron_searching.parent_width(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.query_neuron(concept_network, query, functions_list=None, function_kwargs=None, query_variables_dict=None, return_dataframe=False, return_dataframe_before_filtering=False, return_limbs=False, return_limb_grouped_branches=True, limb_branch_dict_restriction=None, print_flag=False, limbs_to_process=None, plot_limb_branch_dict=False, check_nans=True)[source]

*** to specify "limbs_to_process", just put it in the function kwargs

Purpose: Receive a neuron object or concept network representing a neuron and apply the query to find the relevant limbs and branches

Possible Outputs: 1) filtered dataframe 2) A list of the [(limb_idx, branches)] ** default 3) A dictionary that maps limb_idx to the branches that apply (so just grouping them) 4) Just a list of the limbs

Arguments:
concept_network
feature_functions: the list of str/functions that specify what metrics get computed (so they can be used in the query)
query: df.query string that specifies how to filter for the desired branches/limbs
local_dict=dict(): for any variables in the query string whose values can be loaded into the query (variables need to start with @)
return_dataframe=False: if you just want the filtered dataframe
return_limbs=False: if you just want the limbs in the query returned
return_limb_grouped_branches=True: if you want a dictionary with limbs as keys and lists of branches in the query as values
print_flag=True

Example:
from os import sys
sys.path.append("../../neurd_packages/meshAfterParty/meshAfterParty/")
from importlib import reload

from datasci_tools import pandas_utils as pu
import pandas as pd
from pathlib import Path

compressed_neuron_path = Path("../test_neurons/test_objects/12345_2_soma_practice_decompress")

from neurd import neuron_utils as nru
nru = reload(nru)
from neurd import neuron
neuron = reload(neuron)

from datasci_tools import system_utils as su

with su.suppress_stdout_stderr():
    recovered_neuron = nru.decompress_neuron(filepath=compressed_neuron_path,
                                             original_mesh=compressed_neuron_path)

recovered_neuron

ns = reload(ns)
nru = reload(nru)

list_of_faces = [1038,5763,7063,11405]
branch_threshold = 31000
current_query = "n_faces_branch in @list_of_faces or skeleton_distance_branch > @branch_threshold"
local_dict = dict(list_of_faces=list_of_faces, branch_threshold=branch_threshold)

functions_list = [ns.n_faces_branch, "width", ns.skeleton_distance_branch,
                  ns.skeleton_distance_limb, "n_faces_limb", ns.merge_limbs,
                  ns.limb_error_branches]

returned_output = ns.query_neuron(recovered_neuron,
                                  functions_list,
                                  current_query,
                                  local_dict=local_dict,
                                  return_dataframe=False,
                                  return_limbs=False,
                                  return_limb_grouped_branches=True,
                                  print_flag=False)

Example 2: How to use the local dictionary with a list

ns = reload(ns)

current_functions_list = ["skeletal_distance_from_soma",
                          "no_spine_average_mesh_center",
                          "n_spines",
                          "n_faces_branch"]

function_kwargs = dict(somas=[0], print_flag=False)
query = "skeletal_distance_from_soma > -1 and (limb in @limb_list)"
query_variables_dict = dict(limb_list=['L1','L2','L3'])

limb_branch_dict_df = ns.query_neuron(uncompressed_neuron,
                                      query=query,
                                      function_kwargs=function_kwargs,
                                      query_variables_dict=query_variables_dict,
                                      functions_list=current_functions_list,
                                      return_dataframe=True)

limb_branch_dict = ns.query_neuron(uncompressed_neuron,
                                   query=query,
                                   functions_list=current_functions_list,
                                   query_variables_dict=query_variables_dict,
                                   function_kwargs=function_kwargs,
                                   return_dataframe=False)

neurd.neuron_searching.query_neuron_by_labels(neuron_obj, matching_labels=[], not_matching_labels=None, match_type='all')[source]
neurd.neuron_searching.ray_trace_perc(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.restrict_by_branch_and_upstream_skeletal_length(neuron_obj, limb_branch_dict_restriction=None, plot_initial_limb_branch_restriction=False, branch_skeletal_length_min=6000, plot_branch_skeletal_length_min=False, upstream_skeletal_length_min=10000, plot_upstream_skeletal_length_min=False, include_branch=False)[source]

Purpose: Will restrict a neuron by the skeletal length of individual branches and the amount of skeleton upstream

neurd.neuron_searching.run_limb_function(limb_func, curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]

Will run a generic limb function

class neurd.neuron_searching.run_options(run_type='Limb')[source]

Bases: object

__init__(run_type='Limb')[source]

Purpose: To add wrappers for all the functions so they can operate in generating a neuron's dataframe

Pseudocode: 1) Get all of the functions in the module 2) Filter the functions for only those that have limb in the first arg 3) Send each of those functions through the wrapper 4) Set the function in the module under the new name

neurd.neuron_searching.sibling_angle_max(curr_limb, limb_name=None, comparison_distance=1000, **kwargs)[source]
neurd.neuron_searching.sibling_angle_min(curr_limb, limb_name=None, comparison_distance=1000, **kwargs)[source]
neurd.neuron_searching.skeletal_distance_from_soma(curr_limb, limb_name=None, somas=None, error_if_all_nodes_not_return=True, include_node_skeleton_dist=True, print_flag=False, **kwargs)[source]
neurd.neuron_searching.skeletal_distance_from_soma_excluding_node(curr_limb, limb_name=None, somas=None, error_if_all_nodes_not_return=True, print_flag=False, **kwargs)[source]
neurd.neuron_searching.skeletal_length(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.skeletal_length_downstream(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.skeleton_dist_match_ref_vector(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.skeleton_distance_branch(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.skeleton_distance_limb(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.skeleton_perc_match_ref_vector(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.soma_starting_angle(curr_limb, limb_name=None, **kwargs)[source]

will compute the angle in degrees between the vector pointing straight to the top of the volume and the vector pointing from the middle of the soma to the starting coordinate of the limb

neurd.neuron_searching.spine_density(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.spines_per_skeletal_length(branch, limb_name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_closer_to_downstream_endpoint_than_upstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_density(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_density_offset_endpoint_upstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_density_post(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_density_post_near_endpoint_downstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_density_post_offset_endpoint_upstream(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_density_pre(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_post_perc(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_post_perc_downstream(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.synapse_pre_perc(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.synapse_pre_perc_downstream(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.test_limb(curr_limb, limb_name=None, **kwargs)[source]
neurd.neuron_searching.total_upstream_skeletal_length(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.two_children_angle(curr_limb, limb_name=None, comparison_distance=1000, **kwargs)[source]
neurd.neuron_searching.upstream_axon_width(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.upstream_node_has_label(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.upstream_node_is_apical_shaft(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.upstream_skeletal_length(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.width(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.width_jump(curr_limb, limb_name=None, width_name='no_bouton_median', width_name_backup='no_spine_median_mesh_center', width_name_backup_2='median_mesh_center', **kwargs)[source]

Purpose: To measure the width jump from the upstream node to the current node

Effect: For axon, just seemed to pick up on the short segments and ones that had boutons that were missed

neurd.neuron_searching.width_jump_from_upstream_min(curr_limb, limb_name=None, limb_branch_dict_restriction=None, **kwargs)[source]
neurd.neuron_searching.width_neuron(curr_branch, name=None, branch_name=None, **kwargs)[source]
neurd.neuron_searching.width_new(branch, limb_name=None, branch_name=None, width_new_name='no_spine_mean_mesh_center', width_new_name_backup='no_spine_median_mesh_center', **kwargs)[source]

neurd.neuron_simplification module

For functions that operate over the whole neuron object

neurd.neuron_simplification.all_concept_network_data_updated(limb_obj)[source]

Purpose: To revise the all concept network data for a limb object, assuming after the concept network has been reset

neurd.neuron_simplification.branch_idx_map_from_branches_to_delete_on_limb(limb_obj, branches_to_delete, verbose=False)[source]

Purpose: To generate a mapping dictionary from nodes to delete

Ex:
from neurd import neuron_simplification as nsimp
nsimp.branch_idx_map_from_branches_to_delete_on_limb(
    limb_obj,
    branches_to_delete=[0,1,5],
    verbose=True
)
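The mapping this produces can be sketched as follows, on plain integers; the exact return format is an assumption, but the idea is that surviving branch indices are renumbered consecutively and deleted ones drop out:

```python
# Hypothetical sketch: old branch index -> new branch index after deletion,
# with deleted branches mapped to None.
def branch_idx_map(n_branches, branches_to_delete):
    delete = set(branches_to_delete)
    idx_map, new_idx = {}, 0
    for old_idx in range(n_branches):
        if old_idx in delete:
            idx_map[old_idx] = None
        else:
            idx_map[old_idx] = new_idx  # shift down past deleted slots
            new_idx += 1
    return idx_map
```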

neurd.neuron_simplification.branching_simplification(neuron_obj, return_copy=True, plot_floating_end_nodes_limb_branch_dict=False, plot_final_neuron_floating_endpoints=False, return_before_combine_path_branches=False, plot_downstream_path_limb_branch=False, plot_final_neuron_path=False, verbose_merging=False, verbose=False, plot_after_simplification=False, **kwargs)[source]

Purpose: Total simplification of neuron object where 1) eliminates floating end nodes 2) simplifies path on neuron object

neurd.neuron_simplification.combine_path_branches(neuron_obj, plot_downstream_path_limb_branch=False, verbose=True, plot_final_neuron=False, return_copy=True)[source]

Purpose: To combine all branches that are along a non-branching path into one branch in neuron object

Pseudocode:
1) Find all nodes with one downstream node (call these "ups")
2) For each limb: combine the branches and pass back the ones to delete
3) Delete all branches on limbs that need deletion and pass back the neuron object

neurd.neuron_simplification.combine_path_branches_on_limb(limb_obj, one_downstream_node_branches=None, verbose=True, return_branches_to_delete=True, inplace=False)[source]

Purpose: To combine all branches that are along a non-branching path into one branch FOR JUST ONE LIMB

Pseudocode:
1) Find all the nodes with only one child, if not already passed
2) Get all the children of ups (for that branch) and convert into one list
3) Find the connected components; for each connected component:

  a. Order the branches from most upstream to least

  b. Determine the most upstream node

  c. Combine each downstream node sequentially with the upstream node

  d. Add all downstream nodes to the branches to delete
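The steps above can be sketched on a plain child-list graph; this illustrative version computes only the set of branches to delete (those absorbed into the head of each non-branching run), and all names are assumptions:

```python
# Walk each non-branching run (nodes with exactly one child) from its most
# upstream member and mark the downstream members for deletion.
def branches_to_delete_on_paths(children):
    """children: branch_idx -> list of downstream branch indices."""
    one_child = {b for b, cs in children.items() if len(cs) == 1}
    absorbed = set()
    for head in sorted(one_child):
        if head in absorbed:
            continue  # already merged into a more upstream head
        node = head
        while node in one_child:  # follow the non-branching run downstream
            child = children[node][0]
            absorbed.add(child)
            node = child
    return absorbed
```

For the chain 0 -> 1 -> 2 -> {3, 4}, branches 1 and 2 are absorbed into branch 0 and marked for deletion, while the fork at 2 ends the run.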

neurd.neuron_simplification.delete_branches_from_limb(neuron_obj, limb_idx, branches_to_delete, verbose=True)[source]

Purpose: To adjust a whole limb object after floating branch pieces or path-branch combinations are deleted (so there is no change to the whole limb mesh)

Pseudocode: 1) Find a mapping of old node names to new node names 2) Rename the nodes in the concept network 3) Delete the nodes from the concept network 4) Fix all of the starting network info using the name map

neurd.neuron_simplification.delete_branches_from_neuron(neuron_obj, limb_branch_dict, plot_final_neuron=False, verbose=False, inplace=False)[source]

Purpose: To delete a limb_branch_dict from a neuron object if there is no mesh loss

Pseudocode:

neurd.neuron_simplification.floating_end_nodes_limb_branch(neuron_obj, limb_branch_dict_restriction='dendrite', width_max=300, max_skeletal_length=7000, min_distance_from_soma=10000, return_df=False, verbose=False, plot=False)[source]

Purpose: To find a limb branch dict of pieces that were probably stitched to the mesh but that we probably don't want splitting the skeleton

neurd.neuron_simplification.merge_floating_end_nodes_to_parent(neuron_obj, floating_end_nodes_limb_branch_dict=None, plot_floating_end_nodes_limb_branch_dict=False, add_merge_label=True, verbose=True, plot_final_neuron=False, return_copy=True, **kwargs)[source]

Purpose: To combine the floating end nodes with their parent branch

Pseudocode:
1) Find all the floating end nodes
2) For each limb and branch that is a floating end node:

  a. Find the parent node

  b. Combine it with the parent node

3) Create a new limb object by deleting all the end nodes

neurd.neuron_simplification.reset_concept_network_branch_endpoints(limb_obj, verbose=False)[source]

Purpose: To recalculate endpoints of branches on concept network

neurd.neuron_statistics module

neurd.neuron_statistics.angle_from_top(vector, vector_pointing_to_top=array([0, -1, 0]), verbose=True)[source]

Purpose: Will find the angle between a vector and the vector pointing towards the top of the volume
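A self-contained sketch of the computation (stdlib only, no numpy), using the default top vector [0, -1, 0] shown in the signature:

```python
import math

# Angle in degrees between `vector` and the vector pointing to the top of
# the volume; the dot product is clamped to guard against rounding error.
def angle_from_top(vector, top=(0.0, -1.0, 0.0)):
    dot = sum(a * b for a, b in zip(vector, top))
    norm = (math.sqrt(sum(a * a for a in vector))
            * math.sqrt(sum(b * b for b in top)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```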

neurd.neuron_statistics.branch_stats_dict_from_df(df, limb_name, branch_idx)[source]
Ex:
limb_df = nst.stats_df(neuron_obj,
    functions_list=[eval(f"lu.{k}_{ns.limb_function_append_name}")
                    for k in ctcu.branch_attrs_limb_based_for_G])

limb_name = "L0"
branch_name = 4
nst.branch_stats_dict_from_df(limb_df, limb_name, branch_name)

neurd.neuron_statistics.branch_stats_over_limb_branch(neuron_obj, limb_branch_dict, features=('skeletal_length', 'width_with_spines', 'width_no_spines'), stats_to_compute=('mean', 'median', 'percentile_70'), verbose=False)[source]

Purpose: to compute some stats over a limb branch

Things want to find out about dendrites:

  • widths

  • lengths

and then summary statistics about it - mean/median - 70th percentile

neurd.neuron_statistics.centroid_stats_from_neuron_obj(neuron_obj, voxel_adjustment_vector=None, include_volume=True)[source]
neurd.neuron_statistics.child_angles(limb_obj, branch_idx, verbose=False, comparison_distance=1500)[source]

Purpose: To measure all of the angles between the children nodes

Pseudocode: 1) Get the downstream nodes --> if none or one then return an empty dictionary

For all downstream nodes: 2) choose one of the downstream nodes and send to nru.find_sibling_child_skeleton_angle 3) create a dictionary with the nodes in a tuple as the key and the angle between them as the values

currently errors if there are more than 2 children

Ex: nst.child_angles(limb_obj=neuron_obj[6], branch_idx=22, verbose=False)

neurd.neuron_statistics.children_axon_width(limb_obj, branch_idx, verbose=False, return_dict=True)[source]

Computes the axon width of all the children

neurd.neuron_statistics.children_axon_width_max(limb_obj, branch_idx, verbose=False, **kwargs)[source]
neurd.neuron_statistics.children_feature(limb_obj, branch_idx, feature_func, verbose=False, return_dict=True, **kwargs)[source]

To compute a feature over all of the children nodes of a target branch

neurd.neuron_statistics.children_skeletal_lengths(limb_obj, branch_idx, verbose=False, return_dict=True)[source]

Purpose: To generate the downstream skeletal lengths of all children

Pseudocode: 1) Find the downstream nodes 2) Compute the downstream skeletal length for each 3) return as dictionary

neurd.neuron_statistics.children_skeletal_lengths_min(limb_obj, branch_idx, verbose=False)[source]
neurd.neuron_statistics.compute_edge_attributes_around_node(G, edge_functions, edge_functions_args={}, nodes_to_compute=None, arguments_for_all_edge_functions=None, verbose=False, set_default_at_end=True, default_value_at_end=None, **kwargs)[source]

Purpose: To use all the edges around a node to compute edge features

neurd.neuron_statistics.compute_edge_attributes_globally(G, edge_functions, edges_to_compute=None, arguments_for_all_edge_functions=None, verbose=False, set_default_at_end=True, default_value_at_end=None, **kwargs)[source]

Purpose: to compute edge attributes that need the whole graph to be computed

neurd.neuron_statistics.compute_edge_attributes_locally(G, limb_obj, nodes_to_compute, edge_functions, arguments_for_all_edge_functions=None, verbose=False, directional=False, set_default_at_end=True, default_value_at_end=None, **kwargs)[source]

Purpose: To iterate over graph edges and compute edge properties and store

Pseudocode: For each nodes to compute:

get all of the edges for that node For each downstream partner:

For each function:

compute the value and store it in the edge

Ex:
G = complete_graph_from_node_ids(all_branch_idx)

nodes_to_compute = [upstream_branch]
edge_functions = dict(sk_angle=nst.parent_child_sk_angle,
                      width_diff=nst.width_diff,
                      width_diff_percentage=nst.width_diff_percentage)

compute_edge_attributes_between_nodes(G,
                                      nodes_to_compute,
                                      edge_functions,
                                      verbose=True,
                                      directional=False)

neurd.neuron_statistics.compute_edge_attributes_locally_upstream_downstream(limb_obj, upstream_branch, downstream_branches, offset=1500, comparison_distance=2000, plot_extracted_skeletons=False, concept_network_comparison_distance=10000, synapse_density_diff_type='synapse_density_pre', n_synapses_diff_type='synapses_pre')[source]

To compute a graph storing the values for the edges between the nodes

neurd.neuron_statistics.compute_node_attributes(G, limb_obj, node_functions, verbose=False)[source]

Purpose: To compute node attributes given: - a function - arguments for the function - nodes to compute for (so it can explicitly be done for upstream and downstream)

Each of this will be stored in a list of dictionaries

neurd.neuron_statistics.compute_node_attributes_upstream_downstream(G, limb_obj, upstream_branch, downstream_branches, node_functions=None, verbose=False)[source]

Purpose: To attach node properties to a graph that references branches on a limb

neurd.neuron_statistics.coordinates_function_list(coordinates=None)[source]
neurd.neuron_statistics.coordinates_stats_df(neuron_obj, coordinates=None, limb_branch_dict_restriction=None, verbose=False)[source]

Purpose: To create a dataframe of centers for a limb branch

neurd.neuron_statistics.distance_from_soma(limb_obj, branch_idx, include_node_skeleton_dist=False, verbose=False, **kwargs)[source]

Purpose: To find the distance away from the soma for a given set of branches

Ex: nst.distance_from_soma(limb_obj,190)

neurd.neuron_statistics.distance_from_soma_candidate(neuron_obj, candidate)[source]

Purpose: Will return the distance of a candidate

neurd.neuron_statistics.distance_from_soma_euclidean(limb_obj, branch_idx)[source]

Will return the euclidean distance of the upstream endpoint to the starting coordinate of the limb

Ex:
branch_idx = 0
limb_obj = neuron_obj_proof[0]
nst.distance_from_soma_euclidean(limb_obj, branch_idx)

neurd.neuron_statistics.downstream_dist_match_ref_vector_over_candidate(neuron_obj, candidate, verbose=False, max_angle=65, **kwargs)[source]

Purpose: Measure the amount of downstream branch length that is at a certain angle

1) Get all of the nodes that are downstream of all of the branches 2) Add up the amount of distance on each branch that matches the angle specified

Ex: nst.downstream_dist_match_ref_vector_over_candidate(neuron_obj, candidate=winning_candidates[0], max_angle=65)

neurd.neuron_statistics.downstream_upstream_diff_of_most_downstream_syn(branch_obj, default_value=0)[source]

Purpose: Determine the difference between the closest downstream dist and the farthest upstream dist

Pseudocode: 1) Get the synapse with min of downstream dist 2) Get the difference between downstream dist and upstream dist 3) Return the difference
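The three steps can be sketched with each synapse represented as an (upstream_dist, downstream_dist) pair, an assumed stand-in for the synapse objects:

```python
# Pick the synapse with the minimum downstream distance and report
# downstream_dist - upstream_dist for it; empty input returns the default.
def downstream_upstream_diff(synapses, default_value=0):
    """synapses: list of (upstream_dist, downstream_dist) tuples."""
    if not synapses:
        return default_value
    up, down = min(synapses, key=lambda s: s[1])  # most downstream synapse
    return down - up
```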

neurd.neuron_statistics.edges_to_delete_from_threshold_and_buffer(G, u, v, edge_attribute='sk_angle', threshold=45, buffer=15, verbose=False, **kwargs)[source]

4) Create definite pairs by looking for edges that meet two criteria: they are within the match threshold, and they beat every other edge by at least the buffer. For those edges, eliminate all edges on those 2 nodes except that edge.

Pseudocode: Iterate through each edge: a) get the current weight of this edge b) get all the other edges that are touching the two nodes and their weights c) Run the following tests on the edge:

  1. Is it within the match limit

  2. Is it less than the other edge weights by the buffer size

If the edge passes both tests, delete all of the other edges from the graph

Ex: edges_to_delete = edges_to_delete_from_threshold_and_buffer(G,

225, 226,

threshold=100, buffer= 13,

verbose = True)
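A runnable sketch of the threshold-and-buffer rule on a networkx graph (the graph layout and helper name are assumptions; the real function operates on neurd's limb graphs):

```python
import networkx as nx

def edges_to_delete_sketch(G, threshold=45, buffer=15, edge_attribute="sk_angle"):
    """For every edge within `threshold` that beats all competing edges on
    its two endpoints by at least `buffer`, mark those competitors for deletion."""
    to_delete = set()
    for u, v, data in G.edges(data=True):
        w = data[edge_attribute]
        # all other edges touching either endpoint of this edge
        competitors = [(a, b) for n in (u, v) for a, b in G.edges(n)
                       if {a, b} != {u, v}]
        comp_weights = [G[a][b][edge_attribute] for a, b in competitors]
        if w <= threshold and all(w + buffer <= cw for cw in comp_weights):
            to_delete.update(frozenset((a, b)) for a, b in competitors)
    return [tuple(e) for e in to_delete]
```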

neurd.neuron_statistics.edges_to_delete_on_node_above_threshold_if_one_below(G, node_edges, threshold, edge_attribute='sk_angle', verbose=False)[source]

Purpose: To mark edges that should be deleted if there is another node that is already below the threshold

Pseudocode: 1) Get the values of the attribute for all of the edges 2) Get the number of these values below the threshold 3) If at least one value below, then get the edges that are above the threshold, turn them into an edge_attribute dict and return
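The three steps reduce to a small filter; here node_edges is a plain list of (u, v, attrs) tuples, an assumed stand-in for the edge data the real function receives:

```python
def edges_above_threshold_if_one_below(node_edges, threshold, edge_attribute="sk_angle"):
    """Return the edges at or above `threshold`, but only if at least one
    edge on the node is already below it; otherwise keep everything."""
    values = [attrs[edge_attribute] for _, _, attrs in node_edges]
    if not any(v < threshold for v in values):
        return []
    return [(u, v) for (u, v, attrs) in node_edges
            if attrs[edge_attribute] >= threshold]
```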

neurd.neuron_statistics.euclidean_distance_close_to_soma_limb_branch(neuron_obj, distance_threshold=10000, verbose=False, plot=False)[source]
neurd.neuron_statistics.euclidean_distance_farther_than_soma_limb_branch(neuron_obj, distance_threshold=10000, verbose=False, plot=False)[source]
neurd.neuron_statistics.euclidean_distance_from_soma_limb_branch(neuron_obj, less_than=False, distance_threshold=10000, endpoint_type='downstream', verbose=False, plot=False)[source]

Purpose: Find limb branch dict within or farther than a certain euclidean distance from all the soma pieces

Pseudocode: 1) get the upstream endpoints of all

neurd.neuron_statistics.farthest_dendrite_branch_from_soma(neuron_obj)[source]
neurd.neuron_statistics.farthest_distance_from_skeleton_to_mesh(obj, verbose=False, plot=False, **kwargs)[source]

Purpose: To find the coordinate of the skeleton that has the largest closest distance to the mesh

Ex: farthest_distance_from_skeleton_to_mesh( branch_obj, verbose = True, plot = True )

neurd.neuron_statistics.features_from_neuron_skeleton_and_soma_center(neuron_obj, limb_branch_dict=None, neuron_obj_aligned=None, **kwargs)[source]
neurd.neuron_statistics.features_from_skeleton_and_soma_center(skeleton, soma_center, short_threshold=6000, long_threshold=100000, volume_divisor=1000000000000000, verbose=False, name_prefix=None, features_to_exclude=None, skeleton_aligned=None, in_um=True)[source]

Purpose: To calculate features about a skeleton representing a subset of the neuron ( features specifically in relation to soma)

neurd.neuron_statistics.filter_limbs_by_soma_starting_angle(neuron_obj, soma_angle, angle_less_than=True, verbose=False, return_int_names=True)[source]

Purpose: Will return the limb names that satisfy the soma angle requirement

Ex: nst.filter_limbs_by_soma_starting_angle(neuron_obj,60,verbose=True)

neurd.neuron_statistics.find_parent_child_skeleton_angle_upstream_downstream(limb_obj, branch_1_idx, branch_2_idx, branch_1_type='upstream', branch_2_type='downstream', verbose=False, offset=1500, min_comparison_distance=1000, comparison_distance=2000, skeleton_resolution=100, plot_extracted_skeletons=False, use_upstream_skeleton_restriction=True, use_downstream_skeleton_restriction=True, nodes_to_exclude=None, **kwargs)[source]

Purpose: to find the skeleton angle between a designated upstream and downstream branch

Ex: nru.find_parent_child_skeleton_angle_upstream_downstream(

limb_obj = neuron_obj[0],

branch_1_idx = 223, branch_2_idx = 224,

plot_extracted_skeletons = True

)

Ex: branch_idx = 140 nru.find_parent_child_skeleton_angle_upstream_downstream(limb_obj,

nru.upstream_node(limb_obj,branch_idx),branch_idx, verbose = True, plot_extracted_skeletons=True, comparison_distance=40000, use_upstream_skeleton_restriction=True)

neurd.neuron_statistics.fork_divergence(limb_obj, branch_idx, downstream_idxs=None, skeleton_distance=10000, error_not_2_downstream=True, total_downstream_skeleton_length_threshold=0, individual_branch_length_threshold=2000, skip_value=inf, plot_fork_skeleton=False, comparison_distance=400, skeletal_segment_size=40, plot_restrictions=False, combining_function=<function mean>, nodes_to_exclude=None, verbose=False)[source]

Purpose: To run the fork divergence on the children of an upstream node

Pseudocode: 1) Get downstream nodes 2) Apply skeletal length restrictions if any 3) compute the fork divergence from the skeletons

neurd.neuron_statistics.fork_divergence_from_branch(limb_obj, branch_idx, verbose=False, error_not_2_downstream=True, total_downstream_skeleton_length_threshold=4000, individual_branch_length_threshold=3000, skip_value=inf, plot_fork_skeleton=False, upstream_sk_color='red', downstream_sk_colors=None, comparison_distance=400, skeletal_segment_size=40, plot_restrictions=False, combining_function=<function mean>, **kwargs)[source]

Purpose: To compute the fork divergence for the two forks downstream of a branch

Pseudocode: 1) Get the branch where the error is 2) Get all the upstream branch and all of the downstream branches of that 3) Measure the sibling angles 4) Collect together the skeletons in a list 5) Run the fork splitting function

Note: should only be done on 2 forks

neurd.neuron_statistics.fork_divergence_from_skeletons(upstream_skeleton, downstream_skeletons, downstream_starting_endpoint=None, comparison_distance=500, skeletal_segment_size=50, plot_restrictions=False, combining_function=<function sum>, verbose=False)[source]

Purpose: To compute the number for the fork splitting

Pseudocode: 1) Find the intersection point of all 3 branches 2) For each of the 2 downstream branches:

  1. restrict the skeleton to a certain distance from the start

  2. discretize the skeletons so they have x pieces

  3. measure the distance between each indexed point

  4. aggregate the distances in one way (median, mean)

Application: If below a certain value then can indicate incorrect branching

Ex: from neurd import neuron_statistics as nst nst.fork_divergence_from_skeletons(upstream_skeleton = upstream_sk,

downstream_skeletons = downstream_sk, comparison_distance = 500, skeletal_segment_size = 50, plot_restrictions = True, combining_function = np.mean)
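The restrict/discretize/aggregate steps can be sketched on raw (N, 3) skeleton point arrays (resampling by cumulative arc length here is an assumption about how the discretization works, and the helper name is illustrative):

```python
import numpy as np

def fork_divergence_sketch(skel_a, skel_b, n_points=20, combine=np.mean):
    """skel_a, skel_b: (N, 3) ordered skeleton points sharing a start.
    Resample both to n_points by cumulative arc length, then aggregate
    the pointwise distances between matching samples."""
    def resample(pts):
        pts = np.asarray(pts, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])        # arc length at each point
        t_new = np.linspace(0.0, t[-1], n_points)
        return np.stack([np.interp(t_new, t, pts[:, i]) for i in range(3)], axis=1)
    a, b = resample(skel_a), resample(skel_b)
    return combine(np.linalg.norm(a - b, axis=1))
```

Two perpendicular straight branches of equal length diverge at an average of half the maximum separation; a value near zero would suggest the branches track each other (the "incorrect branching" case the application note mentions).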

neurd.neuron_statistics.fork_min_skeletal_distance(limb_obj, branch_idx, downstream_idxs=None, skeleton_distance=10000, error_not_2_downstream=True, total_downstream_skeleton_length_threshold=0, individual_branch_length_threshold=2000, skip_value=inf, comparison_distance=2000, offset=700, skeletal_segment_size=40, plot_skeleton_restriction=False, plot_min_pair=False, nodes_to_exclude=None, verbose=False)[source]

Purpose: To run the fork divergence on the children of an upstream node

Pseudocode: 1) Get downstream nodes 2) Apply skeletal length restrictions if any 3) compute the fork skeleton min distance

Ex: from neurd import neuron_statistics as nst

upstream_branch = 68 downstream_branches = [55,64] verbose = False div = nst.fork_min_skeletal_distance(limb_obj,upstream_branch,

downstream_idxs = downstream_branches,

total_downstream_skeleton_length_threshold=0, individual_branch_length_threshold = 0,

plot_skeleton_restriction = False, verbose = verbose)

neurd.neuron_statistics.fork_min_skeletal_distance_from_skeletons(downstream_skeletons, comparison_distance=3000, offset=700, skeletal_segment_size=40, verbose=False, plot_skeleton_restriction=False, plot_min_pair=False)[source]

Purpose: To determine the min distance from two diverging skeletons with an offset

neurd.neuron_statistics.get_stat(obj, stat, **kwargs)[source]

Purpose: Will either run the function on the object or, if a string is passed, get that property of the object

Ex: nst.get_stat(limb_obj[0],syu.n_synapses_pre) nst.get_stat(limb_obj[0],"skeletal_length")

neurd.neuron_statistics.is_apical_shaft_in_downstream_branches(limb_obj, branch_idx, all_downstream_nodes=False, verbose=False, **kwargs)[source]

Ex: nst.is_apical_shaft_in_downstream_branches(neuron_obj[1],4,verbose = True)

neurd.neuron_statistics.is_axon_in_downstream_branches(limb_obj, branch_idx, all_downstream_nodes=False, verbose=False, **kwargs)[source]

Ex: nst.is_axon_in_downstream_branches(neuron_obj[1],4,verbose = True)

neurd.neuron_statistics.is_label_in_downstream_branches(limb_obj, branch_idx, label, all_downstream_nodes=False, verbose=False)[source]

Purpose: To test if a label is in the downstream nodes

  1. Get all the downstream labels

  2. return the test if a certain label is in downstream labels

Ex: nst.is_label_in_downstream_branches(neuron_obj[1],5,"apical_shaft",verbose = True)

neurd.neuron_statistics.limb_branch_from_stats_df(df)[source]

Purpose: To convert a dataframe to a limb branch dict

neurd.neuron_statistics.max_layer_distance_above_soma_over_candidate(neuron_obj, candidate, **kwargs)[source]
neurd.neuron_statistics.max_layer_height_over_candidate(neuron_obj, candidate, **kwargs)[source]

Purpose: To determine the maximum height in the layer

neurd.neuron_statistics.min_synapse_dist_to_branch_point(limb_obj, branch_idx, downstream_branches=None, downstream_distance=0, default_value=inf, plot_closest_synapse=False, nodes_to_exclude=None, synapse_type=None, verbose=False)[source]

Purpose: To check if any of the synapses on the branch or downstream branches has a synapse close to the branching point

Pseudocode: 1) Get all of the downstream synapses (not including the current branch) 2) Get all of their distances upstream 3) Get the synapses for the current branch 4) Get all of their distances downstream 5) Concatenate the distances 6) Find the minimum distance (if none then make inf)

Ex: from neurd import neuron_statistics as nst nst.min_synapse_dist_to_branch_point(limb_obj,

branch_idx = 16, downstream_distance = 0, default_value = np.inf, plot_closest_synapse = True, verbose = True)
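A minimal sketch of the distance bookkeeping, with plain distance lists as stand-ins for neurd synapse objects (the helper name and argument layout are assumptions):

```python
import numpy as np

def min_synapse_dist_to_branch_point_sketch(branch_downstream_dists,
                                            child_upstream_dists,
                                            default_value=np.inf):
    """branch_downstream_dists: distances of the current branch's synapses
    to its downstream endpoint (the branch point); child_upstream_dists:
    distances of child-branch synapses from that same point."""
    dists = list(branch_downstream_dists) + list(child_upstream_dists)
    # minimum distance to the branch point, or the default if no synapses
    return min(dists) if dists else default_value
```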

neurd.neuron_statistics.n_small_children(limb_obj, branch_idx, width_maximum=80, verbose=False)[source]

Purpose: Will measure the number of small width immediate downstream nodes

Pseudocode: 1) Find the number of downstream nodes 2) Find the width of the downstream nodes 3) Count how many are below the threshold

Ex: from neurd import neuron_statistics as nst nst.n_small_children(limb_obj = neuron_obj[6],

branch_idx = 5, width_maximum = 80,

verbose = False)

neurd.neuron_statistics.n_synapses_diff(limb_obj, branch_1_idx, branch_2_idx, synapse_type='synapses', branch_1_direction='upstream', branch_2_direction='downstream', comparison_distance=10000, nodes_to_exclude=None, verbose=False, **kwargs)[source]

Purpose: Will return the difference in the number of synapses

neurd.neuron_statistics.n_synapses_downstream(limb_obj, branch_idx, **kwargs)[source]
neurd.neuron_statistics.n_synapses_downstream_total(limb_obj, branch_idx, **kwargs)[source]
neurd.neuron_statistics.n_synapses_downstream_within_dist(limb_obj, branch_idx, distance=5000, plot_synapses=False, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_offset_endpoint_upstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_post_downstream_within_dist(limb_obj, branch_idx, distance=5000, plot_synapses=False, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_pre_downstream_within_dist(limb_obj, branch_idx, distance=5000, plot_synapses=False, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_pre_offset_endpoint_upstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_spine_offset_endpoint_upstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_spine_within_distance_of_endpoint_downstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]
neurd.neuron_statistics.n_synapses_upstream(limb_obj, branch_idx, **kwargs)[source]
neurd.neuron_statistics.n_synapses_upstream_total(limb_obj, branch_idx, **kwargs)[source]
neurd.neuron_statistics.neuron_path_analysis(neuron_obj, N=3, plot_paths=False, return_dj_inserts=True, verbose=False)[source]

Pseudocode: 1) Get all the errored branches For Each Limb: 2) Remove the errored branches 3) Find all branches that are N steps away from starting node (and get the paths) 4) Filter away paths that do not have all degrees of 2 on directed network * Those are the viable paths we would analyze* 5) Extract the statistics

neurd.neuron_statistics.neuron_stats(neuron_obj, stats_to_ignore=None, include_skeletal_stats=False, include_centroids=False, voxel_adjustment_vector=None, cell_type_mode=False, **kwargs)[source]

Purpose: Will compute a wide range of statistics on a neuron object

neurd.neuron_statistics.node_functions_default(upstream_branch, downstream_branches)[source]

To create the default node attributes to compute

neurd.neuron_statistics.parent_child_sk_angle(limb_obj, branch_1_idx, branch_2_idx, **kwargs)[source]
neurd.neuron_statistics.parent_width(limb_obj, branch_idx, width_func=None, verbose=False, **kwargs)[source]
neurd.neuron_statistics.ray_trace_perc(branch_obj, percentile=85)[source]
neurd.neuron_statistics.shortest_distance_from_soma_multi_soma(limb_obj, branches, somas=None, include_node_skeleton_dist=False, verbose=False, return_dict=False, **kwargs)[source]

Purpose: To find the distance of a branch from the soma (if there are multiple somas it will check for shortest distance between all of them)

Ex: nst.shortest_distance_from_soma_multi_soma(neuron_obj_exc_syn_sp[0],190)

neurd.neuron_statistics.sibling_sk_angle(limb_obj, branch_1_idx, branch_2_idx, **kwargs)[source]
neurd.neuron_statistics.skeletal_length_along_path(limb_obj, branch_path)[source]
neurd.neuron_statistics.skeletal_length_downstream(limb_obj, branch_idx, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_statistics.skeletal_length_downstream_total(limb_obj, branch_idx, nodes_to_exclude=None, include_branch_in_dist=True, **kwargs)[source]
neurd.neuron_statistics.skeletal_length_over_candidate(neuron_obj, candidate, **kwargs)[source]
neurd.neuron_statistics.skeletal_length_upstream(limb_obj, branch_idx, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_statistics.skeletal_length_upstream_total(limb_obj, branch_idx, nodes_to_exclude=None, include_branch_in_dist=True, **kwargs)[source]
neurd.neuron_statistics.skeleton_dist_match_ref_vector(limb_obj, branch_idx, max_angle=30, min_angle=None, reference_vector=array([0, -1, 0]), skeleton_resize_distance=8000, plot_branch=False, verbose=False, **kwargs)[source]

Purpose: To return the amount of skeletal distance that matches a comparison vector

Ex: nst.skeleton_dist_match_ref_vector(neuron_obj[0],

11, verbose = True)

neurd.neuron_statistics.skeleton_dist_match_ref_vector_sum_over_branches(limb_obj, branches, max_angle, min_angle=None, direction=None, verbose=False, **kwargs)[source]

Purpose: Find the amount of upstream skeletal distance that matches a certain angle

neurd.neuron_statistics.skeleton_dist_match_ref_vector_sum_over_branches_downstream(limb_obj, branches, max_angle, min_angle=None, verbose=False, **kwargs)[source]

Purpose: To find the skeletal length of the downstream branch portions that match a certain angle

Ex: nst.skeleton_dist_match_ref_vector_sum_over_branches_downstream( limb_obj = n_obj_2[6], branches = [23,14,27], max_angle = 10000, min_angle = 40, verbose = True)

neurd.neuron_statistics.skeleton_dist_match_ref_vector_sum_over_branches_upstream(limb_obj, branches, max_angle, min_angle=None, verbose=False, **kwargs)[source]
neurd.neuron_statistics.skeleton_downstream(limb_obj, branch_idx, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_statistics.skeleton_perc_dist_match_ref_vector(limb_obj, branch_idx, max_angle=30, min_angle=None, reference_vector=array([0, -1, 0]), skeleton_resize_distance=8000, plot_branch=False, verbose=False, **kwargs)[source]

Purpose: To find the percentage of skeleton that is within a certain angle of a comparison vector

Pseudocode: 1) Resize the branch skeleton and order the skeleton 2) For each segment of the skeleton: - extract the vector from the skeleton - find the angle between the reference vector and the segment vector - if the angle is below the threshold then count it as a match 3) return the percentage of matches
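The segment-matching loop can be vectorized on a raw (N, 3) skeleton point array (the helper name is illustrative; this weights matches by segment length, an assumption about the "percentage" being skeletal-distance-based):

```python
import numpy as np

def skeleton_perc_match_sketch(skeleton_pts, reference_vector=(0, -1, 0), max_angle=30):
    """Fraction of total segment length whose direction lies within
    `max_angle` degrees of the reference vector."""
    pts = np.asarray(skeleton_pts, dtype=float)
    vecs = np.diff(pts, axis=0)                       # one vector per segment
    lengths = np.linalg.norm(vecs, axis=1)
    ref = np.asarray(reference_vector, dtype=float)
    ref = ref / np.linalg.norm(ref)
    cosines = (vecs @ ref) / np.where(lengths > 0, lengths, 1.0)
    angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return lengths[angles <= max_angle].sum() / lengths.sum()
```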

neurd.neuron_statistics.skeleton_perc_match_ref_vector(limb_obj, branch_idx, max_angle=30, min_angle=None, reference_vector=array([0, -1, 0]), skeleton_resize_distance=8000, plot_branch=False, verbose=False, **kwargs)[source]

Purpose: To return the percentage of skeletal distance that matches a comparison vector

neurd.neuron_statistics.skeleton_stats_axon(neuron_obj, **kwargs)[source]
neurd.neuron_statistics.skeleton_stats_compartment(neuron_obj, compartment, include_compartmnet_prefix=True, include_centroids=False, **kwargs)[source]
neurd.neuron_statistics.skeleton_stats_dendrite(neuron_obj, **kwargs)[source]
neurd.neuron_statistics.skeleton_stats_from_neuron_obj(neuron_obj, include_centroids=True, voxel_adjustment_vector=None, verbose=False, limb_branch_dict=None, neuron_obj_aligned=None)[source]

Compute all the statistics for a neuron's skeleton (should have only one soma)

neurd.neuron_statistics.skeleton_upstream(limb_obj, branch_idx, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_statistics.soma_distance_branch_set(neuron_obj, attr_name, attr_func)[source]

Purpose: Will set the skeletal distance to soma on each branch

Pseudocode: 1) iterate through all of the limbs and branches 2) Find the distance from the soma and store it

neurd.neuron_statistics.soma_distance_euclidean_branch_set(neuron_obj, attr_name='soma_distance_euclidean')[source]
neurd.neuron_statistics.soma_distance_skeletal_branch_set(neuron_obj, attr_name='soma_distance_skeletal')[source]
neurd.neuron_statistics.soma_starting_angle(limb_obj=None, neuron_obj=None, limb_idx=None, soma_idx=0, soma_group_idx=None, soma_center=None, y_vector=array([0, -1, 0]))[source]
neurd.neuron_statistics.soma_starting_vector(limb_obj=None, neuron_obj=None, limb_idx=None, soma_idx=0, soma_group_idx=None, soma_center=None)[source]

Will find the angle between the vector pointing to the top of the volume and the vector from the soma center to the starting skeleton coordinate

neurd.neuron_statistics.stats_df(neuron_obj, functions_list=None, query=None, limb_branch_dict_restriction=None, function_kwargs=None, include_coordinates=False, coordinates=None, check_nans=False)[source]

Purpose: To return the stats on neuron branches that is used by the neuron searching to filter down

Ex: from neurd import neuron_statistics as nst

limb_obj = neuron_obj[6]

s_df = nst.stats_df(

neuron_obj, functions_list = [ns.width_new, ns.skeletal_length, ns.n_synapses_post_downstream], limb_branch_dict_restriction=dict(L6=limb_obj.get_branch_names())

)

s_df

neurd.neuron_statistics.stats_dict_over_limb_branch(neuron_obj, limb_branch_dict=None, stats_to_compute=('skeletal_length', 'area', 'mesh_volume', 'n_branches'))[source]

Purpose: To get statistics over a limb branch dict

Stats to retrieve: 1) skeletal length 2) surface area 3) volume

Ex: from neurd import neuron_statistics as nst nst.stats_dict_over_limb_branch(

neuron_obj = neuron_obj_proof, limb_branch_dict = apu.apical_limb_branch_dict(neuron_obj_proof))

neurd.neuron_statistics.synapse_closer_to_downstream_endpoint_than_upstream(branch_obj)[source]

Purpose: Will indicate if there is a synapse that is closer to the downstream endpoint than upstream endpoint

neurd.neuron_statistics.synapse_density_diff(limb_obj, branch_1_idx, branch_2_idx, synapse_type='synapse_density', branch_1_direction='upstream', branch_2_direction='downstream', comparison_distance=10000, nodes_to_exclude=None, verbose=False, **kwargs)[source]

Purpose: Will return the difference in synapse density

neurd.neuron_statistics.synapse_density_offset_endpoint_upstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]
neurd.neuron_statistics.synapse_density_post_near_endpoint_downstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]

Purpose: To get the synapse density near the downstream endpoint

neurd.neuron_statistics.synapse_density_post_offset_endpoint_upstream(branch_obj, distance=6000, verbose=False, **kwargs)[source]
neurd.neuron_statistics.synapses_downstream(limb_obj, branch_idx, nodes_to_exclude=None, synapse_type='synapses', **kwargs)[source]
neurd.neuron_statistics.synapses_downstream_total(limb_obj, branch_idx, distance=inf, **kwargs)[source]
neurd.neuron_statistics.synapses_downstream_within_dist(limb_obj, branch_idx, synapse_type='synapses', distance=5000, plot_synapses=False, verbose=False, **kwargs)[source]

Purpose: To find the number of downstream synapses within a certain downstream distance

neurd.neuron_statistics.synapses_post_downstream_within_dist(limb_obj, branch_idx, distance=5000, plot_synapses=False, verbose=False, **kwargs)[source]
neurd.neuron_statistics.synapses_pre_downstream_within_dist(limb_obj, branch_idx, distance=5000, plot_synapses=False, verbose=False, **kwargs)[source]
neurd.neuron_statistics.synapses_upstream(limb_obj, branch_idx, nodes_to_exclude=None, synapse_type='synapses', **kwargs)[source]
neurd.neuron_statistics.synapses_upstream_total(limb_obj, branch_idx, distance=inf, **kwargs)[source]
neurd.neuron_statistics.total_upstream_skeletal_length(limb_obj, branch_idx, include_branch=False, **kwargs)[source]

Purpose: To get all of the skeleton length from current branch to starting branch

neurd.neuron_statistics.trajectory_angle_from_start_branch_and_subtree(limb_obj, subtree_branches, start_branch_idx=None, nodes_to_exclude=None, downstream_distance=10000, plot_skeleton_before_restriction=False, plot_skeleton_after_restriction=False, plot_skeleton_endpoints=False, return_max_min=False, return_n_angles=False, verbose=False)[source]

Purpose: To figure out the initial trajectory of a subtree of branches if given the initial branch of the subtree and all the branches of the subtree

Pseudocode: 1) Get all branches that are within a certain distance of the starting branch 2) Get the upstream coordinate of start branch 3) Restrict the skeleton to the downstream distance 4) Find all endpoints of the restricted skeleton 5) Calculate the vectors and angle from the top of the start coordinate and all the endpoints

Ex: nst.trajectory_angle_from_start_branch_and_subtree( limb_obj = neuron_obj_exc_syn_sp[limb_idx], start_branch_idx = 31, subtree_branches = [31] + [i for i in range(102) if i != 31],

nodes_to_exclude = nodes_to_exclude, plot_skeleton_endpoints = plot_skeleton_endpoints, return_max_min=True, return_n_angles=True

)
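Step 5 of the pseudocode (vectors and angles from the start coordinate to each restricted-skeleton endpoint) can be sketched with numpy (the helper name and the (0, -1, 0) "up" convention follow the reference vectors used elsewhere in this module; the function itself is illustrative):

```python
import numpy as np

def trajectory_angles_sketch(start_coord, endpoints, up_vector=(0, -1, 0)):
    """Angle (degrees) between the volume's 'up' vector and the vector from
    the start coordinate to each restricted-skeleton endpoint."""
    start = np.asarray(start_coord, dtype=float)
    up = np.asarray(up_vector, dtype=float)
    angles = []
    for ep in np.asarray(endpoints, dtype=float):
        v = ep - start
        cos = (v @ up) / (np.linalg.norm(v) * np.linalg.norm(up))
        angles.append(float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))))
    return angles
```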

neurd.neuron_statistics.upstream_axon_width(limb_obj, branch_idx, default=inf, **kwargs)[source]

Purpose: To return the width of the upstream branch

Pseudocode: 1) Get the upstream branch 2) return the width

neurd.neuron_statistics.upstream_endpoint_branch_set(neuron_obj, attr_name='upstream_endpoint')[source]
neurd.neuron_statistics.upstream_node_is_apical(limb_obj, branch_idx, verbose, **kwargs)[source]
neurd.neuron_statistics.upstream_node_is_apical_shaft(limb_obj, branch_idx, verbose, **kwargs)[source]
neurd.neuron_statistics.upstream_skeletal_length(limb_obj, branch_idx, default=inf, **kwargs)[source]

Purpose: To return the skeletal length of the upstream branch

Pseudocode: 1) Get the upstream branch 2) return the skeletal length

neurd.neuron_statistics.width_basic(branch_obj)[source]
neurd.neuron_statistics.width_diff(limb_obj, branch_1_idx, branch_2_idx, width_func=None, branch_1_direction='upstream', branch_2_direction='downstream', comparison_distance=10000, nodes_to_exclude=None, return_individual_widths=False, verbose=False)[source]
neurd.neuron_statistics.width_diff_basic(limb_obj, branch_1_idx, branch_2_idx, width_func=<function width_new>)[source]

Ex: from neurd import neuron_statistics as nst nst.width_diff_basic(n_obj_syn[0],1,2)

neurd.neuron_statistics.width_diff_percentage(limb_obj, branch_1_idx, branch_2_idx, width_func=None, branch_1_direction='upstream', branch_2_direction='downstream', comparison_distance=10000, nodes_to_exclude=None, verbose=False)[source]
neurd.neuron_statistics.width_diff_percentage_basic(limb_obj, branch_1_idx, branch_2_idx, width_func=<function width_new>, verbose=False)[source]

Ex: from neurd import neuron_statistics as nst nst.width_diff_percentage_basic(n_obj_syn[0],1,2)

neurd.neuron_statistics.width_downstream(limb_obj, branch_idx, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_statistics.width_max(limb_obj, branches_idxs, width_func=None)[source]
neurd.neuron_statistics.width_near_branch_endpoint(limb_obj, branch_idx, endpoint=None, offset=0, comparison_distance=2000, skeleton_segment_size=1000, verbose=False)[source]

Purpose: To compute the width of a branch around a comparison distance and offset of an endpoint on its skeleton

neurd.neuron_statistics.width_new(branch, width_new_name='no_spine_mean_mesh_center', width_new_name_backup='no_spine_median_mesh_center', **kwargs)[source]
neurd.neuron_statistics.width_over_candidate(neuron_obj, candidate, **kwargs)[source]
neurd.neuron_statistics.width_upstream(limb_obj, branch_idx, nodes_to_exclude=None, **kwargs)[source]
neurd.neuron_statistics.width_weighted_over_branches(limb_obj, branches, width_func=None, verbose=False)[source]

Purpose: Find weighted width over branches

Ex: nst.width_weighted_over_branches(n_obj_2[6],

branches = [24,2])
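A "weighted width over branches" plausibly means a skeletal-length-weighted mean; a minimal sketch under that assumption (plain arrays stand in for neurd branch objects):

```python
import numpy as np

def width_weighted_sketch(widths, skeletal_lengths):
    """Skeletal-length-weighted mean width over a set of branches."""
    w = np.asarray(widths, dtype=float)
    sl = np.asarray(skeletal_lengths, dtype=float)
    return float((w * sl).sum() / sl.sum())
```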

neurd.neuron_utils module

Purpose of this file: To help the development of the neuron object 1) Concept graph methods 2) Preprocessing pipeline for creating the neuron object from a mesh

neurd.neuron_utils.add_branch_label(neuron_obj, limb_branch_dict, labels)[source]

Purpose: Will go through and apply a label to the branches specified

neurd.neuron_utils.add_limb_branch_combined_name_to_df(df, limb_column='limb_idx', branch_column='branch_idx', limb_branch_column='limb_branch')[source]

Purpose: To add the limb_branch column to a dataframe

Pseudocode

neurd.neuron_utils.align_and_restrict_branch(base_branch, common_endpoint=None, width_name='no_spine_median_mesh_center', width_name_backup='no_spine_median_mesh_center', offset=500, comparison_distance=2000, skeleton_segment_size=1000, verbose=False)[source]
neurd.neuron_utils.align_array(array, align_matrix=None, **kwargs)[source]
neurd.neuron_utils.align_attribute(obj, attribute_name, soma_center=None, rotation=None, align_matrix=None)[source]
neurd.neuron_utils.align_mesh(mesh, align_matrix=None, **kwargs)[source]
neurd.neuron_utils.align_neuron_obj_from_align_matrix(neuron_obj, align_matrix=None, align_synapses=True, verbose=False, align_array=<function align_array>, align_mesh=<function align_mesh>, align_skeleton=<function align_skeleton>, in_place=False, **kwargs)[source]
neurd.neuron_utils.align_neuron_objs_at_soma(neuron_objs, center=None, plot=False, inplace=False, verbose=True)[source]

Purpose: Align two neuron objects at their soma

  1. Get the mesh centers of both

  2. Find the translation needed

  3. Adjust all attributes by that amount
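The three steps amount to a pure translation; a sketch on raw (N, 3) point clouds standing in for neuron meshes (the helper name is illustrative, and the real function adjusts every neuron attribute, not just one array):

```python
import numpy as np

def align_point_clouds_at_center(clouds, center=None):
    """Translate each (N, 3) point cloud so its centroid lands on `center`
    (default: the centroid of the first cloud)."""
    clouds = [np.asarray(c, dtype=float) for c in clouds]
    if center is None:
        center = clouds[0].mean(axis=0)      # stand-in for the soma mesh center
    return [c + (center - c.mean(axis=0)) for c in clouds]
```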

neurd.neuron_utils.align_skeleton(skeleton, align_matrix=None, **kwargs)[source]
neurd.neuron_utils.all_concept_network_data_to_dict(all_concept_network_data)[source]
neurd.neuron_utils.all_concept_network_data_to_limb_network_stating_info(all_concept_network_data)[source]

Purpose: Will convert the concept network data list of dictionaries into the dictionary representation of only the limb touching vertices and endpoints of the limb_network_stating_info in the preprocessed data

Pseudocode: Iterate through all of the network dicts and store as soma –> soma_group_idx –> dict(touching_verts, endpoint)

(stored in the concept network as touching_soma_vertices and starting_coordinate)

neurd.neuron_utils.all_donwstream_branches_from_limb_branch(neuron_obj, limb_branch_dict, include_limb_branch_dict=True, verbose=False, plot=False)[source]
neurd.neuron_utils.all_downstream_branches(limb_obj, branch_idx)[source]

Will return all of the branches that are downstream of the branch_idx

neurd.neuron_utils.all_downstream_branches_from_candidate(neuron_obj, candidate, include_candidate_branches=False, verbose=False)[source]

Purpose: To get all of the branches downstream of a candidate

neurd.neuron_utils.all_downstream_branches_from_multiple_branhes(limb_obj, branches_idx, include_branches_idx=True, verbose=False)[source]

Purpose: Get all of the downstream branches of certain other branches that would be removed if those branches were deleted

Ex: all_downstream_branches_from_multiple_branhes( neuron_obj[0], branches_idx=[20,24], )

neurd.neuron_utils.all_medain_mesh_center_widths(neuron_obj)[source]
neurd.neuron_utils.all_no_spine_median_mesh_center_widths(neuron_obj)[source]
neurd.neuron_utils.all_skeletal_lengths(neuron_obj)[source]
neurd.neuron_utils.all_soma_connnecting_endpionts_from_starting_info(starting_info)[source]
neurd.neuron_utils.all_soma_meshes_from_limb(neuron_obj, limb_idx, verbose=False)[source]
neurd.neuron_utils.all_soma_names_from_limb(limb_obj)[source]
neurd.neuron_utils.all_soma_soma_connections_from_limb(limb_obj, only_multi_soma_paths=False, verbose=False)[source]

Purpose: To return all the soma soma paths on a limb

Ex: segment_id = 864691136174988806

neuron_obj = du.neuron_obj_from_table(

segment_id = segment_id, table_name = "Decomposition", verbose = False

)

nru.all_soma_soma_connections_from_limb(neuron_obj[0],

only_multi_soma_paths = True,

verbose = True, )

neurd.neuron_utils.all_starting_attr_by_limb_and_soma(curr_limb, soma_idx, attr='starting_node')[source]
neurd.neuron_utils.all_starting_dicts_by_soma(curr_limb, soma_idx)[source]
neurd.neuron_utils.apply_adaptive_mesh_correspondence_to_neuron(current_neuron, apply_sdf_filter=False, n_std_dev=1)[source]
neurd.neuron_utils.area_over_limb_branch(neuron_obj, limb_branch_dict, verbose=False)[source]

Ex: nru.area_over_limb_branch(neuron_obj,

nru.limb_branch_from_candidate(ap_cand))

neurd.neuron_utils.axon_area(neuron_obj, units='um')[source]
neurd.neuron_utils.axon_length(neuron_obj, units='um')[source]
neurd.neuron_utils.axon_mesh(neuron_obj)[source]
neurd.neuron_utils.axon_only_group(limb_obj, branches, use_axon_like=True, verbose=False)[source]

Checks a group of branches and returns True if all are axon or axon-like

neurd.neuron_utils.axon_skeleton(neuron_obj)[source]
neurd.neuron_utils.boutons_above_thresholds(branch_obj, return_idx=False, **kwargs)[source]

To filter the boutons using some measurement

Example: ns.n_boutons_above_thresholds(neuron_obj_with_boutons[axon_limb_name][5],

faces=100,

ray_trace_percentile=200)

thresholds to set: "faces", "ray_trace_percentile"

neurd.neuron_utils.branch_attr_dict_from_node(obj, node_name=None, attr_list=None, include_node_name_as_top_key=False, include_branch_dynamics=False, verbose=False)[source]

Purpose: To output a dictionary of attributes of the node attributes

Ex: nru.branch_attr_dict_from_node( neuron_obj_proof, "S0", #attr_list=branch_attributes_global, attr_list = soma_attributes_global, include_node_name_as_top_key=True)

neurd.neuron_utils.branch_boundary_transition(curr_limb, edge, upstream_common_endpoint=None, downstream_common_endpoint=None, width_name='no_spine_median_mesh_center', width_name_backup='no_spine_median_mesh_center', offset=500, comparison_distance=2000, skeleton_segment_size=1000, return_skeletons=True, error_on_no_network_connection=False, verbose=False)[source]

Purpose: Will find the boundary skeletons and width average at the boundary with some specified boundary skeletal length (with an optional offset)

neurd.neuron_utils.branch_boundary_transition_old(curr_limb, edge, width_name='no_spine_median_mesh_center', width_name_backup='no_spine_median_mesh_center', offset=500, comparison_distance=2000, skeleton_segment_size=1000, return_skeletons=True, verbose=False)[source]

Purpose: Will find the boundary skeletons and width average at the boundary with some specified boundary skeletal length (with an optional offset)

neurd.neuron_utils.branch_mesh_no_spines(branch)[source]

Purpose: To return the branch mesh without any spines

neurd.neuron_utils.branch_neighbors(limb_obj, branch_idx, verbose=False, include_parent=True, include_siblings=False, include_children=True)[source]

Purpose: To get all the neighboring branches to current branch

neurd.neuron_utils.branch_neighbors_attribute(limb_obj, branch_idx, attr, verbose=False, **kwargs)[source]
neurd.neuron_utils.branch_neighbors_mesh(limb_obj, branch_idx, verbose=False, **kwargs)[source]
neurd.neuron_utils.branch_path_to_node(limb_obj, start_idx, destination_idx, include_branch_idx=False, include_last_branch_idx=True, skeletal_length_min=None, verbose=False, reverse_di_graph=True, starting_soma_for_di_graph=None)[source]

Purpose: Will find the branch objects on the path from current branch to the starting coordinate

Application: Will know what width objects to compare to for width jump

Pseudocode: 1) Get the starting coordinate of brnach 2) Find the shortest path from branch_idx to starting branch 3) Have option to include starting branch or not 3) If skeletal length threshold is set then: a. get skeletal length of all branches on path b. Filter out all branches that are not above the skeletal length threshold

Example: for k in limb_obj.get_branch_names():

nru.branch_path_to_start_node(limb_obj = neuron_obj[0], branch_idx = k, include_branch_idx = False, skeletal_length_min = 2000, verbose = False)

Ex: How to find path from one branch to another after starting from a certain soma

nru.branch_path_to_node(neuron_obj[0],

start_idx = 109, destination_idx = 164, starting_soma_for_di_graph = “S0”, include_branch_idx = True)

neurd.neuron_utils.branch_path_to_soma(limb_obj, branch_idx, plot=False)[source]
neurd.neuron_utils.branch_path_to_start_node(limb_obj, branch_idx, include_branch_idx=False, include_last_branch_idx=True, skeletal_length_min=None, verbose=False)[source]
neurd.neuron_utils.branch_skeletal_distance_from_soma(curr_limb, branch_idx, somas=None, dict_return=True, use_limb_copy=True, print_flag=False)[source]

Purpose: Will find the distance of a branch from the specified somas as measured by the skeletal distance

Pseudocode 1) Make a copy of the current limb 2) Get all of the somas that will be processed (either specified or by default will ) 3) For each soma, find the skeletal distance from that branch to that soma and save in dictioanry 4) if not asked to return dictionary then just return the minimum distance

neurd.neuron_utils.branches_at_high_degree_coordinates(limb_obj, min_degree_to_find=5, **kwargs)[source]

Purpose: To identify branches groups that are touching skeleton nodes that have nax_degree or more branches touching them

Pseudocode: 1) Find the coordinates wtih max_degree For each coordinate 2) Find branches that correspond to that coordinate and store as group

neurd.neuron_utils.branches_combined_mesh(limb_obj, branches, plot_mesh=False)[source]

To combine the mesh objects of branch indexes

Ex: branches_combined_mesh(limb_obj,branches=[45, 58, 61,66],

plot_mesh=True)

neurd.neuron_utils.branches_on_limb_after_edges_deleted_and_created(limb_obj, edges_to_delete=None, edges_to_create=None, return_removed_branches=False, verbose=False)[source]

Purpose: To take a edges of concept network that should be created or destroyed and then returning the branches that still remain and those that were deleted

neurd.neuron_utils.branches_to_concept_network(curr_branch_skeletons, starting_coordinate, starting_edge, touching_soma_vertices=None, soma_group_idx=None, starting_soma=None, max_iterations=1000000, verbose=False)[source]

Will change a list of branches into

neurd.neuron_utils.branches_within_skeletal_distance(limb_obj, start_branch, max_distance_from_start, verbose=False, include_start_branch_length=False, include_node_branch_length=False, only_consider_downstream=False)[source]

Purpose: to find nodes within a cetain skeletal distance of a certain node (can be restricted to only those downstream)

Pseudocode: 1) Get the directed concept grpah 2) Get all of the downstream nodes of the node 3) convert directed concept graph into an undirected one 4) Get a subgraph using all of the downstream nodes 5) For each node: - get the shortest path from the node to the starting node - add up the skeleton distance (have options for including each endpoint) - if below the max distance then add 6) Return nodes

Ex: start_branch = 53

viable_downstream_nodes = nru.branches_within_skeletal_distance(limb_obj = current_neuron[6],

start_branch = start_branch, max_distance_from_start = 50000, verbose = False, include_start_branch_length = False, include_node_branch_length = False, only_consider_downstream = True)

limb_branch_dict=dict(L6=viable_downstream_nodes+[start_branch])

nviz.plot_limb_branch_dict(current_neuron,

limb_branch_dict)

neurd.neuron_utils.calculate_decomposition_products(neuron_obj, store_in_obj=False, verbose=False)[source]
neurd.neuron_utils.calculate_spines_skeletal_length(neuron_obj)[source]
neurd.neuron_utils.candidate_from_branches(limb_obj, branches, limb_idx)[source]
neurd.neuron_utils.candidate_groups_from_limb_branch(neuron_obj, limb_branch_dict, print_candidates=False, connected_component_method='downstream', radius=20000, require_connected_components=False, plot_candidates=False, max_distance_from_soma_for_start_node=None, verbose=False, return_one=False)[source]

Purpose: To group a limb branch dict into a group of candidates based on upstream connectivity (leader of the group will be the most upstream member)

Ex: apical_candidates = nru.candidate_groups_from_limb_branch(neuron_obj,

{‘L0’: np.array([14, 11, 5])}, verbose = verbose,

print_candidates=print_candidates,

require_connected_components = True)

neurd.neuron_utils.candidate_limb_branch_dict_branch_intersection(candidate, limb_branch_dict, return_candidate=False, verbose=False)[source]

Purpose: To find which branches are in both the candidate and limb branch

neurd.neuron_utils.candidates_from_limb_branch_candidates(neuron_obj, limb_branch_candidates, verbose=False)[source]

Purpose: to convert a dictionary of all the candidates into a list of candidate dictionaries

Application: –original {1: array([list([0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 13, 16, 17, 18, 20, 21, 22, 23, 24, 25, 52, 53, 54]),

list([8, 11, 12, 14, 15, 19, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51])],

dtype=object),

2: array([list([1, 5, 6, 7, 8]), list([0, 10, 11, 12]), list([9, 2])],

dtype=object),

3: array([[0, 1, 2, 3, 4, 5, 6, 7]]), 5: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,

16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]])}

Output:

neurd.neuron_utils.check_concept_network(curr_limb_concept_network, closest_endpoint, curr_limb_divided_skeletons, print_flag=False, return_touching_piece=True, verbose=False)[source]
neurd.neuron_utils.check_points_inside_soma_bbox(neuron_obj, coordinates, soma_name='S0', voxel_adjustment=False, verbose=False)[source]

Purpose: Test if points are inside soma bounding box

neurd.neuron_utils.children_nodes(limb_obj, branch_idx, verbose=False)[source]

Purpose: to get the parent branch of a branch_idx

Ex: nru.children_nodes(limb_obj,7)

neurd.neuron_utils.classify_endpoint_error_branches_from_limb_concept_network(curr_concept_network, **kwargs)[source]

Purpose: To identify all endpoints of concept graph where the branch meshes/skeleton are likely a result of bad skeletonization or meshing:

Applications: Can get rid of these branches later

Pseudocode: 1) Get all of the endpoints of the concept network 2) Get all of the branch objects for the endpoints 3) Return the idx’s of the branch objects that test positive for being an error branch

neurd.neuron_utils.classify_error_branch(curr_branch, width_to_face_ratio=5)[source]
neurd.neuron_utils.classify_upstream_downsream(limb_obj, branch_list, verbose=False)[source]

Psuedocode: Given a list of branches that are all touching a certain coordinate, determine which of the branches are the upstream and which are the downstream

Pseudocode: 1) Pick the first branch 2) Get the sibling nodes 3) Get overlap, if no overlap between sibling nodes and rest of the group yes –> it is upstream –> get downstream by filtering out upstream no –> it is downstream –> get upstream by filtering out all of siblings and sel

neurd.neuron_utils.clean_all_concept_network_data(all_concept_network_data, verbose=False)[source]

Purpose: To make sure that there are no duplicate entries of that starting nodes and either to combine the soma touching points or just keep the largest one

Pseudocode: 1) Start with an empty dictionary For all the dictionaries: 2) store the result indexed by starting soma and starting node 3) If an entry already existent –> then either add the soma touching vertices (and unique) to the list or replace it if longer

4) Turn the one dictionary into a list of dictionaries like the all_concept_network_data attribute

  1. Replace the all_concept_network_data

neurd.neuron_utils.clean_neuron_all_concept_network_data(neuron_obj, verbose=False)[source]

Will go through and clean all of the concept network data in all the limbs of a Neuron

neurd.neuron_utils.clear_all_branch_labels(neuron_obj, labels_to_clear='all', limb_branch_dict=None)[source]
neurd.neuron_utils.clear_certain_branch_labels(neuron_obj, labels_to_clear, limb_branch_dict=None)[source]
neurd.neuron_utils.closest_branch_endpoint_to_limb_starting_coordinate(limb_obj, branches)[source]

Purpose: Will get the closest endpoints out of all the branches to the starting coordinate of limb

Pseudocode: 1) Get the limb graph and starting coordinate 2) Get the endpoints of all of the branches 3) Find the closest endpoint to the starting coordinate using the skeleton search function

Ex:

axon_limb_dict = neuron_obj.axon_limb_branch_dict axon_limb_name = list(axon_limb_dict.keys())[0]

limb_obj = neuron_obj[axon_limb_name] branches = axon_limb_dict[axon_limb_name]

nru.closest_branch_endpoint_to_limb_starting_coordinate(limb_obj,
branches,

)

neurd.neuron_utils.closest_branch_to_coordinates(neuron_obj, coordinates, original_mesh=None, original_mesh_kdtree=None, return_distances_to_limb_branch=False, return_closest_faces=False, verbose=False)

Purpose: To map a coordinate to the closest limb branch idx of a neuron object

Pseudocode: A) Create the mapping of original face idx to (limb,branch) B) Map Coordinate to the original face idx to get c) Find mapping of Coordinate to –> (limb,branch)

neurd.neuron_utils.combined_somas_neuron_obj(neuron_obj, inplace=True, plot_soma_mesh=False, plot_soma_limb_network=False, verbose=False)[source]

Purpose: To combine a neuron object with multiple somas into a neuron object with just one soma

Pseudocode: 1) Redo the preprocessing data

Inside: preprocessed_data soma_meshes: - just combine the meshes

soma_to_piece_connectivity: - just make it a combined dict: Ex: {0: [1, 2, 3, 5, 6, 7, 11], 1: [0, 4, 8], 2: [9, 10]}

soma_sdfs: just combine as weighted average

limb_network_stating_info - structure: limb_idx > soma_idx > starting_idx >

Goal: keep the same but just map to soma_idx = 0 and reorder the starting idx

  1. Redo the concept network

  2. Adjust starting info for all limbs

Ex: from neurd import neuron_utils as nru from neurd import neuron_utils as nru neuron_obj = nru.decompress_neuron(“./3502576426_somas_seperate.pbz2”,original_mesh=”./3502576426_0_25.off”)

neuron_obj_comb = nru.combined_somas_neuron_obj(neuron_obj,

inplace = False, verbose = True, plot_soma_limb_network = True)

neurd.neuron_utils.compartment_root_skeleton_angle_max(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compartment_root_skeleton_angle_min(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compartment_root_width_max(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compartment_root_width_min(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compartment_roots_stat(neuron_obj, compartment, stat_func, verbose=False, return_root_nodes=False, **kwargs)[source]

Purpose: To compute the statistic for all the root nodes of a certain compartment

neurd.neuron_utils.compartment_roots_stat_extrema(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)[source]

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compartment_roots_stat_max(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compartment_roots_stat_min(neuron_obj, compartment, stat_func, extrema='max', return_limb_branch_idx=False, verbose=False, **kwargs)

Purpose: to compute the extrema of all of the root statistics for a certain compartment

neurd.neuron_utils.compute_all_concept_network_data_from_limb(curr_limb, current_neuron_mesh, soma_meshes, soma_restriction=None, print_flag=False)[source]
neurd.neuron_utils.compute_feature_over_object(obj, feature_name)[source]
neurd.neuron_utils.compute_mesh_attribute_volume(branch_obj, mesh_attribute, max_hole_size=2000, self_itersect_faces=False)[source]
neurd.neuron_utils.concatenate_feature_over_limb_branch_dict(neuron_obj, limb_branch_dict, feature, feature_function=None)[source]

Purpose: To sum the value of some feature over the branches specified by the limb branch dict

neurd.neuron_utils.concept_network_data_from_soma(limb_obj, soma_name=None, soma_idx=None, soma_group_idx=None, data_name=None)[source]
neurd.neuron_utils.connected_components_from_branches(limb_obj, branches, use_concept_network_directional=False, verbose=False)[source]

Purpose: to find the connected components on a branch

neurd.neuron_utils.convert_concept_network_to_directional(concept_network, node_widths=None, no_cycles=True, suppress_disconnected_errors=False, verbose=False)[source]

Pseudocode: 0) Create a dictionary with the keys as all the nodes and empty list as values 1) Get the starting node 2) Find all neighbors of starting node 2b) Add the starting node to the list of all the nodes it is neighbors to 3) Add starter node to the “procesed_nodes” so it is not processed again 4) Add each neighboring node to the “to_be_processed” list

5) Start loop that will continue until “to_be_processed” is done a. Get the next node to be processed b. Get all neighbors c. For all nodes who are not currently in the curr_nodes’s list from the lookup dictionary –> add the curr_node to those neighbor nodes lists d. For all nodes not already in the to_be_processed or procesed_nodes, add them to the to_be_processed list … z. when no more nodes in to_be_processed list then reak

6) if the no_cycles option is selected: - for every neruong with multiple neurons in list, choose the one that has the branch width that closest matches

  1. convert the incoming edges dictionary to edge for a directional graph

Example of how to use:

example_concept_network = nx.from_edgelist([[1,2],[2,3],[3,4],[4,5],[2,5],[2,6]]) nx.draw(example_concept_network,with_labels=True) plt.show() xu.set_node_attributes_dict(example_concept_network,{1:dict(starting_coordinate=np.array([1,2,3]))})

directional_ex_concept_network = nru.convert_concept_network_to_directional(example_concept_network,no_cycles=True) nx.draw(directional_ex_concept_network,with_labels=True) plt.show()

node_widths = {1:0.5,2:0.61,3:0.73,4:0.88,5:.9,6:0.4} directional_ex_concept_network = nru.convert_concept_network_to_directional(example_concept_network,no_cycles=True,node_widths=node_widths) nx.draw(directional_ex_concept_network,with_labels=True) plt.show()

neurd.neuron_utils.convert_concept_network_to_skeleton(curr_concept_network)[source]
neurd.neuron_utils.convert_concept_network_to_undirectional(concept_network)[source]
neurd.neuron_utils.convert_int_names_to_string_names(limb_names, start_letter='L')[source]
neurd.neuron_utils.convert_limb_concept_network_to_neuron_skeleton(curr_concept_network, check_connected_component=True)[source]

Purpose: To take a concept network that has the branch data within it to the skeleton for that limb

Pseudocode: 1) Get the nodes names of the branches 2) Order the node names 3) For each node get the skeletons into an array 4) Stack the array 5) Want to check that skeleton is connected component

Example of how to run: full_skeleton = convert_limb_concept_network_to_neuron_skeleton(recovered_neuron.concept_network.nodes[“L1”][“data”].concept_network)

neurd.neuron_utils.convert_string_names_to_int_names(limb_names)[source]
neurd.neuron_utils.coordinate_to_offset_skeletons(limb_obj, coordinate, branches=None, offset=1500, comparison_distance=2000, plot_offset_skeletons=False, verbose=False, return_skeleton_endpoints=False)[source]

Will return the offset skeletons of branches that all intersect at a coordinate

neurd.neuron_utils.coordinates_to_closest_limb_branch(neuron_obj, coordinates, original_mesh=None, original_mesh_kdtree=None, return_distances_to_limb_branch=False, return_closest_faces=False, verbose=False)[source]

Purpose: To map a coordinate to the closest limb branch idx of a neuron object

Pseudocode: A) Create the mapping of original face idx to (limb,branch) B) Map Coordinate to the original face idx to get c) Find mapping of Coordinate to –> (limb,branch)

neurd.neuron_utils.copy_neuron(neuron_obj)[source]
neurd.neuron_utils.decompress_neuron(filepath, original_mesh, suppress_output=True, debug_time=False, using_original_mesh=True)[source]
neurd.neuron_utils.dendrite_mesh(neuron_obj)[source]
neurd.neuron_utils.dendrite_skeleton(neuron_obj)[source]
neurd.neuron_utils.distance_to_soma_from_coordinate_close_to_branch(neuron_obj, coordinate, limb_idx, branch_idx, limb_graph=None, destination_node=None)[source]

Purpose: To find the distance traced along the skeleton to the soma of a coordinate close to a specific branch on a limb of a neuron

neurd.neuron_utils.downstream_endpoint(limb_obj, branch_idx, verbose=False, return_endpoint_index=False)[source]
neurd.neuron_utils.downstream_labels(limb_obj, branch_idx, all_downstream_nodes=False, verbose=False)[source]

Purpose: Get all of the downstream labels of a node

Pseudocode: 1) Get all of the downstream nodes (optionally all downstream) 2) get the labels over all the branches 3) concatenate the labels

neurd.neuron_utils.downstream_nodes(limb_obj, branch)[source]
neurd.neuron_utils.empty_limb_object(labels=['empty'])[source]
neurd.neuron_utils.error_limb_indexes(neuron_obj)[source]
neurd.neuron_utils.error_limbs(neuron_obj)[source]

Purpose: Will return all of the

neurd.neuron_utils.feature_list_over_object(obj, feature_name)[source]

Purpose: Will compile a list of all of the

neurd.neuron_utils.feature_over_branches(limb_obj, branch_list, feature_name=None, feature_function=None, use_limb_obj_and_branch_idx=False, combining_function=None, verbose=False, **kwargs)[source]

To calculate a certain feature over all the branches in a list

neurd.neuron_utils.feature_over_limb_branch_dict(neuron_obj, limb_branch_dict, feature=None, feature_function=None, feature_from_fuction=None, feature_from_fuction_kwargs=None, keep_seperate=False, branch_func_instead_of_feature=None, skip_None=True)[source]

Purpose: To sum the value of some feature over the branches specified by the limb branch dict

neurd.neuron_utils.filter_away_neuron_limbs(neuron_obj, limb_idx_to_filter, plot_limbs_to_filter=False, verbose=False, in_place=False, plot_final_neuron=False)[source]

Purpose: To filter away limbs specific

Application: To filter away limbs that are below a certain skeletal length

Pseudocode: 1) Find the new mapping of the old limb idx to new limb idx 2) Create the new preprocessing dict of the neuron

soma_to_piece_connectivity limb_correspondence limb_meshes limb_mehses_face_idx limb_labels limb_concept_networks limb_network_stating_info

  1. Delete and rename the nodes of the graph

neurd.neuron_utils.filter_away_neuron_limbs_by_min_skeletal_length(neuron_obj, min_skeletal_length_limb=10000, verbose=False, plot_limbs_to_filter=False, in_place=False, plot_final_neuron=False)[source]

Purpose: To filter away neuron_limbs if below a certain skeletal length

neurd.neuron_utils.filter_branches_by_restriction_mesh(limb_obj, restriction_mesh, percentage_threshold=0.6, size_measure='faces', match_threshold=0.001, verbose=False)[source]

Purpose: To Find the branches that overlap with a restriction mesh up to a certain percentage

Purpose: To select a group of meshes from one other mesh based on matching threshold

Pseudocode:

0) Build a KDTree of the error mesh Iterate through all of the branches in that limb 1) Get the mesh of the branch 2) Map the branch mesh to the error mesh 3) Compute the percent match of faces 4) If above certain threshold then add to list

neurd.neuron_utils.filter_limb_branch_dict_by_limb(limb_branch_dict, limb_names, verbose=False)[source]

To filter a limb branch dict to only those limbs specified in the limb name

neurd.neuron_utils.filter_limbs_below_soma_percentile(neuron_obj, above_percentile=70, return_string_names=True, visualize_remianing_neuron=False, verbose=True)[source]

Purpose: Will only keep those limbs that have a mean touching vertices lower than the soma faces percentile specified

Pseudocode: 1) Get the soma mesh 2) Get all of the face midpoints 3) Get only the y coordinates of the face midpoints and turn negative 4) Get the x percentile of those y coordinates 5) Get all those faces above that percentage 6) Get those faces as a submesh and show

– How to cancel out the the limbs

neurd.neuron_utils.find_branch_with_specific_coordinate(limb_obj, coordinates)[source]

Purpose: To find all branch idxs whos skeleton contains a certain coordinate

neurd.neuron_utils.find_branch_with_specific_endpoint(limb_obj, coordinates)[source]

Purpose: To find all branch idxs whos skeleton contains a certain coordinate

neurd.neuron_utils.find_face_idx_and_check_recovery(original_mesh, submesh_list, print_flag=False, check_recovery=True)[source]
neurd.neuron_utils.find_parent_child_skeleton_angle(curr_limb_obj, child_node, parent_node=None, comparison_distance=3000, offset=0, verbose=False, check_upstream_network_connectivity=True, plot_extracted_skeletons=False, **kwargs)[source]
neurd.neuron_utils.find_sibling_child_skeleton_angle(curr_limb_obj, child_node, parent_node=None, comparison_distance=3000, offset=0, verbose=False)[source]
neurd.neuron_utils.generate_limb_concept_networks_from_global_connectivity(limb_correspondence, soma_meshes, soma_idx_connectivity, current_neuron, limb_to_soma_starting_endpoints=None, return_limb_labels=True)[source]

** Could significantly speed this up if better picked the periphery meshes (which now are sending all curr_limb_divided_meshes) sent to

tu.mesh_pieces_connectivity(main_mesh=current_neuron,

central_piece = curr_soma_mesh,

periphery_pieces=curr_limb_divided_meshes)


Purpose: To create concept networks for all of the skeletons

based on our knowledge of the mesh

Things it needs: - branch_mehses - branch skeletons - soma meshes - whole neuron - soma_to_piece_connectivity

What it returns: - concept networks - branch labels

Pseudocode: 1) Get all of the meshes for that limb (that were decomposed) 2) Use the entire neuron, the soma meshes and the list of meshes and find out shich one is touching the soma 3) With the one that is touching the soma, find the enpoints of the skeleton 4) Find the closest matching endpoint 5) Send the deocmposed skeleton branches to the branches_to_concept_network function 6) Graph the concept graph using the mesh centers

Example of Use:

from neurd import neuron neuron = reload(neuron)

#getting mesh and skeleton dictionaries limb_idx_to_branch_meshes_dict = dict() limb_idx_to_branch_skeletons_dict = dict() for k in limb_correspondence.keys():

limb_idx_to_branch_meshes_dict[k] = [limb_correspondence[k][j][“branch_mesh”] for j in limb_correspondence[k].keys()] limb_idx_to_branch_skeletons_dict[k] = [limb_correspondence[k][j][“branch_skeleton”] for j in limb_correspondence[k].keys()]

#getting the soma dictionaries soma_idx_to_mesh_dict = dict() for k,v in enumerate(current_mesh_data[0][“soma_meshes”]):

soma_idx_to_mesh_dict[k] = v

soma_idx_connectivity = current_mesh_data[0][“soma_to_piece_connectivity”]

limb_concept_networkx,limb_labels = neuron.generate_limb_concept_networks_from_global_connectivity(

limb_idx_to_branch_meshes_dict = limb_idx_to_branch_meshes_dict, limb_idx_to_branch_skeletons_dict = limb_idx_to_branch_skeletons_dict, soma_idx_to_mesh_dict = soma_idx_to_mesh_dict, soma_idx_connectivity = soma_idx_connectivity, current_neuron=current_neuron, return_limb_labels=True )

neurd.neuron_utils.get_limb_int_name(limb_name)[source]
neurd.neuron_utils.get_limb_names_from_concept_network(concept_network)[source]

Purpose: Function that takes in either a neuron object or the concept network and returns just the concept network depending on the input

neurd.neuron_utils.get_limb_starting_angle_dict(neuron_obj)[source]

Purpose: To return a dictionary mapping limb_idx –> soma_idx –> soma_group –> starting angle

Psuedocode: 1) Iterate through all of the limbs 2) Iterate through all of the starting dict information 3) compute the staritng angle 4) Save in a dictionary

neurd.neuron_utils.get_limb_string_name(limb_idx, start_letter='L')[source]
neurd.neuron_utils.get_limb_to_soma_border_vertices(current_neuron, print_flag=False)[source]

Purpose: To create a lookup dictionary indexed by - soma - limb name The will return the vertex coordinates on the border of the soma and limb

neurd.neuron_utils.get_matching_concept_network_data(limb_obj, soma_idx=None, soma_group_idx=None, starting_node=None, verbose=False)[source]
neurd.neuron_utils.get_soma_int_name(soma_name)[source]
neurd.neuron_utils.get_soma_meshes(neuron_obj)[source]
neurd.neuron_utils.get_soma_names_from_concept_network(concept_network)[source]

Purpose: Function that takes in either a neuron object or the concept network and returns just the concept network depending on the input

neurd.neuron_utils.get_soma_skeleton(current_neuron, soma_name)[source]

Purpose: to return the skeleton for a soma that goes from the soma center to all of the connecting limb

Pseudocode: 1) get all of the limbs connecting to the soma (through the concept network) 2) get the starting coordinate for that soma For all of the limbs connected 3) Make the soma center to that starting coordinate a segment

neurd.neuron_utils.get_soma_string_name(soma_idx, start_letter='S')[source]
neurd.neuron_utils.get_starting_info_from_concept_network(concept_networks)[source]

Purpose: To turn a dictionary that maps the soma indexes to a concept map into just a list of dictionaries with all the staring information

Ex input: concept_networks = {0:concept_network, 1:concept_network,}

Ex output: [dict(starting_soma=..,starting_node=..

starting_endpoints=…,starting_coordinate=…,touching_soma_vertices=…)]

Pseudocode: 1) get the soma it’s connect to 2) get the node that has the starting coordinate 3) get the endpoints and starting coordinate for that nodes

neurd.neuron_utils.get_starting_node_from_limb_concept_network(limb_obj)[source]
neurd.neuron_utils.get_whole_neuron_skeleton(current_neuron, check_connected_component=True, print_flag=False)[source]

Purpose: To generate the entire skeleton with limbs stitched to the somas of a neuron object

Example Use:

total_neuron_skeleton = nru.get_whole_neuron_skeleton(current_neuron = recovered_neuron) sk.graph_skeleton_and_mesh(other_meshes=[current_neuron.mesh],

other_skeletons = [total_neuron_skeleton])

Ex 2: nru = reload(nru) returned_skeleton = nru.get_whole_neuron_skeleton(recovered_neuron,print_flag=True) sk.graph_skeleton_and_mesh(other_skeletons=[returned_skeleton])

neurd.neuron_utils.high_degree_branching_coordinates_on_limb(limb_obj, min_degree_to_find=5, exactly_equal=False, verbose=False)[source]

Purpose: To find high degree coordinates on a limb

neurd.neuron_utils.high_degree_branching_coordinates_on_neuron(neuron_obj, min_degree_to_find=5, exactly_equal=False, verbose=False)[source]

Purpose: To find coordinate where high degree branching coordinates occur

neurd.neuron_utils.in_limb_branch_dict(limb_branch_dict, limb_idx, branch_idx=None, verbose=False)[source]

Will return true or false if limb and branch in limb branch dict

neurd.neuron_utils.is_branch_mesh_connected_to_neighborhood(limb_obj, branch_idx, verbose=False, plot=False, default_value=True)[source]

Purpose: Determine if a branch mesh has connectiviity to its neighborhood mesh

Pseudocode: 1) Get the neighborhood mesh 2) Find mesh connectivity 3) Return True if connected

Ex: limb_idx = 1 branch_idx = 10 limb_obj = neuron_obj[limb_idx] nru.is_branch_mesh_connected_to_neighborhood(

limb_obj, branch_idx, verbose = True

)

neurd.neuron_utils.is_branch_obj(obj)[source]

Determines if the object is a limb object

neurd.neuron_utils.is_limb_obj(obj)[source]

Determines if the object is a limb object

neurd.neuron_utils.is_neuron_obj(obj)[source]

Determines if the object is a limb object

neurd.neuron_utils.label_limb_branch_dict(neuron_obj, label, not_matching_labels=None, match_type='all')[source]
neurd.neuron_utils.limb_branch_after_limb_branch_removal(neuron_obj, limb_branch_dict, return_removed_limb_branch=False, verbose=False)[source]

Purpose: To take a branches that should be deleted from different limbs in a limb branch dict then to determine the leftover branches of each limb that are still connected to the starting node

Pseudocode: For each starting node 1) Get the starting node 2) Get the directional conept network and turn it undirected 3) Find the total branches that will be deleted and kept once the desired branches are removed (only keeping the ones still connected to the starting branch) 4) add the removed and kept branches to the running limb branch dict

neurd.neuron_utils.limb_branch_after_limb_edge_removal(neuron_obj, limb_edge_dict, return_removed_limb_branch=False, verbose=False)[source]

Purpose: To take a branches that should be deleted from different limbs in a limb branch dict then to determine the leftover branches of each limb that are still connected to the starting node

Pseudocode: For each starting node 1) Get the starting node 2) Get the directional conept network and turn it undirected 3) Find the total branches that will be deleted and kept once the desired branches are removed (only keeping the ones still connected to the starting branch) 4) add the removed and kept branches to the running limb branch dict

neurd.neuron_utils.limb_branch_combining(limb_branch_dict_list, combining_function, verbose=False)[source]

Purpose: To combine a list of limb branch dicts using the given combining function (e.g. union or intersection of the branch lists for each limb)

neurd.neuron_utils.limb_branch_dict_to_connected_components(neuron_obj, limb_branch_dict, use_concept_network_directional=False)[source]

Purpose: To turn the limb branch dict into a list of all the connected components described by the limb branch dict

neurd.neuron_utils.limb_branch_dict_to_faces(neuron_obj, limb_branch_dict)[source]

Purpose: To return the face indices of the main mesh that correspond to the limb/branches indicated by dictionary

Pseudocode: 0) Have a final face indices list

Iterate through all of the limbs; iterate through all of the branches:

  1. Get the original indices of the branch on the main mesh

  2. Add to the list

  3. Concatenate the list and return

Ex: ret_val = nru.limb_branch_dict_to_faces(neuron_obj, dict(L1=[0,1,2]))

neurd.neuron_utils.limb_branch_dict_to_limb_true_false_dict(neuron_obj, limb_branch_dict)[source]
To convert limb branch dict to a dictionary of:

limb_idx –> branch –> True or False

Pseudocode: 1) Iterate through the neuron limbs

a) if the limb is not in the limb branch dict: make the limb list empty

   else: get the limb list

  1. Get the branch node names from the neuron

  2. Get a diff of the lists to find the false values

  3. Iterate through limb_list and mark True

  4. Iterate through the diff list and mark False

  5. Store the local dictionary in the true_false dict for return
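The conversion above can be illustrated with plain dicts (a sketch under assumed data shapes, not the neurd source, where `branches_per_limb` stands in for the branch node names pulled from the neuron object):

```python
# Sketch of limb branch dict -> limb_idx -> branch -> True/False.
def limb_branch_to_true_false(limb_branch_dict, branches_per_limb):
    """branches_per_limb: limb name -> list of all branch indices on that limb."""
    out = {}
    for limb, all_branches in branches_per_limb.items():
        wanted = set(limb_branch_dict.get(limb, []))  # empty list if limb absent
        out[limb] = {b: (b in wanted) for b in all_branches}
    return out

tf = limb_branch_to_true_false({"L0": [0, 2]}, {"L0": [0, 1, 2], "L1": [0]})
print(tf)  # {'L0': {0: True, 1: False, 2: True}, 'L1': {0: False}}
```

The companion function limb_true_false_dict_to_limb_branch_dict below performs the inverse mapping.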

neurd.neuron_utils.limb_branch_dict_to_skeleton(neuron_obj, limb_branch_dict)[source]

Purpose: turn a limb_branch_dict into the corresponding skeleton of branches stacked together

Pseudocode: 1) Get the skeletons over the limb branch dict 2) Stack the skeletons

neurd.neuron_utils.limb_branch_dict_valid(neuron_obj, limb_branch_dict)[source]

Will convert a limb branch dict input with shortcuts (like "axon" or "all") into a valid limb branch dict

Ex: limb_branch_dict_valid(neuron_obj,

limb_branch_dict = dict(L2="all", L3=[3,4,5]))

neurd.neuron_utils.limb_branch_face_idx_dict_from_neuron_obj_overlap_with_face_idx_on_reference_mesh(neuron_obj, mesh_reference, faces_idx=None, mesh_reference_kdtree=None, limb_branch_dict=None, overlap_percentage_threshold=5, return_limb_branch_dict=False, verbose=False)[source]

Purpose: Want to find a limb branch dict of branches that have a certain level of face overlap with given faces

Pseudocode: Generate a KDTree for the mesh_reference. For each branch in the limb branch dict:

  a. Get the faces corresponding to the mesh_reference

  b. Compute the percentage overlap with the faces_idx_list

  c. If above a certain threshold then store the limb, branch, face-list in the dictionary

Return either the limb branch dict or the limb-branch-facelist dict
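Step b above, the percentage-overlap computation, can be sketched in isolation (an illustration with assumed face index lists, not the neurd KDTree pipeline):

```python
# Sketch: percentage of a branch's reference-mesh faces that fall inside
# a given face index set (illustration only, no mesh dependency).
def overlap_percentage(branch_faces, faces_idx):
    hits = len(set(branch_faces) & set(faces_idx))
    return 100.0 * hits / len(branch_faces)

# 2 of this branch's 4 faces lie in the reference face set
print(overlap_percentage([0, 1, 2, 3], [2, 3, 9]))  # 50.0
```

Branches whose percentage exceeds overlap_percentage_threshold would then be kept in the returned dictionary.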

neurd.neuron_utils.limb_branch_from_candidate(candidate)[source]
neurd.neuron_utils.limb_branch_from_edge_function(neuron_obj, edge_function, verbose=False, **kwargs)[source]

Purpose: To generate a limb branch dict of nodes from a function that generates cuts for a neuron_limb

Pseudocode: 1) Generate a limb_edge dictionary 2) Generate a limb branch dictionary and return that

neurd.neuron_utils.limb_branch_from_keywords(neuron_obj, limb_branch_dict)[source]

Purpose: To fill in the branches part of limb branch dict if used keywords instead of branches numbers

neurd.neuron_utils.limb_branch_from_limbs(neuron_obj, limbs)[source]

Purpose: To convert list of limbs to limb_branch_dict

Pseudocode: For each limb 1) Convert limb to name 2) Get the branches for the limb and store in dict

neurd.neuron_utils.limb_branch_get(limb_branch_dict, limb_name)[source]

Will get the branches associated with a certain limb idx or limb name (with checks for it not being there)

Ex: limb_idx = 0 short_thick_limb_branch = au.short_thick_branches_limb_branch_dict(neuron_obj_exc_syn_sp,

plot_limb_branch_dict = False)

nodes_to_exclude = nru.limb_branch_get(short_thick_limb_branch,limb_idx) nodes_to_exclude

neurd.neuron_utils.limb_branch_intersection(limb_branch_dict_list)[source]
neurd.neuron_utils.limb_branch_invert(neuron_obj, limb_branch_dict, verbose=False)[source]

Purpose: To get every node that is not in limb branch dict

Ex: invert_limb_branch_dict(curr_neuron_obj,limb_branch_return,

verbose=True)
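The inversion can be sketched as a per-limb set difference (a plain-dict illustration assuming a precomputed map of every branch on each limb, not the neurd implementation):

```python
# Sketch: every (limb, branch) NOT in the given limb branch dict.
def limb_branch_invert(all_branches, limb_branch_dict):
    """all_branches: limb name -> iterable of every branch on that limb."""
    inverted = {}
    for limb, branches in all_branches.items():
        leftover = sorted(set(branches) - set(limb_branch_dict.get(limb, [])))
        if leftover:  # only keep limbs that still have branches
            inverted[limb] = leftover
    return inverted

print(limb_branch_invert({"L0": [0, 1, 2], "L1": [0, 1]}, {"L0": [1]}))
# {'L0': [0, 2], 'L1': [0, 1]}
```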

neurd.neuron_utils.limb_branch_list_to_limb_branch_dict(limb_branch_list, verbose=False)[source]
neurd.neuron_utils.limb_branch_removed_after_limb_branch_removal(neuron_obj, limb_branch_dict, return_removed_limb_branch=False, verbose=False)[source]

Purpose: To take the branches that should be deleted from different limbs (given as a limb branch dict) and then determine all of the branches that were removed by this deletion due to disconnection from the starting branch

neurd.neuron_utils.limb_branch_setdiff(limb_branch_dict_list)[source]
neurd.neuron_utils.limb_branch_str_names_from_limb_branch_dict(limb_branch_dict)[source]

Purpose: Creates names like

['L0_0',

'L0_1', 'L0_2', 'L0_3', 'L0_4', 'L0_5', 'L0_6', 'L0_7',

neurd.neuron_utils.limb_branch_union(limb_branch_dict_list)[source]
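limb_branch_union, limb_branch_intersection, and limb_branch_setdiff all take a list of limb branch dicts. A plain-dict sketch of the set-style semantics (illustration only, not the neurd source):

```python
# Sketch: union and intersection of a list of limb branch dicts.
def limb_branch_union(dicts):
    """Union: every (limb, branch) that appears in any dict."""
    out = {}
    for d in dicts:
        for limb, branches in d.items():
            out.setdefault(limb, set()).update(branches)
    return {limb: sorted(b) for limb, b in out.items()}

def limb_branch_intersection(dicts):
    """Intersection: only (limb, branch) pairs present in every dict."""
    out = {}
    for limb, branches in dicts[0].items():
        common = set(branches)
        for d in dicts[1:]:
            common &= set(d.get(limb, []))
        if common:
            out[limb] = sorted(common)
    return out

print(limb_branch_union([{"L0": [0, 1]}, {"L0": [1, 2], "L1": [0]}]))
# {'L0': [0, 1, 2], 'L1': [0]}
```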
neurd.neuron_utils.limb_correspondence_on_limb(limb_obj, width_name='width')[source]
neurd.neuron_utils.limb_correspondence_on_neuron(neuron_obj, **kwargs)[source]
neurd.neuron_utils.limb_edge_dict_with_function(neuron_obj, edge_function, verbose=False, **kwargs)[source]

Purpose: To create a limb_edge dictionary based on a function that generates cuts for a certain limb

Function must pass back: edges_to_create, edges_to_delete

Pseudocode: Iterate through all of the limbs of a neuron a. Get the cuts that should be created and deleted b. If either is non-empty then add to the limb_edge dictionary

return limb_edge dictionary

neurd.neuron_utils.limb_idx(name_input)[source]
neurd.neuron_utils.limb_label(name_input, force_int=True)[source]
neurd.neuron_utils.limb_mesh_from_branches(limb_obj, plot=False)[source]
neurd.neuron_utils.limb_to_soma_mapping(current_neuron)[source]

Purpose: Will create a mapping of limb –> soma_idx –> list of soma touching groups

neurd.neuron_utils.limb_true_false_dict_to_limb_branch_dict(neuron_obj, limb_true_false_dict)[source]

To convert a dictionary that has limb_idx –> branch –> True or False

Pseudocode: For each limb 1) Make sure that the true false dict length matches the number of branches 2) Iterate through all the branches:

  a. if True then add to the local list

3) Store the local list in the new limb branch dict

neurd.neuron_utils.low_branch_length_clusters(neuron_obj, max_skeletal_length=8000, min_n_nodes_in_cluster=4, width_max=None, skeletal_distance_from_soma_min=None, use_axon_like_restriction=False, verbose=False, remove_starting_node=True, limb_branch_dict_restriction=None, plot=False, **kwargs)[source]

Purpose: To find parts of neurons with lots of nodes close together on concept network with low branch length

Pseudocode: 1) Get the concept graph of a limb 2) Eliminate all of the nodes that are too long skeletal length 3) Divide the remaining axon into connected components - if too many nodes are in the connected component then it is an axon mess and should delete all those nodes

Application: Helps filter away axon mess

neurd.neuron_utils.max_limb_n_branches(neuron_obj)[source]
neurd.neuron_utils.max_limb_skeletal_length(neuron_obj)[source]
neurd.neuron_utils.max_soma_area(neuron_obj)[source]

Will find the largest surface area out of all the somas

neurd.neuron_utils.max_soma_n_faces(neuron_obj)[source]

Will find the largest number of faces out of all the somas

neurd.neuron_utils.max_soma_volume(neuron_obj, divisor=1000000000)[source]

Will find the largest volume out of all the somas (scaled by the divisor)

neurd.neuron_utils.median_branch_length(neuron_obj)[source]
neurd.neuron_utils.mesh_not_in_neuron_branches(neuron_obj, plot=False)[source]

To figure out what part of the mesh is not incorporated into the branches

neurd.neuron_utils.mesh_over_candidate(neuron_obj, candidate, **kwargs)[source]

Ex: nru.mesh_over_candidate(neuron_obj,

apical_candidates[0],

plot_mesh = True)

neurd.neuron_utils.mesh_over_limb_branch_dict(neuron_obj, limb_branch_dict, combine_meshes=True, plot_mesh=False)[source]

Purpose: To collect the meshes over a limb branch dict

Ex: nru.mesh_over_limb_branch_dict(neuron_obj,

nru.limb_branch_from_candidate(apical_candidates[0]), plot_mesh=True)

neurd.neuron_utils.mesh_without_boutons(obj)[source]
neurd.neuron_utils.mesh_without_mesh_attribute(obj, mesh_attribute)[source]

Purpose: To return the branch mesh without the given mesh attribute (e.g. spines)

neurd.neuron_utils.min_width_upstream(limb_obj, branch_idx, skeletal_length_min=2000, default_value=10000, verbose=False, remove_first_branch=True, remove_zeros=True)[source]

Purpose: Find the width jump from the minimum width of all of the preceding branches

Pseudocode: 1) Get all of the nodes that precede the branch 2) Find the minimum width of these branches 3) Subtract the minimum from the current branch width
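The three steps above can be sketched with plain data (an illustration assuming a precomputed width map and upstream path, not the neurd limb traversal):

```python
# Sketch: width jump of a branch relative to the minimum upstream width.
def min_width_upstream(widths, upstream_path, branch_idx,
                       default_value=10000, remove_zeros=True):
    """widths: branch -> width; upstream_path: branches preceding branch_idx."""
    candidates = [widths[b] for b in upstream_path
                  if not (remove_zeros and widths[b] == 0)]
    if not candidates:                 # nothing usable upstream
        return default_value
    return widths[branch_idx] - min(candidates)  # jump vs. upstream minimum

widths = {0: 300, 1: 250, 2: 400}
print(min_width_upstream(widths, upstream_path=[0, 1], branch_idx=2))  # 150
```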

neurd.neuron_utils.most_upstream_branch(limb_obj, branches, verbose=False)[source]

Purpose: To find the most upstream branch in a group of branches

Ex: most_upstream_branch(limb_obj,[ 2, 6, 20, 23, 24, 25, 26, 33])

neurd.neuron_utils.most_upstream_conn_comp_node(neuron_obj, limb_branch_dict=None, verbose=False)[source]

Purpose: Given a limb branch dict, find all of the root branches of the subgraphs

Pseudocode: iterating through all of the limbs of the limb branch 1) Divide into connected components

For each connected component: a) Find the most upstream node b) add to the list for this limb branch

Ex: nru.most_upstream_conn_comp_node_from_limb_branch_dict(

limb_branch_dict = n_obj_proof.basal_limb_branch_dict, neuron_obj = n_obj_proof, verbose = True,

)
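The component-splitting and root-finding steps above can be sketched without neurd (an illustration assuming an edge list among the restricted branches and a skeletal-distance-from-soma map, where the smallest distance marks the most upstream node):

```python
# Sketch: split branches into connected components, then take the most
# upstream node (smallest distance from soma) of each component.
def most_upstream_per_component(branches, edges, dist_from_soma):
    adj = {b: set() for b in branches}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, roots = set(), []
    for b in branches:
        if b in seen:
            continue
        comp, stack = [], [b]          # depth-first walk of one component
        seen.add(b)
        while stack:
            n = stack.pop()
            comp.append(n)
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        roots.append(min(comp, key=dist_from_soma.get))
    return sorted(roots)

dist = {1: 10, 2: 20, 5: 7, 6: 15}
print(most_upstream_per_component([1, 2, 5, 6], [(1, 2), (5, 6)], dist))
# [1, 5]  -> branches 1 and 5 are the roots of their components
```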

neurd.neuron_utils.most_upstream_conn_comp_node_stat(neuron_obj, stat_func, limb_branch_dict=None, verbose=False, return_upstream_conn_comp_nodes=False, **kwargs)[source]

Purpose: calculate the statistic for the most upstream node of every connected component in a limb branch dict

neurd.neuron_utils.multi_soma_touching_limbs(neuron_obj)[source]
neurd.neuron_utils.n_boutons(neuron_obj)[source]
neurd.neuron_utils.n_branches(neuron_obj)[source]
neurd.neuron_utils.n_branches_over_limb_branch_dict(neuron_obj, limb_branch_dict)[source]

Purpose: to count up the number of branches in a compartment

nru.n_branches_over_limb_branch_dict(neuron_obj_proof,

apu.oblique_limb_branch_dict(neuron_obj_proof))

neurd.neuron_utils.n_branches_per_limb(neuron_obj)[source]
neurd.neuron_utils.n_downstream_nodes(limb_obj, branch)[source]
neurd.neuron_utils.n_error_limbs(neuron_obj)[source]
neurd.neuron_utils.n_limbs(neuron_obj)[source]
neurd.neuron_utils.n_somas(neuron_obj)[source]
neurd.neuron_utils.n_spine_eligible_branches(neuron_obj)[source]
neurd.neuron_utils.n_spines(neuron_obj, skeletal_length_max=None)[source]
neurd.neuron_utils.n_web(neuron_obj)[source]
neurd.neuron_utils.neighbor_endpoint(limb_obj, branch_idx, verbose=False, return_endpoint_index=False, neighbor_type='upstream')[source]

Pseudocode: 1) Find the upstream node 2a) if the upstream node is None then use the current starting node 2b) if there is an upstream node, find the common skeleton point between the 2

Ex:

limb_obj = neuron_obj[2] for branch_idx in limb_obj.get_branch_names():

k = nru.upstream_endpoint(limb_obj = limb_obj, branch_idx = branch_idx, verbose = True,

return_endpoint_index = True)

total_dist = nst.total_upstream_skeletal_length(limb_obj,branch_idx) print(f"k = {k}") print(f"total upstream dist = {total_dist}")

neurd.neuron_utils.neighborhood_mesh(limb_obj, branch_idx, verbose=False, plot=False, neighborhood_color='red', branch_color='blue')[source]

Purpose: To get the mesh of the parent, sibling and child branches around a branch

Ex: neighborhood_mesh(

limb_obj, branch_idx, plot = True)

neurd.neuron_utils.neuron_limb_branch_dict(neuron_obj)[source]

Purpose: To develop a limb branch dict representation of the limbs and branches of a neuron

neurd.neuron_utils.neuron_limb_overwrite(neuron_obj, limb_name, limb_obj)[source]

Purpose: to overwrite the limb object in a neuron with another limb object

neurd.neuron_utils.neuron_mesh_from_branches(neuron_obj, plot_mesh=False)[source]

Purpose: To reconstruct the mesh of a neuron from all of the branch objects

Pseudocode: Iterate through all the limbs:

iterate through all the branches

add each branch mesh to a big list

Add the somas to the big list

concatenate the list into a mesh

neurd.neuron_utils.neuron_spine_density(neuron_obj, lower_width_bound=140, upper_width_bound=520, spine_threshold=2, skeletal_distance_threshold=110000, skeletal_length_threshold=15000, verbose=False, plot_candidate_branches=False, return_branch_processed_info=True, **kwargs)[source]

Purpose: To calculate the spine density used to classify a neuron as one of the following categories based on the spine density of high-interest branches

  1. no_spine

  2. sparsely_spine

  3. densely_spine

neurd.neuron_utils.non_axon_like_limb_branch_on_dendrite(n_obj, plot=False)[source]
neurd.neuron_utils.non_soma_touching_meshes_not_stitched(neuron_obj, return_meshes=True)[source]

Purpose: Find floating meshes not used

Pseudocode: 1) construct the neuron mesh from branches 2) restrict the non_soma touching pieces by the neuron_mesh 3) Return either the meshes or indexes

neurd.neuron_utils.non_soma_touching_meshes_stitched(neuron_obj, return_meshes=True)[source]

Purpose: Find floating meshes not used

Pseudocode: 1) construct the neuron mesh from branches 2) restrict the non_soma touching pieces by the neuron_mesh 3) Return either the meshes or indexes

neurd.neuron_utils.order_branches_by_skeletal_distance_from_soma(limb_obj, branches, verbose=False, closest_to_farthest=True)[source]

Purpose: To order branches from most upstream to most downstream according to skeletal distance from the soma

Pseudocode: 1) Calculate the skeletal distance from soma for branches 2) Order and return

neurd.neuron_utils.ordered_endpoints_on_branch_path(limb_obj, path, starting_endpoint_coordinate)[source]

Purpose: To get the ordered endpoints of the skeletons of a path of branches starting at one endpoint

neurd.neuron_utils.original_mesh_face_to_limb_branch(neuron_obj, original_mesh=None, original_mesh_kdtree=None, add_soma_label=True, verbose=False)[source]

Purpose: To create a mapping from the original mesh faces to the limb and branch each corresponds to

Ex: original_mesh_face_idx_to_limb_branch = nru.original_mesh_face_to_limb_branch(neuron_obj,

original_mesh)

matching_faces = np.where((original_mesh_face_idx_to_limb_branch[:,0]==3) &

(original_mesh_face_idx_to_limb_branch[:,1]== 2))[0]

nviz.plot_objects(original_mesh.submesh([matching_faces],append=True))

neurd.neuron_utils.pair_branch_connected_components(limb_obj, branches=None, conn_comp=None, plot_conn_comp_before_combining=False, pair_method='skeleton_angle', match_threshold=70, thick_width_threshold=200, comparison_distance_thick=3000, offset_thick=1000, comparison_distance_thin=1500, offset_thin=0, plot_intermediates=False, verbose=False, **kwargs)[source]

Purpose: To pair branches of a subgraph together if they match skeleton angles or some other criteria

Application: for grouping red/blue splits together

Arguments: 1) Limb object 2) branches to check for connectivity (or the connected components precomputed)

Pseudocode: 0) Compute the connected components if not already done 1) For each connected component: a. Find the path of the connected component back to the starting node b. If the path is only of size 1 then just return either error branches or connected components c. Get the border error branch and the border parent branch from the path d. Add the border error branch to a dictionary mapping the parent to the border branch and the conn comp it belongs to

2) For each parent branch, if its list is longer than 1: a. match the border error branches to each other to see if they should be connected (an argument sets the function used for this) b. if there are any matches then add the pairings to a list of lists, else add to a separate list

3) Use the pairings to create new connected components if any should be combined

Example 1: nru.pair_branch_connected_components(limb_obj=neuron_obj[1], branches = limb_branch_dict["L1"], conn_comp = None, plot_conn_comp_before_combining = False, pair_method = "pair_all", verbose = True)

Example 2: nru.pair_branch_connected_components(limb_obj=neuron_obj[1], #branches = limb_branch_dict["L1"], conn_comp = xu.connected_components(limb_obj.concept_network_directional.subgraph(limb_branch_dict["L1"])), plot_conn_comp_before_combining = False, verbose = True)

neurd.neuron_utils.pair_branch_connected_components_by_common_upstream(limb_obj, conn_comp, verbose=False)[source]

Purpose: To group connected components of branches by a common upstream branch

Pseudocode: 1) For each connected component find the upstream branch and add the connected component to the dictionary book-keeping 2) combine all the connected components in the dictionary

Ex: nru.pair_branch_connected_components_by_common_upstream( neuron_obj[1], conn_comp = [[13], [14], [9, 12, 15, 16, 19], [51, 21, 22, 26, 27, 28], [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 49, 50, 20, 23, 24, 25, 31], [32], [48, 33, 45]], verbose = True)
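The two pseudocode steps above can be sketched with plain data (an illustration assuming a precomputed map from a component's root branch to its upstream parent; taking `min(comp)` as the component's root is a stand-in for the real upstream lookup, not the neurd logic):

```python
# Sketch: book-keep each connected component under its upstream branch,
# then merge components that share that upstream branch.
def group_by_common_upstream(conn_comps, upstream):
    """upstream: assumed map from a component's root branch -> parent branch."""
    groups = {}
    for comp in conn_comps:
        parent = upstream[min(comp)]          # 1) find the upstream branch
        groups.setdefault(parent, []).extend(comp)
    return sorted(sorted(g) for g in groups.values())  # 2) combine

upstream = {13: 2, 14: 2, 32: 9}
print(group_by_common_upstream([[13], [14], [32]], upstream))
# [[13, 14], [32]]  -> 13 and 14 share upstream branch 2, so they merge
```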

neurd.neuron_utils.pair_neuron_obj_to_nuclei(neuron_obj, soma_name, nucleus_ids, nucleus_centers, nuclei_distance_threshold=15000, return_matching_info=True, return_id_0_if_no_matches=True, return_nuclei_within_radius=False, return_inside_nuclei=False, verbose=False, default_nuclei_id=None)[source]

Pseudocode: 1) Get the Soma Center 2) Get all Nuclei within a certain distance of the Soma Center 3) If any Nuclei are found, get the closest one and the distance 4) Get the number of nuclei within the bounding box: if no nuclei were found but one was found within the bounding box then use that one

neurd.neuron_utils.parent_node(limb_obj, branch_idx, verbose=False)[source]

Purpose: to get the parent branch of a branch_idx

neurd.neuron_utils.recalculate_endpoints_and_order_skeletons_for_branch(branch_obj)[source]
neurd.neuron_utils.recalculate_endpoints_and_order_skeletons_over_neuron(neuron_obj)[source]

Purpose: Recalculate endpoints and order the skeletons

neurd.neuron_utils.restrict_skeleton_from_start_plus_offset_downstream(limb_obj, branch_idx, start_coordinate=None, offset=500, comparison_distance=2000, skeleton_resolution=100, min_comparison_distance=1000, plot_skeleton=False, nodes_to_exclude=None, verbose=False)[source]

Purpose: To get the downstream skeleton using the new subgraph-around-node function

Pseudocode: 1) Get the downstream subgraph around the node that is a little more than the offset and comparison distance 2) Get the skeleton of all of those branches

Ex: nru.restrict_skeleton_from_start_plus_offset_downstream(limb_obj,97,

comparison_distance=100000,

plot_skeleton=True,

verbose=True)

neurd.neuron_utils.restrict_skeleton_from_start_plus_offset_upstream(limb_obj, branch_idx, start_coordinate=None, offset=500, comparison_distance=2000, skeleton_resolution=100, min_comparison_distance=1000, plot_skeleton=False, nodes_to_exclude=None, verbose=False)[source]

Purpose: To get the upstream skeleton using the new subgraph around node function

Pseudocode: 1) Get the upstream subgraph around the node that is a little more than the offset and comparison distance 2) Get the skeleton of all of those branches

neurd.neuron_utils.return_concept_network(current_neuron)[source]

Purpose: Function that takes in either a neuron object or the concept network and returns just the concept network depending on the input

neurd.neuron_utils.roots_stat(neuron_obj, stat_func, limb_branch_dict=None, verbose=False, return_upstream_conn_comp_nodes=False, **kwargs)

Purpose: calculate the statistic for the most upstream node of every connected component in a limb branch dict

neurd.neuron_utils.same_soma_multi_touching_limbs(neuron_obj, return_n_touches=False)[source]
neurd.neuron_utils.save_compressed_neuron(neuron_object, output_folder='./', file_name='', file_name_append=None, return_file_path=False, export_mesh=False)[source]
neurd.neuron_utils.sdf_filter(curr_branch, curr_limb, size_threshold=20, return_sdf_mean=False, ray_inter=None, n_std_dev=1)[source]

Purpose: to eliminate edge parts of meshes that should not be on the branch mesh correspondence

Pseudocode (the filtering step; have a size threshold for this maybe?): 1) Calculate the sdf values for all parts of the mesh 2) Restrict the faces to only those under mean + 1.5*std_dev 3) split the mesh and only keep the biggest piece

Example:

limb_idx = 0 branch_idx = 20 branch_idx = 3 #branch_idx=36 filtered_branch_mesh, filtered_branch_mesh_idx = sdf_filter(double_neuron_processed[limb_idx][branch_idx],double_neuron_processed[limb_idx],

n_std_dev=1)

filtered_branch_mesh.show()

neurd.neuron_utils.set_branch_attribute_over_neuron(neuron_obj, branch_func, verbose=False, **kwargs)[source]

Purpose: To set an attribute of branches throughout the neuron

Pseudocode: Iterating through all branches 1) run the branch func

neurd.neuron_utils.set_preprocessed_data_from_limb_no_mesh_change(neuron_obj, limb_idx, limb_obj=None)[source]
neurd.neuron_utils.shared_skeleton_endpoints_for_connected_branches(limb_obj, branch_1, branch_2, verbose=False, check_concept_network_connectivity=True)[source]

Purpose: To find the shared skeleton endpoint of branches that are connected in the concept network

Ex: nru.shared_skeleton_endpoints_for_connected_branches(neuron_obj[5],

0,1, verbose=True)

neurd.neuron_utils.shortest_path(limb_obj, start_branch_idx, destiation_branch_idx, plot_path=False)[source]
neurd.neuron_utils.sibling_nodes(limb_obj, branch_idx, verbose=False)[source]

Purpose: to get the sibling branches of a branch_idx

neurd.neuron_utils.skeletal_distance_from_soma(curr_limb, limb_name=None, somas=None, error_if_all_nodes_not_return=True, include_node_skeleton_dist=True, print_flag=False, branches=None, **kwargs)[source]

Purpose: To determine the skeletal distance away from a soma a branch piece is

Pseudocode: 0) Create dictionary that will store all of the results For each directional concept network 1) Find the starting node For each node: 1)find the shortest path from starting node to that node 2) convert the path into skeletal distance of each node and then add up 3) Map of each of distances to the node in a dictionary and return - replace a previous one if smaller

Example: skeletal_distance_from_soma(

limb_name = "L1", curr_limb = uncompressed_neuron.concept_network.nodes[limb_name]["data"], print_flag = True, #soma_list=None somas = [0,1], check_all_nodes_in_return=True

)
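The shortest-path accumulation in the pseudocode above can be sketched as a small Dijkstra-style pass (a plain-dict illustration assuming a branch adjacency map and per-branch skeletal lengths, not the neurd concept-network code):

```python
# Sketch: shortest skeletal distance from the starting (soma-touching)
# branch to every branch; each branch on the path contributes its length.
import heapq

def skeletal_distance_from_start(start, adj, skel_len):
    """adj: branch -> neighbors; skel_len: branch -> skeletal length.
    Distance to a branch = summed skeletal lengths of the branches
    on the path before it (the branch itself is excluded)."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr in adj.get(node, ()):
            nd = d + skel_len[node]       # accumulate upstream lengths
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

adj = {0: [1], 1: [0, 2], 2: [1]}
print(skeletal_distance_from_start(0, adj, {0: 100, 1: 50, 2: 30}))
# {0: 0, 1: 100, 2: 150}
```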

neurd.neuron_utils.skeletal_length(neuron_obj)[source]
neurd.neuron_utils.skeletal_length_eligible(neuron_obj)[source]
neurd.neuron_utils.skeletal_length_over_candidate(neuron_obj, candidate, verbose=False)[source]
neurd.neuron_utils.skeletal_length_over_downstream_branches(limb_obj, branch_idx, combining_function=<function sum>, include_branch_skeletal_length=True, nodes_to_exclude=None, verbose=False)[source]

Will compute how much skeleton there is downstream of a certain node

neurd.neuron_utils.skeletal_length_over_limb_branch(neuron_obj, limb_branch_dict, verbose=False)[source]

Ex: nru.skeletal_length_over_limb_branch(neuron_obj,

nru.limb_branch_from_candidate(ap_cand))

neurd.neuron_utils.skeleton_coordinate_connecting_to_downstream_branches(limb_obj, branch_idx, return_downstream_branches=False, verbose=False)[source]

Pseudocode: 1) Will find the skeleton point that connects the current branch to the downstream branches

neurd.neuron_utils.skeleton_length_per_limb(neuron_obj)[source]
neurd.neuron_utils.skeleton_nodes_from_branches_on_limb(limb_obj, branches, **kwargs)[source]

Get skeleton nodes just from limb and list of branches

Ex: nru.skeleton_nodes_from_branches_on_limb(neuron_obj[0],[0,1,2],plot_nodes = True)

#checking nviz.plot_objects(

meshes = [neuron_obj[0][k].mesh for k in [0,1,2]], skeletons = [neuron_obj[0][k].skeleton for k in [0,1,2]]

)

neurd.neuron_utils.skeleton_nodes_from_limb_branch(neuron_obj, limb_branch_dict, downsample_size=1500, downsample_factor=None, plot_skeletons_before_downsampling=False, plot_nodes=False, scatter_size=0.2, verbose=False)[source]

Purpose: To convert a limb branch dict into a list of points from the skeleton (and have an option to downsample the number of skeletons)

downsample_factor

neurd.neuron_utils.skeleton_over_candidate(neuron_obj, candidate, **kwargs)[source]

Ex: nru.skeleton_over_candidate(neuron_obj,

apical_candidates[0],

plot_skeleton = False)

neurd.neuron_utils.skeleton_over_limb_branch_dict(neuron_obj, limb_branch_dict, stack_skeletons=True, plot_skeleton=False)[source]

Purpose: To collect the meshes over a limb branch dict

nru.mesh_over_limb_branch_dict(neuron_obj,

nru.limb_branch_from_candidate(apical_candidates[0]), plot_mesh=True)

neurd.neuron_utils.skeleton_points_along_path(limb_obj, branch_path, skeletal_distance_per_coordinate=2000, return_unique=True)[source]

Purpose: Will give skeleton coordinates for the endpoints of the branches along the specified path

if skeletal_distance_per_coordinate is None then will just return the endpoints

neurd.neuron_utils.skeleton_touching_branches(limb_obj, branch_idx, return_endpoint_groupings=True)[source]

Purpose: Can find all the branch numbers that touch a certain branch object based on the skeleton endpoints

neurd.neuron_utils.smaller_preprocessed_data(neuron_object, print_flag=False)[source]
neurd.neuron_utils.soma_centers(neuron_obj, soma_name=None, voxel_adjustment=False, voxel_adjustment_vector=None, return_int_form=True, return_single=True)[source]

Will come up with the centers predicted for each of the somas in the neuron

neurd.neuron_utils.soma_idx_and_group_from_name(soma_name)[source]
neurd.neuron_utils.soma_label(name_input, force_int=True)[source]
neurd.neuron_utils.spine_density(neuron_obj)[source]
neurd.neuron_utils.spine_density_eligible(neuron_obj)[source]
neurd.neuron_utils.spine_eligible_branch_lengths(neuron_obj)[source]
neurd.neuron_utils.spine_volume_density(neuron_obj)[source]
neurd.neuron_utils.spine_volume_density_eligible(neuron_obj)[source]
neurd.neuron_utils.spine_volume_median(neuron_obj)[source]
neurd.neuron_utils.spine_volume_per_branch_eligible(neuron_obj)[source]
neurd.neuron_utils.spines_per_branch(neuron_obj)[source]
neurd.neuron_utils.spines_per_branch_eligible(neuron_obj)[source]
neurd.neuron_utils.starting_node_combinations_of_limb_sorted_by_microns_midpoint(neuron_obj, limb_idx, only_multi_soma_paths=False, return_soma_names=False, verbose=False)[source]

Purpose: To sort the error connections of a limb by the distance of the soma to the midpoint of the microns dataset

Pseudocode: 0) Compute the distance of each soma to the dataset midpoint 1) Get all of the possible connection pathways 2) Construct the distance matrix for the pathways 3) Order the connection pathways across their rows independently 4) Order the rows of the connection pathways 5) Filter for only different-soma pathways if requested

neurd.neuron_utils.starting_node_from_soma(limb_obj, soma_name=None, soma_idx=None, soma_group_idx=None, data_name=None)[source]

Ex: nru.starting_node_from_soma(limb_obj, "S2_0")

neurd.neuron_utils.statistic_per_branch(neuron_obj, stat_func, limb_branch_dict=None, suppress_errors=False, default_value=None)[source]

Purpose: Find a statistic for a limb branch dict

Pseudocode: 1)

neurd.neuron_utils.sum_feature_over_branches(limb_obj, branch_list, feature_name=None, feature_function=None, verbose=False)[source]
neurd.neuron_utils.sum_feature_over_limb_branch_dict(neuron_obj, limb_branch_dict, feature=None, branch_func_instead_of_feature=None, feature_function=None)[source]

Purpose: To sum the value of some feature over the branches specified by the limb branch dict

neurd.neuron_utils.synapse_skeletal_distances_to_soma(neuron_obj, synapse_coordinates, original_mesh=None, original_mesh_kdtree=None, verbose=False, scale='um')[source]

Purpose: To calculate the distance of synapses to the soma

Pseudocode: A) Create the mapping of original face idx to (limb,branch) B) Map Synapses to the original face idx to get

synapse –> (limb,branch)

  1. Calculate the limb skeleton graphs before hand

D) For each synapse coordinate: 1) Calculate the closest skeleton point on the (limb,branch) 2) Calculate distance from skeleton point to the starting coordinate of branch

Note: the soma distances that are -1 are usually the ones that errored or are on the actual soma

neurd.neuron_utils.total_spine_volume(neuron_obj)[source]
neurd.neuron_utils.translate_neuron_obj(neuron_obj, translation=None, new_center=None, in_place=False, verbose=False, plot_final_neuron=False, align_synapses=True, **kwargs)[source]

Purpose: To translate all of the meshes and skeletons of a neuron object

Ex: neuron_obj_rot = copy.deepcopy(neuron_obj) mesh_center = neuron_obj[“S0”].mesh_center for i in range(0,10):

neuron_obj_rot = hvu.align_neuron_obj(neuron_obj_rot,

mesh_center=mesh_center, verbose =True)

nviz.visualize_neuron(

neuron_obj_rot,limb_branch_dict = “all”)

Ex: neuron_obj_1 = nru.translate_neuron_obj(

neuron_obj_h01_aligned, new_center=neuron_obj_m65[“S0”].mesh_center, plot_final_neuron = True)

neurd.neuron_utils.unalign_neuron_obj_from_align_matrix(neuron_obj, align_matrix=None, verbose=False, **kwargs)[source]
neurd.neuron_utils.upstream_downstream_endpoint_idx(limb_obj, branch_idx, verbose=False)[source]

To get the upstream and downstream endpoint idx returned (upstream_idx,downstream_idx)

neurd.neuron_utils.upstream_endpoint(limb_obj, branch_idx, verbose=False, return_endpoint_index=False)[source]

Purpose: To get the coordinate of the part of the skeleton connecting to the upstream branch

branch_idx = 263 nviz.plot_objects(main_mesh = neuron_obj[0].mesh,

skeletons=[neuron_obj[0][branch_idx].skeleton], scatters=[nru.upstream_endpoint(neuron_obj[0], branch_idx), nru.downstream_endpoint(neuron_obj[0], branch_idx)], scatters_colors=["red","blue"]

)

neurd.neuron_utils.upstream_labels(limb_obj, branch_idx, verbose=False)[source]

Purpose: Will find the labels of the upstream node

Pseudocode: 1) Find the upstream node 2) return the labels of that node

neurd.neuron_utils.upstream_node(limb_obj, branch)[source]
neurd.neuron_utils.upstream_node_has_label(limb_obj, branch_idx, label, verbose)[source]

Purpose: To determine if the upstream node has a certain label

Pseudocode: 1) Find the upstream labels 2) Return boolean if label of interest is in labels

Ex: nru.upstream_node_has_label(limb_obj = n_test[1], branch_idx = 9, label = "apical", verbose = True)

neurd.neuron_utils.viable_axon_limbs_by_starting_angle(neuron_obj, soma_angle_threshold, above_threshold=False, soma_name='S0', return_int_name=True, verbose=False)[source]
neurd.neuron_utils.viable_axon_limbs_by_starting_angle_old(neuron_obj, axon_soma_angle_threshold=70, return_starting_angles=False)[source]

This is a method that does not use neuron querying (because it just simply iterates through the limbs)

neurd.neuron_utils.volume_over_limb_branch(neuron_obj, limb_branch_dict, verbose=False)[source]

Ex: nru.volume_over_limb_branch(neuron_obj,

nru.limb_branch_from_candidate(ap_cand))

neurd.neuron_utils.whole_neuron_branch_concept_network(input_neuron, directional=True, limb_soma_touch_dictionary='all', with_data_in_nodes=True, print_flag=True)[source]

Purpose: To return the entire concept network with all of the limbs and somas connected of an entire neuron

Arguments: input_neuron: neuron object directional: If want a directional or undirectional concept_network returned limb_soma_touch_dictionary: a dictionary mapping the limb to the starting soma and soma_idx you want visualize if directional is chosen

This will visualize multiple somas and multiple soma touching groups Ex: {1:[{0:[0,1],1:[0]}]})

Pseudocode: 1) Get the soma subnetwork from the concept network of the neuron 2) For each limb network: - if directional: a) if no specific starting soma picked –> use the soma with the smallest index as starting one - if undirectional a2) if undirectional then just choose the concept network b) Rename all of the nodes to L#_# c) Add the network to the soma/total network and add an edge from the soma to the starting node (do so for all)

  1. Then take a subgraph of the concept network based on the nodes you want

  2. Send the subgraph to a function that graphs the networkx graph
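The relabel-and-compose steps above (rename limb nodes to the L#_# convention, then attach each limb to its soma) can be sketched with networkx. The names `soma_net`, `limb_nets`, and the starting nodes here are hypothetical stand-ins for the neuron's actual concept networks, not the package's implementation:

```python
import networkx as nx

# Hypothetical stand-ins for the neuron's concept networks
soma_net = nx.DiGraph()
soma_net.add_node("S0")

# limb index -> (limb concept network, starting node within that limb)
limb_nets = {0: (nx.DiGraph([(0, 1), (1, 2)]), 0),
             1: (nx.DiGraph([(0, 1)]), 0)}

whole_net = soma_net.copy()
for limb_idx, (limb_net, start_node) in limb_nets.items():
    # b) rename all of the nodes to L#_#
    mapping = {n: f"L{limb_idx}_{n}" for n in limb_net.nodes()}
    renamed = nx.relabel_nodes(limb_net, mapping)
    # c) add the limb network and an edge from the soma to its starting node
    whole_net = nx.compose(whole_net, renamed)
    whole_net.add_edge("S0", f"L{limb_idx}_{start_node}")

print(sorted(whole_net.nodes()))
```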

neurd.neuron_utils.whole_neuron_branch_concept_network_old(input_neuron, directional=True, limb_soma_touch_dictionary=None, print_flag=False)[source]

Purpose: To return the entire concept network with all of the limbs and somas connected of an entire neuron

Arguments: input_neuron: neuron object directional: If want a directional or undirectional concept_network returned limb_soma_touch_dictionary: a dictionary mapping the limb to the starting soma you want it to start if directional option is set Ex: {“L1”:[0,1]})

Pseudocode: 1) Get the soma subnetwork from the concept network of the neuron 2) For each limb network: - if directional: a) if no specific starting soma picked –> use the soma with the smallest index as starting one - if undirectional a2) if undirectional then just choose the concept network b) Rename all of the nodes to L#_# c) Add the network to the soma/total network and add an edge from the soma to the starting node (do so for all)

  1. Then take a subgraph of the concept network based on the nodes you want

  2. Send the subgraph to a function that graphs the networkx graph

neurd.neuron_utils.width(branch_obj, axon_flag=None, width_name_backup='no_spine_median_mesh_center', width_name_backup_2='median_mesh_center', verbose=False)[source]

Will extract the width from a branch that tries different width types
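The fallback pattern described here can be sketched as a simple ordered lookup. The `branch_widths` dict and the helper name are hypothetical; the real branch object stores widths as attributes:

```python
def width_with_fallback(branch_widths,
                        preferred=("no_spine_median_mesh_center",
                                   "median_mesh_center")):
    """Return the first available width type, trying each name in order.

    `branch_widths` is a hypothetical dict of width-name -> value used
    for illustration only.
    """
    for name in preferred:
        if name in branch_widths:
            return branch_widths[name]
    raise KeyError("no usable width type found")

print(width_with_fallback({"median_mesh_center": 412.0}))  # 412.0
```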

neurd.neuron_utils.width_average_from_limb_correspondence(limb_correspondence, verbose=False)[source]

Purpose: To calculate the average width based on a limb correspondence dictionary of branch_idx > dict(width, skeleton, mesh)

neurd.neuron_utils.width_median(neuron_obj)[source]
neurd.neuron_utils.width_no_spine_median(neuron_obj)[source]
neurd.neuron_utils.width_no_spine_perc(neuron_obj, perc=90)[source]
neurd.neuron_utils.width_perc(neuron_obj, perc=90)[source]

neurd.neuron_visualizations module

neurd.neuron_visualizations.add_scatter_to_current_plot(scatters, scatters_colors, scatter_size=0.1)[source]
neurd.neuron_visualizations.limb_branch_dicts_to_combined_color_dict(limb_branch_dict_list, color_list)[source]

Purpose: Will combine multiple limb branch dict lists into one color dictionary of limb_name –> branch_name –> color
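The combination described above amounts to pairing each limb branch dict with one color and merging into a nested mapping. A minimal sketch (the helper name is illustrative, not the package function):

```python
def combined_color_dict(limb_branch_dict_list, color_list):
    """Merge limb-branch dicts into limb_name -> branch -> color.

    Later dicts in the list overwrite earlier ones on collisions.
    """
    out = {}
    for lb_dict, color in zip(limb_branch_dict_list, color_list):
        for limb_name, branches in lb_dict.items():
            for b in branches:
                out.setdefault(limb_name, {})[b] = color
    return out

print(combined_color_dict([{"L0": [1, 2]}, {"L0": [3], "L1": [0]}],
                          ["red", "blue"]))
# {'L0': {1: 'red', 2: 'red', 3: 'blue'}, 'L1': {0: 'blue'}}
```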

neurd.neuron_visualizations.limb_correspondence_plottable(limb_correspondence, mesh_name='branch_mesh', combine=False)[source]

Extracts the meshes and skeleton parts from limb correspondence so can be plotted

neurd.neuron_visualizations.plot_axon(neuron_obj, skeleton=False, plot_synapses=False, **kwargs)[source]
neurd.neuron_visualizations.plot_axon_merge_errors(neuron_obj)[source]
neurd.neuron_visualizations.plot_boutons(current_neuron, mesh_whole_neuron_alpha=0.1, mesh_whole_neuron_color='green', boutons_color='red', mesh_boutons_alpha=0.8, flip_y=True, plot_web=False, web_color='aqua', mesh_web_alpha=0.8, **kwargs)[source]
neurd.neuron_visualizations.plot_branch(branch_obj, upstream_color='yellow', downstream_color='aqua', verbose=True, **kwargs)[source]
neurd.neuron_visualizations.plot_branch_groupings(limb_obj, groupings, verbose=False, plot_meshes=True, plot_skeletons=True, extra_group=None, extra_group_color=None, extra_group_color_name='skipped')[source]

Purpose: To Plot branch objects all of a certain color that are in the same group, and the grouping is described with a graph

Pseudocode: 1) Get all the connected components (if a graph is given for the groupings) 2) Generate a color list for the groups 3) Based on what attributes are set, compile plottable lists (and add the colors to it) 4) Plot the branch objects

Ex: nviz.plot_branch_groupings(limb_obj = neuron_obj[0], groupings = G, verbose = False, plot_meshes = True, plot_skeletons = True)

neurd.neuron_visualizations.plot_branch_mesh_attribute(neuron_obj, mesh_attribute, mesh_color, mesh_alpha=0.8, return_vertices=True, flip_y=True, plot_at_end=True, verbose=False)[source]

Purpose: To take a mesh attribute that is part of a branch object inside of a neuron and then to plot all of them

Ex: nviz.plot_branch_mesh_attribute(neuron_obj_high_fid_axon, mesh_attribute="boutons", mesh_color="aqua", mesh_alpha=0.8, return_vertices=True, plot_at_end=False, flip_y=True, verbose=True)

neurd.neuron_visualizations.plot_branch_on_whole_mesh(neuron_obj, limb_idx, branch_idx, visualize_type=None, alpha=1, color='red', **kwargs)[source]

Will plot one branch with the background of whole neuron

neurd.neuron_visualizations.plot_branch_pieces(neuron_network, node_to_branch_dict, background_mesh=None, **kwargs)[source]
neurd.neuron_visualizations.plot_branch_spines(curr_branch, plot_skeletons=True, **kwargs)[source]
neurd.neuron_visualizations.plot_branch_with_boutons_old(branch_obj, bouton_color='red', non_bouton_color='aqua', main_mesh_color='green', non_bouton_size_filter=80, non_bouton_filtered_away_color='random', verbose=False)[source]

To visualize a branch object with the bouton information plotted

neurd.neuron_visualizations.plot_branch_with_neighbors(limb_obj, branch_idx, neighbor_idxs=None, branch_color='red', neighbors_color='blue', scatters_colors='yellow', scatter_size=1, visualize_type=['mesh', 'skeleton'], verbose=False, main_skeleton=None, skeletons=None, **kwargs)[source]

Will plot a main branch and other branches around it

Ex: nviz.plot_branch_with_neighbors(limb_obj, 16, nru.downstream_nodes(limb_obj, 16), scatters=[nru.downstream_endpoint(limb_obj, 16)], verbose=True)

neurd.neuron_visualizations.plot_branches_with_boutons(branches, plot_skeletons=True, verbose=True)[source]

To plot the branch meshes and their spines with information about them

neurd.neuron_visualizations.plot_branches_with_colors(limb_obj, branch_list, colors=None, verbose=True)[source]
neurd.neuron_visualizations.plot_branches_with_mesh_attribute(branches, mesh_attribute, plot_skeletons=True, verbose=True)[source]

To plot the branch meshes and their spines with information about them

neurd.neuron_visualizations.plot_branches_with_spines(branches, plot_skeletons=True, verbose=True)[source]

To plot the branch meshes and their spines with information about them

neurd.neuron_visualizations.plot_candidates(neuron_obj, candidates, color_list=None, mesh_color_alpha=1, visualize_type=['mesh'], verbose=False, dont_plot_if_no_candidates=True, **kwargs)[source]
neurd.neuron_visualizations.plot_compartments(neuron_obj, apical_color='blue', apical_shaft_color='aqua', apical_tuft_color='purple', basal_color='yellow', axon_color='red', oblique_color='green')[source]
neurd.neuron_visualizations.plot_concept_network(curr_concept_network, arrow_size=0.5, arrow_color='maroon', edge_color='black', node_color='red', scatter_size=0.1, starting_node_color='pink', show_at_end=True, append_figure=False, highlight_starting_node=True, starting_node_size=-1, flip_y=True, suppress_disconnected_errors=False)[source]
neurd.neuron_visualizations.plot_dendrite_and_synapses(neuron_obj, **kwargs)[source]
neurd.neuron_visualizations.plot_intermediates(limb_obj, branches, verbose=True)[source]

Purpose: To graph the skeletons

neurd.neuron_visualizations.plot_ipv_mesh(elephant_mesh_sub, color=[1.0, 0.0, 0.0, 0.2], flip_y=True)[source]
neurd.neuron_visualizations.plot_ipv_scatter(scatter_points, scatter_color=[1.0, 0.0, 0.0, 0.5], scatter_size=0.4, flip_y=True)[source]
neurd.neuron_visualizations.plot_ipv_skeleton(edge_coordinates, color=[0, 0.0, 1, 1], flip_y=True)[source]
neurd.neuron_visualizations.plot_labeled_limb_branch_dicts(neuron_obj, labels, colors='red', skeleton=False, mesh_alpha=1, print_color_map=True, **kwargs)[source]

Purpose: Will plot the limb branches for certain labels

Ex: nviz.plot_labeled_limb_branch_dicts(n_test, ["apical", "apical_shaft", "axon"], ["blue", "aqua", "red"])

neurd.neuron_visualizations.plot_limb(neuron_obj, limb_idx=None, limb_name=None, mesh_color_alpha=1)
neurd.neuron_visualizations.plot_limb_branch_dict(neuron_obj, limb_branch_dict, visualize_type=['mesh'], plot_random_color_map=False, color='red', alpha=1, dont_plot_if_empty=True, **kwargs)[source]

How to plot the color map along with: nviz.plot_limb_branch_dict(filt_neuron, limb_branch_dict_to_cancel, plot_random_color_map=True)

neurd.neuron_visualizations.plot_limb_branch_dict_multiple(neuron_obj, limb_branch_dict_list, color_list=None, visualize_type=['mesh'], scatters_list=[], scatters_colors=None, scatter_size=0.1, mesh_color_alpha=0.2, verbose=False, mesh_whole_neuron=True, **kwargs)[source]

Purpose: to plot multiple limb branch dicts with scatter points associated with it

neurd.neuron_visualizations.plot_limb_concept_network_2D(neuron_obj, node_colors={}, limb_name=None, somas=None, starting_soma=None, starting_soma_group=None, default_color='green', node_size=2000, font_color='white', font_size=30, directional=True, print_flag=False, plot_somas=True, soma_color='red', pos=None, pos_width=3, width_min=0.3, width_noise_ampl=0.2, pos_vertical_gap=0.05, fig_width=40, fig_height=20, suppress_disconnected_errors=True, **kwargs)[source]

Purpose: To plot the concept network as a 2D networkx graph

Pseudocode: 0) If passed a neuron object then use the limb name to get the limb object - make copy of limb object 1) Get the somas that will be used for concept network 2) Assemble the network by concatenating (directional or undirectional) 3) Assemble the color list to be used for the coloring of the nodes. Will take: a. dictionary b. List c. Scalar value for all nodes

  1. Add on the soma to the graphs if asked for it

  2. Generate a hierarchical positioning for graph if position argument not specified

for all the starting somas 4) Use the nx.draw function

Ex: nviz = reload(nviz) xu = reload(xu) limb_idx = "L3" nviz.plot_limb_concept_network_2D(neuron_obj=uncompressed_neuron, limb_name=limb_idx, node_colors=color_dictionary)

neurd.neuron_visualizations.plot_limb_correspondence(limb_correspondence, meshes_colors='random', skeleton_colors='random', mesh_name='branch_mesh', scatters=[], scatter_size=0.3, **kwargs)[source]
neurd.neuron_visualizations.plot_limb_correspondence_multiple(limb_correspondence_list, color_list=None, verbose=False, **kwargs)[source]
neurd.neuron_visualizations.plot_limb_idx(neuron_obj, limb_idx=None, limb_name=None, mesh_color_alpha=1)
neurd.neuron_visualizations.plot_limb_path(limb_obj, path, **kwargs)[source]

Purpose: To highlight the nodes on a path with just given a limb object

Pseudocode: 1) Get the entire limb mesh will be the main mesh 2) Get the meshes corresponding to the path 3) Get all of the skeletons 4) plot

neurd.neuron_visualizations.plot_merge_filter_suggestions(original_mesh, merge_valid_error_suggestions=None, neuron_obj=None, merge_error_types=None, plot_valid_error_coordinates=True, valid_color='blue', error_color='red', print_merge_color_map=True)[source]

Purpose: To plot the valid/error suggestions generated from

neurd.neuron_visualizations.plot_mesh_face_idx(mesh, face_idx, meshes_colors='random', **kwargs)[source]

To plot a mesh divided up by a face_mesh_idx

Ex: nviz.plot_mesh_face_idx(neuron_obj[0][0].mesh,return_face_idx)

neurd.neuron_visualizations.plot_meshes_skeletons(meshes, skeletons, **kwargs)[source]
neurd.neuron_visualizations.plot_original_vs_proofread(original, proofread, original_color='red', proofread_color='blue', mesh_alpha=1, plot_mesh=True, plot_skeleton=False)[source]

Purpose: To visualize the original version and proofread version of a neuron_obj

Pseudocode: 1) Turn original neuron and the proofread neuron into meshes 2) Plot both meshes

Ex: nviz.plot_original_vs_proofread(original=neuron_obj, proofread=filtered_neuron, original_color="red", proofread_color="blue", mesh_alpha=0.3, plot_mesh=True, plot_skeleton=True)

neurd.neuron_visualizations.plot_soma_extraction_meshes(mesh, soma_meshes, glia_meshes=None, nuclei_meshes=None, soma_color='red', glia_color='aqua', nuclei_color='black', verbose=False)[source]

Purpose: To plot the data products from the soma extraction

neurd.neuron_visualizations.plot_soma_limb_concept_network(neuron_obj, soma_color='red', limb_color='aqua', multi_touch_color='brown', node_size=500, font_color='black', node_colors={}, **kwargs)[source]

Purpose: To plot the connectivity of the soma and the meshes in the neuron

How it was developed:

from datasci_tools import networkx_utils as xu xu = reload(xu) node_list = xu.get_node_list(my_neuron.concept_network) node_list_colors = ["red" if "S" in n else "blue" for n in node_list] nx.draw(my_neuron.concept_network, with_labels=True, node_color=node_list_colors, font_color="white", node_size=500)

neurd.neuron_visualizations.plot_soma_meshes(neuron_obj, meshes_colors=None, verbose=False, **kwargs)[source]
neurd.neuron_visualizations.plot_spines(current_neuron, mesh_whole_neuron_alpha=0.1, mesh_whole_neuron_color='green', mesh_spines_alpha=0.8, spine_color='aqua', flip_y=True, **kwargs)[source]
neurd.neuron_visualizations.plot_spines_head_neck(neuron_obj, **kwargs)[source]
neurd.neuron_visualizations.plot_split_suggestions_per_limb(neuron_obj, limb_results, scatter_color='red', scatter_alpha=0.3, scatter_size=0.3, mesh_color_alpha=0.2, add_components_colors=True, component_colors='random')[source]
neurd.neuron_visualizations.plot_synapses(neuron_obj, **kwargs)[source]
neurd.neuron_visualizations.plot_valid_error_synapses(neuron_obj, synapse_dict, synapse_scatter_size=0.2, valid_presyn_color='yellow', valid_postsyn_color='aqua', error_presyn_color='black', error_postsyn_color='orange', error_presyn_non_axon_color='brown', meshes=None, meshes_colors=None, scatter_size=None, scatters=None, scatters_colors=None, plot_error_synapses=False, mesh_alpha=0.2, main_mesh_alpha=0.2, **kwargs)[source]

Plot Neuron along with the presyn and postsyn errored synapses

synapse_dict must have the following keys: valid_syn_centers_presyn errored_syn_centers_presyn valid_syn_centers_postsyn errored_syn_centers_postsyn
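A minimal example of the required `synapse_dict` shape (the coordinates are illustrative; real centers come from the neuron's synapse data):

```python
import numpy as np

# Illustrative synapse center coordinates (N x 3 arrays)
synapse_dict = {
    "valid_syn_centers_presyn": np.array([[0., 0., 0.], [1., 1., 1.]]),
    "errored_syn_centers_presyn": np.array([[2., 2., 2.]]),
    "valid_syn_centers_postsyn": np.array([[3., 3., 3.]]),
    "errored_syn_centers_postsyn": np.empty((0, 3)),  # may be empty
}

# nviz.plot_valid_error_synapses(neuron_obj, synapse_dict,
#                                plot_error_synapses=True)
```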

neurd.neuron_visualizations.plot_web_intersection(neuron_obj, limb_idx, branch_idx, parent_color='yellow', downstream_color='pink', web_color='purple', mesh_alpha=1, print_web_info=True, plot_boutons=True, plot_whole_limb=False, whole_limb_color='green', whole_limb_alpha=0.2, mesh_boutons_color='aqua', verbose=False, **kwargs)[source]

To plot the webbing of a branch at its intersection

Pseudocode: 1) Get the downstream nodes of the branch 2) Assemble the meshes of the parent and downstream branch 3) If requested, get all of the bouton meshes 4) Get the web mesh of parent node 5) Plot

neurd.neuron_visualizations.plottable_from_branches(limb_obj, branch_list, attributes)[source]
neurd.neuron_visualizations.plottable_meshes(limb_obj, branch_list)[source]
neurd.neuron_visualizations.plottable_meshes_skeletons(limb_obj, branch_list)[source]
neurd.neuron_visualizations.plottable_skeletons(limb_obj, branch_list)[source]
neurd.neuron_visualizations.set_zoom(center_coordinate, radius=None, radius_xyz=None, show_at_end=False, flip_y=True, turn_axis_on=False)[source]
neurd.neuron_visualizations.set_zoom_to_limb_branch(neuron_obj, limb_idx, branch_idx, radius=3000, turn_axis_on=True)[source]
neurd.neuron_visualizations.vector_to_scatter_line(vector, start_coordainte, distance_to_plot=2000, n_points=20)[source]

Will turn a vector into a sequence of scatter points to be graphed
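The documented behavior (sampling points along a direction vector from a start coordinate) can be sketched with numpy. This is a sketch of the described behavior, not the package's implementation:

```python
import numpy as np

def vector_to_scatter_line(vector, start_coordinate,
                           distance_to_plot=2000, n_points=20):
    """Return n_points coordinates spaced evenly along `vector`,
    starting at `start_coordinate` and extending `distance_to_plot`."""
    unit = np.asarray(vector, dtype=float)
    unit = unit / np.linalg.norm(unit)
    ts = np.linspace(0, distance_to_plot, n_points)
    return np.asarray(start_coordinate, dtype=float) + ts[:, None] * unit

pts = vector_to_scatter_line([0, 0, 1], [10, 20, 30],
                             distance_to_plot=100, n_points=5)
print(pts[0], pts[-1])  # [10. 20. 30.] [ 10.  20. 130.]
```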

neurd.neuron_visualizations.visualize_axon_dendrite(neuron_obj, axon_color='black', dendrite_color='aqua', plot_mesh=True, plot_skeleton=True)[source]

Purpose: To visualize the axon and dendrite of a neuron

Pseudocode: 1) Get the axon and dendrite limb branches 2) Construct an overall limb branch using the axon-dendrite colors 3) plot neuron

neurd.neuron_visualizations.visualize_branch_at_downstream_split(neuron_obj, limb_idx, branch_idx, radius=20000, turn_axis_on=True, branch_color='mediumblue', downstream_color='red', print_axon_border_info=True, verbose=True, **kwargs)[source]

Purpose: To zoom on the point at which a branch splits off

Ex: axon_limb_name = neuron_obj.axon_limb_name curr_idx = 1 curr_border_idx = border_branches[curr_idx] nviz.visualize_branch_at_downstream_split(neuron_obj=neuron_obj, limb_idx=neuron_obj.axon_limb_name, branch_idx=curr_border_idx, radius=20000, branch_color="mediumblue", downstream_color="red", print_axon_border_info=True, verbose=True)

neurd.neuron_visualizations.visualize_concept_map(curr_concept_network, node_color='red', node_alpha=0.5, edge_color='black', node_size=0.1, starting_node=True, starting_node_size=0.3, starting_node_color='pink', starting_node_alpha=0.8, arrow_color='brown', arrow_alpha=0.8, arrow_size=0.5, arrow_color_reciprocal='brown', arrow_alpha_reciprocal=0.8, arrow_size_reciprocal=0.5, show_at_end=True, append_figure=False, print_flag=False, flip_y=True)[source]

Purpose: To plot a concept network with more parameters than previous plot_concept_network

Ex:

neuron = reload(neuron) recovered_neuron = neuron.Neuron(recovered_neuron) nru = reload(nru) nviz = reload(nviz) returned_network = nru.whole_neuron_branch_concept_network(recovered_neuron, directional=True, limb_soma_touch_dictionary="all", print_flag=False)

nviz.visualize_concept_map(returned_network, #starting_node_size=10, arrow_color="green")

neurd.neuron_visualizations.visualize_limb_obj(limb_obj, meshes_colors='random', skeletons_colors='random', plot_soma_vertices=True, soma_vertices_size=0.3, plot_starting_coordinate=False, starting_coordinate_size=1)[source]

purpose: To visualize just a limb object

neurd.neuron_visualizations.visualize_neuron(input_neuron, visualize_type=['mesh', 'skeleton'], limb_branch_dict={'L0': []}, mesh_configuration_dict={}, mesh_limb_branch_dict=None, mesh_resolution='branch', mesh_color_grouping='branch', mesh_color='random', mesh_fill_color='brown', mesh_color_alpha=0.2, mesh_soma=True, mesh_soma_color='red', mesh_soma_alpha=0.2, mesh_whole_neuron=False, mesh_whole_neuron_color='green', mesh_whole_neuron_alpha=0.2, subtract_from_main_mesh=True, mesh_spines=False, mesh_spines_color='red', mesh_spines_alpha=0.8, mesh_boutons=False, mesh_boutons_color='aqua', mesh_boutons_alpha=0.8, mesh_web=False, mesh_web_color='pink', mesh_web_alpha=0.8, skeleton_configuration_dict={}, skeleton_limb_branch_dict=None, skeleton_resolution='branch', skeleton_color_grouping='branch', skeleton_color='random', skeleton_color_alpha=1, skeleton_soma=True, skeleton_fill_color='green', skeleton_soma_color='red', skeleton_soma_alpha=1, skeleton_whole_neuron=False, skeleton_whole_neuron_color='blue', skeleton_whole_neuron_alpha=1, network_configuration_dict={}, network_limb_branch_dict=None, network_resolution='branch', network_color_grouping='branch', network_color='random', network_color_alpha=0.5, network_soma=True, network_fill_color='brown', network_soma_color='red', network_soma_alpha=0.5, network_whole_neuron=False, network_whole_neuron_color='black', network_whole_neuron_alpha=0.5, network_whole_neuron_node_size=0.15, network_directional=True, limb_to_starting_soma='all', edge_color='black', node_size=0.15, starting_node=True, starting_node_size=0.3, starting_node_color='pink', starting_node_alpha=0.5, arrow_color='brown', arrow_alpha=0.8, arrow_size=0.3, arrow_color_reciprocal='pink', arrow_alpha_reciprocal=1, arrow_size_reciprocal=0.7, inside_pieces=False, inside_pieces_color='red', inside_pieces_alpha=1, insignificant_limbs=False, insignificant_limbs_color='red', insignificant_limbs_alpha=1, non_soma_touching_meshes=False, 
non_soma_touching_meshes_color='red', non_soma_touching_meshes_alpha=1, buffer=1000, axis_box_off=True, html_path='', show_at_end=True, append_figure=False, colors_to_omit=[], return_color_dict=False, print_flag=False, print_time=False, flip_y=True, scatters=[], scatters_colors=[], scatter_size=0.3, main_scatter_color='red', soma_border_vertices=False, soma_border_vertices_size=0.3, soma_border_vertices_color='random', verbose=True, subtract_glia=True, zoom_coordinate=None, zoom_radius=None, zoom_radius_xyz=None, total_synapses=False, total_synapses_size=None, limb_branch_synapses=False, limb_branch_synapse_type='synapses', distance_errored_synapses=False, mesh_errored_synapses=False, soma_synapses=False, limb_branch_size=None, distance_errored_size=None, mesh_errored_size=None, soma_size=None)[source]

** tried to optimize for speed but did not find anything that really sped it up** ipv.serialize.performance = 0/1/2 was the only option found, and it did not help. Most of the time is spent compiling the visualization rather than in Python; this can be seen by turning on print_time=True, which reports only about 2 seconds of runtime for what is really 45 seconds for a large mesh.

How to plot the spines: nviz.visualize_neuron(uncompressed_neuron, limb_branch_dict=dict(), mesh_whole_neuron=True, mesh_whole_neuron_alpha=0.1, mesh_spines=True, mesh_spines_color="red", mesh_spines_alpha=0.8)

Examples: How to do a concept_network graphing: nviz = reload(nviz) returned_color_dict = nviz.visualize_neuron(uncompressed_neuron, visualize_type=["network"], network_resolution="branch", network_whole_neuron=True, network_whole_neuron_node_size=1, network_whole_neuron_alpha=0.2, network_directional=True, #network_soma=["S1","S0"], #network_soma_color=["black","red"], limb_branch_dict=dict(L1=[11,15]), network_color=["pink","green"], network_color_alpha=1, node_size=5, arrow_size=1, return_color_dict=True)

Cool facts: 1) Can specify the soma names and not just say true so will only do certain somas

Ex: returned_color_dict = nviz.visualize_neuron(uncompressed_neuron, visualize_type=["network"], network_resolution="limb", network_soma=["S0"], network_soma_color=["red","black"], limb_branch_dict=dict(L1=[],L2=[]), node_size=5, return_color_dict=True)

2) Can put “all” for limb_branch_dict or can put “all” for the lists of each branch

3) Can specify the somas you want to graph and their colors by sending lists

Ex 3: How to specifically color just one branch and fill color the rest of the limb: limb_idx = "L0" ex_limb = uncompressed_neuron.concept_network.nodes[limb_idx]["data"] branch_idx = 3 ex_branch = ex_limb.concept_network.nodes[2]["data"]

nviz.visualize_neuron(double_neuron_processed, visualize_type=["mesh"], limb_branch_dict=dict(L0="all"), mesh_color=dict(L1={3:"red"}), mesh_fill_color="green")

neurd.neuron_visualizations.visualize_neuron_axon_dendrite(neuron_obj, visualize_type=['mesh'], axon_color='aqua', dendrite_color='blue', mesh_color_alpha=1, mesh_soma_color='red', mesh_soma_alpha=1, **kwargs)[source]

Purpose: Fast way to visualize the axon and dendritic parts of a neuron

neurd.neuron_visualizations.visualize_neuron_axon_merge_errors(neuron_obj, visualize_type=['mesh'], axon_error_color='aqua', mesh_color='black', mesh_color_alpha=1, mesh_soma_color='red', mesh_soma_alpha=1, **kwargs)[source]

Purpose: Fast way to visualize the axon and dendritic parts of a neuron

neurd.neuron_visualizations.visualize_neuron_limbs(neuron_obj, limbs_to_plot=None, plot_soma_limb_network=True)[source]
neurd.neuron_visualizations.visualize_neuron_lite(neuron_obj, **kwargs)[source]
neurd.neuron_visualizations.visualize_neuron_path(neuron_obj, limb_idx, path, path_mesh_color='red', path_skeleton_color='red', mesh_fill_color='green', skeleton_fill_color='green', visualize_type=['mesh', 'skeleton'], scatters=[], scatter_color_list=[], scatter_size=0.3, **kwargs)[source]
neurd.neuron_visualizations.visualize_neuron_specific_limb(neuron_obj, limb_idx=None, limb_name=None, mesh_color_alpha=1)[source]
neurd.neuron_visualizations.visualize_subset_neuron_limbs(neuron_obj, limbs_to_plot)[source]

Purpose: Will just plot some of the limbs

neurd.nwb_utils module

Purpose of NWB: to store large-scale neuroscience data (mainly in HDF5 files) in an easily shareable format

Data types typically supported: 1) electrophysiology recordings (spikes) 2) behavioral data (movement tracking) 3) optical imaging (calcium imaging data) 4) experimental metadata

Advantages: 1) HDF5-based, so it can handle large-scale datasets 2) provides a schema for metadata about the experiment 3) PyNWB provides a Python API for interacting with NWB files

Typical process: 1) create an NWB file object (with metadata) 2) add a subject object to the file.subject attribute 3) create data, store it inside another object, then add it to the NWB file object with add_acquisition

neurd.nwb_utils.example_nwb_file()[source]

neurd.parameter_utils module

class neurd.parameter_utils.PackageParameters(data=None, filepath=None, shared_data=None)[source]

Bases: object

__init__(data=None, filepath=None, shared_data=None)[source]
property dict
module_attr_map(module_name=None, attr_list=None, module=None, plus_unused=False, **kwargs)[source]
module_attr_map_requested(module, plus_unused=True)[source]
update(other_obj)[source]
class neurd.parameter_utils.Parameters(data=None, filepath=None, **kwargs)[source]

Bases: object

__init__(data=None, filepath=None, **kwargs)[source]
attr_map(attr_list=None, suffixes_to_ignore=None, plus_unused=False, return_used_params=False, return_unused_params=False, verbose=False)[source]

Example: p_obj["apical_utils"].attr_map(["multi_apical_height", "candidate_connected_component_radius_apical"])

property dict
json_dict()[source]
update(data)[source]
neurd.parameter_utils.add_global_name_to_dict(mydict)[source]
neurd.parameter_utils.category_param_from_module(module, category='no_category', verbose=False)[source]

Purpose: Want to export parameters belonging to a specific category in a module

Pseudocode: 1)

neurd.parameter_utils.clean_modules_dict(data)[source]
neurd.parameter_utils.config_directory()[source]
neurd.parameter_utils.export_csv(parameters_obj, filename='./parameters.csv', **kwargs)[source]
neurd.parameter_utils.export_df(parameters_obj, module_col='module', parameter_col='parameter name', value_col='default value')[source]

Purpose: Want to export a dataframe with the parameter values for different modules

Returns:

  • df (pd.DataFrame) – a dataframe with the following columns: module, parameter_name, value

Pseudocode: 0) Create a list to store dictionaries 1) Iterate through all modules of the parameters object a) Get a list of all the parameter names and values b) Create a list of dictionaries with the names and values and add them to the list 2) Create the dataframe
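The flatten-then-build steps can be sketched with pandas. The nested `params` mapping and its values are hypothetical; the column names follow the function's defaults:

```python
import pandas as pd

# Hypothetical nested parameter mapping: module -> {parameter: value}
params = {"apical_utils": {"multi_apical_height": 1000},
          "axon_utils": {"max_search_distance": 2000}}

# 0)-1) flatten into a list of row dictionaries
rows = [{"module": mod, "parameter name": name, "default value": val}
        for mod, mod_params in params.items()
        for name, val in mod_params.items()]

# 2) create the dataframe
df = pd.DataFrame(rows)
print(df)
```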

neurd.parameter_utils.export_package_param_dict_to_file(package_directory=None, mode='default', clean_dict=False, export_filepath=None, export_folder=None, export_filename=None, return_dict=False)[source]

Purpose: To export the parameters for a certain mode to a file

neurd.parameter_utils.global_param_and_attributes_dict_to_separate_mode_jsons(data, filepath='./', filename='[mode_name]_config.json', filename_mode_placeholder='[mode_name]', indent=None, verbose=False, modes=None)[source]

Purpose: To dump the dictionaries generated from modules into a json format

Pseudocode: For each mode: 1) Get the dictionary 2) convert dictionary into a json file
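The per-mode dump can be sketched with the standard-library `json` module. The mode names and parameter values are hypothetical; the filename pattern mirrors the documented `[mode_name]_config.json` default:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical mode -> parameter-dict mapping
data = {"default": {"width_threshold": 100},
        "h01": {"width_threshold": 250}}

out_dir = Path(tempfile.mkdtemp())
for mode_name, mode_dict in data.items():
    # 1) get the dictionary for this mode, 2) write it as a json file
    path = out_dir / f"{mode_name}_config.json"
    path.write_text(json.dumps(mode_dict, indent=None))

print(sorted(p.name for p in out_dir.iterdir()))
# ['default_config.json', 'h01_config.json']
```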

neurd.parameter_utils.injest_nested_dict(data, filter_away_suffixes=True, suffixes_to_ignore=None, **kwargs)[source]

Purpose: To remove any suffixes from a dictionary's keys

neurd.parameter_utils.jsonable_dict(data)[source]
neurd.parameter_utils.modes_global_param_and_attributes_dict_all_modules(directory, verbose=False, clean_dict=True)[source]

Purpose: to generate the nested dictionary for all of the modules in the neurd folder

Pseudocode: 1) Load all of the modules in a directory (and get references to them)

  1. For each module: generate the nested dictionary

  2. update the larger dictionary

neurd.parameter_utils.modes_global_param_and_attributes_dict_from_module(module, verbose=False, modes=None, att_types=None, default_name='no_category', add_global_suffix=False, clean_dict=True)[source]

Purpose: To read in parameter and attribute dictionaries, add it to a bigger dictionary and then be able to export the dictionary

neurd.parameter_utils.parameter_config_folder(return_str=True)[source]
neurd.parameter_utils.parameter_dict_from_module_and_obj(module, obj, parameters_obj_name='parameters_obj', plus_unused=False, error_on_no_attr=True, verbose=False)[source]

Purpose: using an object (with a potentially PackageParameters attribute) , a dictionary of attributes to set based on a modules attributes and global parameters

Pseudocode: 1) Get the params to set for module 2) Use the list to get a dictionary from the obj.PackageParameters 3) Find diff between return dict and list 4) Goes and gets different from the attribute of object

neurd.parameter_utils.parameter_list_from_module(module, verbose=False, clean_dict=False, att_types=None, add_global_suffix=True)[source]

Purpose: Know what parameters a modules needs to set

Pseudocode: 1) Get the default dictionary of parameters and attributes 2) export the keys as a list

Ex: from neurd import connectome_utils as conu parameter_list_from_module(conu, verbose=False)

neurd.parameter_utils.parameters_from_filepath(filename=None, dict_name='parameters', directory=None, filepath=None, return_dict=False)[source]

Purpose: To import the parameter dictionary from a python file

neurd.parameter_utils.set_parameters_for_directory_modules_from_obj(obj, directory=None, verbose_loop=False, from_package='neurd', parameters_obj_name='parameters_obj', verbose_param=False, error_on_no_attr=False, modules=None)[source]

Purpose: to set attributes of all modules in a directory using an object (with a potentially PackageParameters attribute)

Pseudocode: For all modules in the directory 1) Try importing the module

-> if can’t then skip

2) Use the module and object to get a dictionary of all attributes to set 3) Set the attributes
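The import-or-skip loop and the attribute-setting step can be sketched with the standard library. The module name and parameter dict below are hypothetical stand-ins:

```python
import importlib
import types

# 1) try importing each module; if it fails, skip it
for module_name in ["demo_missing_module"]:
    try:
        importlib.import_module(module_name)
    except ImportError:
        continue  # -> if can't, then skip

def set_module_parameters(module, param_dict):
    """3) Set each resolved parameter as an attribute on the module."""
    for name, value in param_dict.items():
        setattr(module, name, value)

# Hypothetical stand-in for a module in the package directory
mod = types.ModuleType("demo_module")
set_module_parameters(mod, {"width_threshold_global": 100})
print(mod.width_threshold_global)  # 100
```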

neurd.parameter_utils.this_directory()[source]

neurd.preprocess_neuron module

neurd.preprocess_neuron.attach_floating_pieces_to_limb_correspondence(limb_correspondence, floating_meshes, floating_piece_face_threshold=None, max_stitch_distance=None, distance_to_move_point_threshold=4000, verbose=False, excluded_node_coordinates=array([], dtype=float64), filter_end_node_length=None, filter_end_node_length_meshparty=1000, use_adaptive_invalidation_d=None, axon_width_preprocess_limb_max=None, limb_remove_mesh_interior_face_threshold=None, error_on_bad_cgal_return=False, max_stitch_distance_CGAL=None, size_threshold_MAP_stitch=None, **kwargs)[source]
neurd.preprocess_neuron.calculate_limb_concept_networks(limb_correspondence, network_starting_info, run_concept_network_checks=True, verbose=False)[source]

Can take a limb correspondence and the starting vertices and endpoints and create a list of concept networks organized by [soma_idx] –> list of concept networks

(because there could possibly be multiple starting points on the same soma)

neurd.preprocess_neuron.check_skeletonization_and_decomp(skeleton, local_correspondence)[source]

Purpose: To check that the decomposition and skeletonization went well

neurd.preprocess_neuron.closest_dist_from_floating_mesh_to_skeleton(skeleton, floating_mesh, verbose=True, plot_floating_mesh=False, plot_closest_coordinate=False)[source]

Purpose: To see what the closest distance for a floating mesh would be for a given skeleton

neurd.preprocess_neuron.correspondence_1_to_1(mesh, local_correspondence, curr_limb_endpoints_must_keep=None, curr_soma_to_piece_touching_vertices=None, must_keep_labels={}, fill_to_soma_border=True, plot=False)[source]

Fixes the 1-to-1 correspondence of the mesh correspondence for the limbs, and for endpoints designated as touching the soma, makes sure the mesh correspondence reaches the soma limb border.

The optional argument must_keep_labels allows you to specify labels that must be kept.

neurd.preprocess_neuron.filter_limb_correspondence_for_end_nodes(limb_correspondence, mesh, starting_info=None, filter_end_node_length=4000, error_on_no_starting_coordinates=True, plot_new_correspondence=False, error_on_starting_coordinates_not_endnodes=True, verbose=True)[source]

Pseudocode: 1) Get all of the starting coordinates 2) Assemble the entire skeleton and run the skeleton cleaning process 3) Decompose the skeleton into branches and find the mapping of old branches to new ones 4) Assemble the new width and mesh face idx for all new branches - width: weighted average by skeletal length - face_idx: concatenate 5) Make face_lookup and run the waterfilling algorithm to fill in the rest 6) Get the divided meshes and face idx from waterfilling 7) Store everything back inside a correspondence dictionary

neurd.preprocess_neuron.filter_soma_touching_vertices_dict_by_mesh(mesh, curr_piece_to_soma_touching_vertices, verbose=True)[source]

Purpose: Will take the soma touching vertices and filter for only those that touch the particular mesh piece

Pseudocode: 1) Build a KDTree of the mesh 2) Create an output dictionary to store the filtered soma touching vertices For the original soma touching vertices, iterating through all the somas

For each soma_touching list:

Query the mesh KDTree and only keep the coordinates whose distance is equal to 0

If empty dictionary then return None? (have option for this)

Ex: return_value = filter_soma_touching_vertices_dict_by_mesh(mesh=mesh_pieces_for_MAP[0], curr_piece_to_soma_touching_vertices=piece_to_soma_touching_vertices[1])
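The KDTree filtering step above can be sketched as follows. The dict layout `{soma_idx: [vertex_array, ...]}` and the function name are assumptions drawn from the pseudocode, not the exact NEURD data structures.

```python
# Minimal sketch of the KDTree filtering described above (assumed
# dict layout {soma_idx: [vertex_array, ...]}, not the NEURD original).
import numpy as np
from scipy.spatial import cKDTree


def filter_touching_vertices(mesh_vertices, soma_touching_dict):
    tree = cKDTree(mesh_vertices)  # 1) build a KDTree of the mesh vertices
    filtered = {}                  # 2) output dict of filtered vertices
    for soma_idx, vertex_lists in soma_touching_dict.items():
        kept = []
        for verts in vertex_lists:
            verts = np.asarray(verts)
            dists, _ = tree.query(verts)
            # keep only coordinates whose distance to the mesh is exactly 0
            on_mesh = verts[dists == 0]
            if len(on_mesh) > 0:
                kept.append(on_mesh)
        if kept:
            filtered[soma_idx] = kept
    return filtered if filtered else None  # option: empty dict -> None
```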

neurd.preprocess_neuron.find_if_stitch_point_on_end_or_branch(matched_branches_skeletons, stitch_coordinate, verbose=False)[source]
neurd.preprocess_neuron.high_fidelity_axon_decomposition(neuron_obj, plot_new_axon_limb_correspondence=False, plot_connecting_skeleton_fix=False, plot_final_limb_correspondence=False, return_starting_info=True, verbose=True, stitch_floating_axon_pieces=None, filter_away_floating_pieces_inside_soma_bbox=True, soma_bbox_multiplier=2, floating_piece_face_threshold=None, max_stitch_distance=None, plot_new_axon_limb_correspondence_after_stitch=False)[source]

Purpose: To get the decomposition of the axon with a finer skeletonization (THIS VERSION NOW STITCHES PIECES OF THE AXON)

Returns: a limb correspondence of the revised branches

Pseudocode: 1) Get the starting information for decomposition 2) Split the axon mesh into just one connected mesh (aka filtering away the disconnected parts) 3) Run the limb preprocessing 4) Retrieve the starting info from the concept network 5) Adjust the axon decomposition to connect to an upstream piece if there was one 6) Return the limb correspondence and starting information (IF WE REVISED THE STARTING INFO)

-- Add back the floating mesh pieces using the stitching process --

neurd.preprocess_neuron.limb_meshes_expansion(non_soma_touching_meshes, insignificant_limbs, soma_meshes, plot_filtered_pieces=False, non_soma_touching_meshes_face_min=500, insignificant_limbs_face_min=500, plot_distance_G=False, plot_distance_G_thresholded=False, max_distance_threshold=500, min_n_faces_on_path=5000, plot_final_limbs=False, plot_not_added_limbs=False, return_meshes_divided=True, verbose=False)[source]

Purpose: To find the objects that should be made into significant limbs for decomposition (out of the non_soma_touching_meshes and insignificant_limbs)

Pseudocode: 1) Filter the non-soma pieces and insignificant meshes 2) Find distances between all of the significant pieces and form a graph structure 3) Determine the meshes that should be made significant limbs a) find all paths from NST b) filter for those paths with a certain face total c) find all of the nodes right before the soma; the unique set of those will be the significant limbs

Ex:

floating_piece_face_threshold_expansion = 500
new_limbs_nst, il_still_idx, nst_still_meshes, il_still_meshes = pre.limb_meshes_expansion(
    neuron_obj_comb.non_soma_touching_meshes,
    neuron_obj_comb.insignificant_limbs,
    neuron_obj_comb["S0"].mesh,
    # Step 1: Filtering
    plot_filtered_pieces=True,
    # non_soma_touching_meshes_face_min=floating_piece_face_threshold_expansion,
    # insignificant_limbs_face_min=floating_piece_face_threshold_expansion,
    # Step 2: Distance graph structure
    plot_distance_G=True,
    plot_distance_G_thresholded=True,
    max_distance_threshold=500,
    # Step 3:
    min_n_faces_on_path=5_000,
    plot_final_limbs=True,
    plot_not_added_limbs=True,
    verbose=True,
)
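The path-based selection in step 3 can be illustrated with networkx. This is a stub, not the real implementation: distances and face counts are assumed to be precomputed as node/edge attributes, whereas `limb_meshes_expansion` derives them from the meshes themselves.

```python
# Illustrative sketch of step 3 above: among pieces connected to the
# soma by short hops, keep the nodes right before the soma on any path
# whose total face count clears the threshold. Node attribute name
# "n_faces" is an assumption for this sketch.
import networkx as nx


def significant_limb_candidates(G, soma_node, min_n_faces_on_path=5000):
    """G: graph whose nodes carry an 'n_faces' attribute and whose edges
    already satisfy the max_distance_threshold."""
    keep = set()
    for node in G.nodes:
        if node == soma_node:
            continue
        # 3a) all paths from this non-soma-touching piece to the soma
        for path in nx.all_simple_paths(G, node, soma_node):
            # 3b) filter by the total number of faces along the path
            total = sum(G.nodes[p]["n_faces"] for p in path if p != soma_node)
            if total >= min_n_faces_on_path:
                keep.add(path[-2])  # 3c) the node right before the soma
                break
    return keep
```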

neurd.preprocess_neuron.mesh_correspondence_first_pass(mesh, skeleton=None, skeleton_branches=None, distance_by_mesh_center=True, remove_inside_pieces_threshold=0, skeleton_segment_width=1000, initial_distance_threshold=3000, skeletal_buffer=100, backup_distance_threshold=6000, backup_skeletal_buffer=300, connectivity='edges', plot=False)[source]

Will come up with the mesh correspondences for all of the skeleton branches (there can be overlaps and empty faces)

neurd.preprocess_neuron.plot_correspondence(mesh, correspondence, idx_to_show=None, submesh_from_face_idx=True, verbose=True)[source]

Purpose: Want to plot mesh correspondence first pass

Pseudocode: For each entry: 1) Plot mesh (from idx) 2) plot skeleton

neurd.preprocess_neuron.plot_correspondence_on_single_mesh(mesh, correspondence)[source]

Purpose: To plot the correspondence dict once a 1 to 1 was generated

neurd.preprocess_neuron.preprocess_limb(mesh, soma_touching_vertices_dict=None, distance_by_mesh_center=True, meshparty_segment_size=100, meshparty_n_surface_downsampling=2, combine_close_skeleton_nodes=True, combine_close_skeleton_nodes_threshold=700, filter_end_node_length=None, use_meshafterparty=True, perform_cleaning_checks=True, width_threshold_MAP=None, size_threshold_MAP=None, use_surface_after_CGAL=True, surface_reconstruction_size=None, move_MAP_stitch_to_end_or_branch=True, distance_to_move_point_threshold=500, run_concept_network_checks=True, return_concept_network=True, return_concept_network_starting_info=False, verbose=True, print_fusion_steps=True, check_correspondence_branches=True, filter_end_nodes_from_correspondence=True, error_on_no_starting_coordinates=True, prevent_MP_starter_branch_stitches=False, combine_close_skeleton_nodes_threshold_meshparty=None, filter_end_node_length_meshparty=None, invalidation_d=None, smooth_neighborhood=1, use_adaptive_invalidation_d=None, axon_width_preprocess_limb_max=None, remove_mesh_interior_face_threshold=None, error_on_bad_cgal_return=False, max_stitch_distance_CGAL=None)[source]
neurd.preprocess_neuron.preprocess_neuron(mesh=None, mesh_file=None, segment_id=None, description=None, sig_th_initial_split=100, limb_threshold=2000, apply_expansion=None, floating_piece_face_threshold_expansion=500, max_distance_threshold_expansion=2000, min_n_faces_on_path_expansion=5000, filter_end_node_length=None, decomposition_type='meshafterparty', distance_by_mesh_center=True, meshparty_segment_size=100, meshparty_n_surface_downsampling=2, combine_close_skeleton_nodes=True, combine_close_skeleton_nodes_threshold=700, width_threshold_MAP=None, size_threshold_MAP=None, surface_reconstruction_size=None, floating_piece_face_threshold=None, max_stitch_distance=None, distance_to_move_point_threshold=4000, glia_faces=None, nuclei_faces=None, somas=None, return_no_somas=False, verbose=True, use_adaptive_invalidation_d=None, use_adaptive_invalidation_d_floating=None, axon_width_preprocess_limb_max=None, limb_remove_mesh_interior_face_threshold=None, error_on_bad_cgal_return=False, max_stitch_distance_CGAL=None)[source]

neurd.proofreading_utils module

neurd.proofreading_utils.apply_proofreading_filters_to_neuron(input_neuron, filter_list, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, return_error_info=False, verbose=False, verbose_outline=True, return_limb_branch_dict_to_cancel=False, return_red_blue_splits=False, return_split_locations=False, save_intermediate_neuron_objs=False, combine_path_branches=False)[source]

Purpose: To apply a list of filters to a neuron and collect all of the information on what was filtered away and the remaining neuron after

  • Be able to set certain run arguments that could help with plotting along the way

Pseudocode: 1) Receive the input neuron

For each filter: a) print the name of the function and the arguments that go along with it b) Run the function with the function arguments and run arguments c) print the time it takes d) Print the error information e) Store the following information - neuron - error skeleton - error area - time

2) Make the output neuron the new input neuron

# – Adding optional parameter that allows a filter to recover from an error gracefully – #
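The filter loop above can be sketched as follows. The filter-dict keys mirror `make_filter_dict` further down (`filter_name`, `filter_function`, `catch_error`); the rest of the shape is illustrative, not the NEURD implementation.

```python
# Hedged sketch of the proofreading-filter loop described above.
import time


def apply_filters(neuron, filter_list, verbose=True):
    error_info = []
    for f in filter_list:
        name, func = f["filter_name"], f["filter_function"]
        if verbose:
            print(f"running {name}")  # a) print the filter name
        start = time.time()
        try:
            # b) run the filter; assumed to return (neuron, info)
            neuron, info = func(neuron, return_error_info=True)
        except Exception as e:
            if not f.get("catch_error", False):
                raise  # recover gracefully only when the filter opts in
            info = {"error": str(e)}
        info["time"] = time.time() - start  # c) record elapsed time
        info["filter_name"] = name
        error_info.append(info)             # e) store the error info
    # the last output neuron is the result of chaining all filters
    return neuron, error_info
```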

neurd.proofreading_utils.axon_on_dendrite_plus_downstream(neuron_obj, plot=False)[source]
neurd.proofreading_utils.calculate_error_rate(total_error_synapse_ids_list, synapse_stats_list, verbose=True)[source]

Calculates all of the synapse erroring stats for the neuron after all the runs

neurd.proofreading_utils.collapse_branches_on_limb(limb_obj, branch_list, plot_new_limb=False, reassign_mesh=True, store_placeholder_for_removed_nodes=True, debug_time=False, verbose=False)[source]

Purpose: To remove 1 or more branches from the concept network of a limb and to adjust the underlying skeleton

** this is more for when you want to remove the presence of a branch but not remove the mesh associated with it (so just collapsing the node)

Application: To be used when trying to split a neuron and wanting to combine nodes that are really close

*** currently does not reallocate the mesh part of the nodes that were deleted

Pseudocode:

For each branch to remove: 0) Find the branches that were touching the soon-to-be-deleted branch 1) Alter the skeletons of those that were touching that branch

After all nodes are revised: 2) Remove the current node 3) Generate the limb correspondence and network starting info to generate soma concept networks 4) Create a new limb object and return it

Ex: new_limb_obj = nru.collapse_branches_on_limb(curr_limb,[30,31],plot_new_limb=True,verbose=True)

neurd.proofreading_utils.crossover_elimination_limb_branch_dict(neuron_obj, offset=2500, comparison_distance=None, match_threshold=35, require_two_pairs=True, axon_dependent=True, **kwargs)[source]
neurd.proofreading_utils.cut_limb_network_by_edges(curr_limb, edges_to_delete=None, edges_to_create=None, removed_branches=[], perform_edge_rejection=False, return_accepted_edges_to_create=False, return_copy=True, return_limb_network=False, verbose=False)[source]
neurd.proofreading_utils.cut_limb_network_by_suggestions(curr_limb, suggestions, curr_limb_idx=None, return_copy=True, verbose=False)[source]
neurd.proofreading_utils.delete_branches_from_limb(neuron_obj, branches_to_delete, limb_idx=None, limb_name=None, verbose=False)[source]

Will delete branches from a certain limb

neurd.proofreading_utils.delete_branches_from_neuron(neuron_obj, limb_branch_dict, plot_neuron_after_cancellation=False, plot_final_neuron=False, verbose=False, add_split_to_description=False, **kwargss)[source]

Purpose: To eliminate the error cells and downstream targets given limb branch dict of nodes to eliminate

Pseudocode:

For each limb in the limb branch dict: 1) Remove the nodes in the limb branch dict

2) Send the neuron to i) split_neuron_limbs_by_suggestions ii) split_disconnected_neuron

If a limb is empty or has no more connection to the starting soma then it will be deleted in the end

neurd.proofreading_utils.doubling_back_and_width_elimination_limb_branch_dict(neuron_obj, skeletal_length_to_skip=5000, comparison_distance=4000, offset=2000, width_jump_threshold=300, width_jump_axon_like_threshold=250, double_back_threshold=140, perform_double_back_errors=True, perform_width_errors=True, skip_double_back_errors_for_axon=True, verbose=False, **kwargs)[source]
neurd.proofreading_utils.edges_to_create_and_delete_by_doubling_back_and_width(limb_obj, verbose=False, **kwargs)[source]

Wrapper for the doubling back and width cuts that will generate edges to delete and create so it can fit the edge pipeline

neurd.proofreading_utils.edges_to_create_and_delete_crossover(limb_obj, offset=None, comparison_distance=None, match_threshold=None, axon_dependent=True, require_two_pairs=True, verbose=False)[source]

Purpose: To separate train-track crossovers if there are perfect matches and ignore them if there are not

Pseudocode: 1) Find all degree-4 skeleton coordinates 2) For each coordinate: a) if axon dependent -> check that all of them are axon (if not then continue) b) resolve the crossover with the best pairs c) if 2 perfect pairs -> add the delete and create edges to the big list; if not 2 pairs -> skip

neurd.proofreading_utils.edges_to_create_and_delete_high_degree_coordinates(limb_obj, min_degree_to_find=5, axon_dependent=True, verbose=False)[source]

Purpose: Cut all edges at branches grouped around a high degree skeleton node

Pseudocode: 1) Find Branches Grouped around a high degree skeleton node 2) Get all combinations of the branches 3) Make those combinations the edges to delete, and make edges to create empty

Ex:

limb_obj = neuron_obj[0]
min_degree_to_find = 5

edges_to_create_and_delete_high_degree_coordinates(limb_obj,
    min_degree_to_find=min_degree_to_find, verbose=True)
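Steps 2-3 of the pseudocode above reduce to taking pairwise combinations within each high-degree group. A minimal sketch (the function name and the list-of-groups input are illustrative, not the NEURD API):

```python
# Sketch of steps 2-3: every pairwise connection among branches grouped
# around a high-degree skeleton node becomes an edge to delete, and the
# edges to create stay empty for this rule.
from itertools import combinations


def edges_for_high_degree_groups(branch_groups):
    """branch_groups: list of branch-id lists, one per high-degree node."""
    edges_to_delete = []
    for group in branch_groups:
        edges_to_delete.extend(combinations(sorted(group), 2))
    edges_to_create = []  # step 3: always empty for this rule
    return edges_to_delete, edges_to_create
```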

neurd.proofreading_utils.edges_to_cut_by_doubling_back_and_width_change(limb_obj, skeletal_length_to_skip=5000, comparison_distance=3000, offset=1000, width_jump_threshold=300, width_jump_axon_like_threshold=250, running_width_jump_method=True, double_back_threshold=120, double_back_axon_like_threshold=145, perform_double_back_errors=True, perform_width_errors=True, skip_double_back_errors_for_axon=True, verbose=False, **kwargs)[source]

Purpose: Getting the edges of the concept network to cut for a limb object based on the width and doubling back rules

Application: Will then feed these edges in to cut the limb when automatic proofreading

neurd.proofreading_utils.exc_axon_on_dendrite_merges_filter(**kwargs)[source]
neurd.proofreading_utils.exc_double_back_dendrite_filter(**kwargs)[source]
neurd.proofreading_utils.exc_high_degree_branching_dendrite_filter(catch_error=False, **kwargs)[source]
neurd.proofreading_utils.exc_high_degree_branching_filter(catch_error=False, **kwargs)[source]
neurd.proofreading_utils.exc_low_degree_branching_filter(catch_error=False, **kwargs)[source]
neurd.proofreading_utils.exc_width_jump_up_axon_filter(**kwargs)[source]
neurd.proofreading_utils.exc_width_jump_up_dendrite_filter(**kwargs)[source]
neurd.proofreading_utils.extract_blue_red_points_from_limb_branch_dict_to_cancel(neuron_obj, limb_branch_dict_to_cancel)[source]
neurd.proofreading_utils.extract_from_filter_info(filter_info, name_to_extract='red_blue_suggestions', name_must_be_ending=False)[source]
neurd.proofreading_utils.filter_away_axon_on_dendrite_merges(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_axon_on_dendrite_merges_old(neuron_obj, perform_deepcopy=True, axon_merge_error_limb_branch_dict=None, perform_axon_classification=False, use_pre_existing_axon_labels=False, return_error_info=True, plot_limb_branch_filter_away=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_final_neuron=False, verbose=False, return_limb_branch_dict_to_cancel=False, prevent_errors_on_branches_with_all_postsyn=True, return_limb_branch_before_filter_away=False, **kwargs)[source]

Pseudocode:

If error labels not given: 1a) Apply axon classification if requested 1b) Use the pre-existing error labels if requested

  2. Find the total branches that will be removed using the axon-error limb branch dict

  3. Calculate the total skeleton length and error face area for what will be removed

  4. Delete the branches from the neuron

  5. Return the neuron

Example:

filter_away_axon_on_dendrite_merges( neuron_obj = neuron_obj_1, perform_axon_classification = True, return_error_info=True, verbose = True)

neurd.proofreading_utils.filter_away_crossovers(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_dendrite_on_axon_merges(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_dendrite_on_axon_merges_old(neuron_obj, perform_deepcopy=True, limb_branch_dict_for_search=None, use_pre_existing_axon_labels=False, perform_axon_classification=False, dendritic_merge_on_axon_query=None, dendrite_merge_skeletal_length_min=20000, dendrite_merge_width_min=100, dendritie_spine_density_min=0.00015, plot_limb_branch_filter_away=False, plot_limb_branch_filter_with_disconnect_effect=False, return_error_info=False, plot_final_neuron=False, return_limb_branch_dict_to_cancel=False, verbose=False)[source]

Purpose: To filter away the dendrite parts that are merged onto axon pieces

if limb_branch_dict_for_search is None then the function will first try to classify the axon and then search from there

neurd.proofreading_utils.filter_away_double_back_axon_thick(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_double_back_axon_thin(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_double_back_dendrite(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_high_degree_branching(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_high_degree_branching_dendrite(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_high_degree_coordinates(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_large_double_back_or_width_changes(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_limb_branch_dict(neuron_obj, limb_branch_dict=None, limb_edge_dict=None, plot_limb_branch_filter_away=False, plot_limb_branch_filter_with_disconnect_effect=False, return_error_info=True, plot_final_neuron=False, verbose=False, **kwargs)[source]

Purpose: To filter away a limb branch dict from a single neuron

neurd.proofreading_utils.filter_away_limb_branch_dict_with_function(neuron_obj, limb_branch_dict_function, perform_deepcopy=True, plot_limb_branch_filter_away=False, plot_limb_branch_filter_with_disconnect_effect=False, return_error_info=False, plot_final_neuron=False, print_limb_branch_dict_to_cancel=True, verbose=False, return_limb_branch_dict_to_cancel=False, return_limb_branch_before_filter_away=False, return_created_edges=False, apply_after_removal_to_limb_branch_before=True, **kwargs)[source]

Purpose: To filter away a limb branch dict from a neuron using a function that generates a limb branch dict

neurd.proofreading_utils.filter_away_low_branch_length_clusters(neuron_obj, max_skeletal_length=5000, min_n_nodes_in_cluster=4, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_low_branch_length_clusters_axon(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_low_branch_length_clusters_dendrite(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_low_degree_branching(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_small_axon_fork_divergence(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_thick_t_merge(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_webbing_t_merges(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_width_jump_up_axon(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.filter_away_width_jump_up_dendrite(neuron_obj, return_error_info=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_limb_branch_filter_away=False, plot_final_neuron=False, **kwargs)[source]
neurd.proofreading_utils.find_high_degree_coordinates_on_path(limb_obj, curr_path_to_cut, degree_to_check=4)[source]

Purpose: Find coordinates on the skeleton of the specified path (in terms of node ids) that are above degree_to_check (with respect to the skeleton)

neurd.proofreading_utils.get_all_coordinate_suggestions(suggestions, concatenate=True, voxel_adjustment=True)[source]

Getting all the coordinates where there should be cuts

neurd.proofreading_utils.get_all_cut_and_not_cut_path_coordinates(limb_results, voxel_adjustment=True)[source]

Get all of the coordinates on the paths that will be cut and those that will not

neurd.proofreading_utils.get_attribute_from_suggestion(suggestions, curr_limb_idx=None, attribute_name='edges_to_delete')[source]
neurd.proofreading_utils.get_best_cut_edge(curr_limb, cut_path, remove_segment_threshold=None, remove_segment_threshold_round_2=None, consider_path_neighbors_for_removal=None, offset_high_degree=None, comparison_distance_high_degree=None, match_threshold_high_degree=None, plot_intermediates=False, skip_small_soma_connectors=None, small_soma_connectors_skeletal_threshold=None, double_back_threshold=None, offset_double_back=None, comparison_distance_double_back=None, width_jump_threshold=None, verbose=False, high_degree_endpoint_coordinates_tried=[], simple_path_of_2_cut=None, apply_double_back_first=None, double_back_threshold_at_first=None, return_split_reasons=False, **kwargs)[source]

Purpose: To choose the best edge to cut to disconnect a path, based on a heuristic hierarchy of cuts in descending priority: 1) high degree coordinates 2) doubling back 3) width jump

Pseudocode: 0) Combine close nodes if requested 1) Get any high degree coordinates on the path -> if there are, pick the first one and perform the cuts

  2. Check the doubling backs (and pick the highest one if above threshold)

  3. Check for width jumps (and pick the highest one)

  4. Record the cuts that will be made

  5. Make the alterations to the graph (can be deleting and creating edges)
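The descending priority hierarchy can be sketched as a simple cascade. The three detector callables are placeholders for the real high-degree, doubling-back, and width-jump checks; the dict keys `angle` and `jump` are assumptions for this sketch.

```python
# Minimal sketch of the descending cut-priority hierarchy described
# above: high degree first, then the largest doubling-back, then the
# largest width jump.
def choose_cut(path, high_degree_fn, double_back_fn, width_jump_fn):
    coords = high_degree_fn(path)  # 1) high degree coordinates win outright
    if coords:
        return ("high_degree", coords[0])
    db = double_back_fn(path)      # 2) assumed to return only those above threshold
    if db:
        return ("double_back", max(db, key=lambda c: c["angle"]))
    wj = width_jump_fn(path)       # 3) pick the highest width jump
    if wj:
        return ("width_jump", max(wj, key=lambda c: c["jump"]))
    return None                    # no cut found on this path
```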

neurd.proofreading_utils.get_edges_to_create_from_suggestion(suggestions, curr_limb_idx=None)[source]
neurd.proofreading_utils.get_edges_to_delete_from_suggestion(suggestions, curr_limb_idx=None)[source]
neurd.proofreading_utils.get_exc_filters()[source]
neurd.proofreading_utils.get_exc_filters_high_fidelity_axon_postprocessing_old()[source]
neurd.proofreading_utils.get_exc_filters_high_fidelity_axon_preprocessing()[source]
neurd.proofreading_utils.get_exc_filters_high_fidelity_axon_preprocessing_old()[source]
neurd.proofreading_utils.get_inh_filters()[source]
neurd.proofreading_utils.get_n_paths_cut(limb_results, return_multi_touch_multi_soma=False, verbose=False)[source]

Get the number of paths that will be cut

neurd.proofreading_utils.get_n_paths_not_cut(limb_results)[source]

Get the number of paths that will not be cut

neurd.proofreading_utils.get_removed_branches_from_suggestion(suggestions, curr_limb_idx=None)[source]
neurd.proofreading_utils.high_degree_coordinates_elimination_limb_branch_dict(neuron_obj, min_degree_to_find=5, axon_dependent=True, **kwargs)[source]
neurd.proofreading_utils.inh_double_back_dendrite_filter(double_back_threshold=None, **kwargs)[source]
neurd.proofreading_utils.inh_high_degree_branching_dendrite_filter(width_max=None, upstream_width_max=None, catch_error=False, **kwargs)[source]
neurd.proofreading_utils.inh_high_degree_branching_filter(width_max=None, upstream_width_max=None, catch_error=False, **kwargs)[source]
neurd.proofreading_utils.inh_low_degree_branching_filter(width_max=None, upstream_width_max=None, max_degree_to_resolve_absolute=None, filters_to_run=None, catch_error=False, **kwargs)[source]
neurd.proofreading_utils.limb_branch_dict_to_cancel_to_red_blue_groups(neuron_obj, limb_branch_dict_to_cancel, plot_error_graph_before_create_edges=False, plot_error_branches=False, created_edges=None, plot_error_graph_after_create_edges=False, plot_error_connected_components=False, plot_final_blue_red_points=False, scatter_size=0.3, plot_all_blue_red_groups=False, pair_conn_comp_errors=True, verbose=False, return_error_skeleton_points=True, **kwargs)[source]

Purpose: To create groups that should be split using blue and red team and then find the split points

Pseudocode:

For each limb: 0a) Get the subgraph of error branches 0b) Add any edges that were created between these error branches 1) Find the connected components of error branches 2) For each connected component we will build a red and a blue team

a) find all upstream nodes of error branches THAT AREN’T ERRORS -> include the error branches that these upstream valid branches came from and the skeleton point that connects them

b) Find all valid downstream nodes from the upstream valid ones -> include the skeleton points that connect them

c) Optional: Choose the downstream error branches of the current boundary error branches

At this point: Have the red and blue branches and the connecting points

3) For each node in the group, for each endpoint that is included in a boundary:

i) Attempt to restrict the skeleton by X distance from that endpoint (if too small then pick the other endpoint) ii) Find the closest triangle face to that point on that branch mesh and use that
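Steps 0-2 above amount to a connected-components pass over the error subgraph. A minimal sketch with networkx (the plain-graph input and the red/blue dict shape are assumptions; real limb objects carry far more state):

```python
# Sketch of steps 0-2: group error branches into connected components
# (red team) and attach their valid neighbors (blue team).
import networkx as nx


def red_blue_groups(limb_graph, error_branches):
    err = set(error_branches)
    sub = limb_graph.subgraph(err)             # 0a) error-branch subgraph
    groups = []
    for comp in nx.connected_components(sub):  # 1) connected components
        red = set(comp)                        # the errors form the red team
        blue = set()
        for b in comp:                         # 2a) valid neighbors -> blue
            blue |= {n for n in limb_graph.neighbors(b) if n not in err}
        groups.append({"red": red, "blue": blue})
    return groups
```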

neurd.proofreading_utils.limb_errors_to_cancel_to_red_blue_group(limb_obj, error_branches, neuron_obj=None, limb_idx=None, plot_error_graph_before_create_edges=False, plot_error_branches=False, created_edges=None, plot_error_graph_after_create_edges=False, plot_error_connected_components=False, include_one_hop_downstream_error_branches=None, one_hop_downstream_error_branches_max_distance=None, offset_distance_for_points_valid=None, offset_distance_for_points_error=None, n_points=None, n_red_points=None, n_blue_points=None, red_blue_points_method=None, plot_final_blue_red_points=False, scatter_size=0.3, pair_conn_comp_by_common_upstream=None, pair_conn_comp_errors=None, group_all_conn_comp_together=None, only_outermost_branches=None, min_error_downstream_length_total=None, verbose=False, valid_upstream_branches_restriction=None, split_red_blue_by_common_upstream=None, use_undirected_graph=None, avoid_one_red_or_blue=None, min_cancel_distance_absolute=None, min_cancel_distance_absolute_all_points=None, add_additional_point_to_no_children_branches=True, return_error_skeleton_points=True, return_synapse_points=True, **kwargs)[source]

Purpose: To lay down red and blue points on a limb given error branches

neurd.proofreading_utils.low_branch_length_large_clusters(neuron_obj, max_skeletal_length=None, min_n_nodes_in_cluster=None, limb_branch_dict_restriction=None, skeletal_distance_from_soma_min=None, plot=False, verbose=False, **kwargs)[source]

Purpose: To identify large clusters of short branches that usually signify dendrite that was converted to axon, or glia pieces

Ex:

from neurd import proofreading_utils as pru
_ = pru.low_branch_length_large_clusters_dendrite(neuron_obj, plot=True,
    max_skeletal_length=9000,
    min_n_nodes_in_cluster=20)

neurd.proofreading_utils.low_branch_length_large_clusters_axon(neuron_obj, max_skeletal_length=None, min_n_nodes_in_cluster=None, **kwargs)[source]
neurd.proofreading_utils.low_branch_length_large_clusters_dendrite(neuron_obj, max_skeletal_length=None, min_n_nodes_in_cluster=None, **kwargs)[source]
neurd.proofreading_utils.make_filter_dict(filter_name, filter_function, filter_kwargs=None, catch_error=False)[source]
neurd.proofreading_utils.merge_error_red_blue_suggestions_clean(red_blue_suggestions)[source]
neurd.proofreading_utils.merge_type_to_color(merge_type)[source]
neurd.proofreading_utils.multi_soma_split_suggestions(neuron_obj, verbose=False, max_iterations=100, plot_suggestions=False, plot_intermediates=False, plot_suggestions_scatter_size=0.4, remove_segment_threshold=None, plot_cut_coordinates=False, only_multi_soma_paths=False, default_cut_edge='last', debug=False, output_red_blue_suggestions=True, split_red_blue_by_common_upstream=True, one_hop_downstream_error_branches_max_distance=4000, offset_distance_for_points=3000, n_points=1, plot_final_blue_red_points=False, only_outermost_branches=True, include_removed_branches=False, min_error_downstream_length_total=5000, apply_valid_upstream_branches_restriction=True, debug_red_blue=False, **kwargs)[source]

Purpose: To come up with suggestions for splitting a multi-soma

Pseudocode:

  1. Iterate through all of the limbs that need to be processed

  2. Find the suggested cuts until somas are disconnected or failed

  3. Optional: Visualize the nodes and their disconnections

neurd.proofreading_utils.plot_limb_to_red_blue_groups(neuron_obj, limb_to_red_blue_groups, error_color='red', valid_color='blue', scatter_size=0.1)[source]

Purpose: To plot a picture of all the limb to red blue groups information

neurd.proofreading_utils.print_merge_type_color_map(color_map=None)[source]
neurd.proofreading_utils.proofread_neuron(input_neuron, attempt_to_split_neuron=False, plot_neuron_split_results=False, plot_neuron_before_filtering=False, plot_axon=False, plot_axon_like=False, plot_limb_branch_filter_with_disconnect_effect=True, plot_final_filtered_neuron=False, return_process_info=True, debug_time=True, verbose=True, verbose_outline=True, high_fidelity_axon_on_excitatory=True, inh_exc_class=None, perform_axon_classification=True)[source]

Purpose: To apply all of the proofreading rules to a neuron (or a pre-split neuron) and to return the proofread neuron and all of the error information

Pseudocode:

1) If requested, try to split the neuron

2) Put the neuron(s) into a list. For each neuron:

   a. Check that there are no error limbs or multiple somas

   b. Run the axon classification

   c. Run the excitatory/inhibitory classification (save results in a dict)

   d. Based on the cell type, get the filters to use

   e. Apply the filters to the neuron and save the error information

3) If not requested to split the neuron, then just return the single neuron
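The filter-application step can be sketched as a plain loop. This is an illustrative sketch, not the packaged implementation: each filter is assumed (hypothetically) to be a callable that returns the filtered neuron plus a record of what it removed:

```python
def apply_proofreading_filters(neuron_obj, filter_list, verbose=False):
    # apply each filter in order, accumulating per-filter error info
    filtering_info = {}
    for filt in filter_list:
        neuron_obj, errors = filt(neuron_obj)
        name = getattr(filt, "__name__", str(filt))
        filtering_info[name] = errors
        if verbose:
            print(f"{name}: removed {len(errors)} items")
    return neuron_obj, filtering_info
```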

Ex:

pru.proofread_neuron(
    input_neuron = neuron_obj_original,
    attempt_to_split_neuron = True,
    plot_neuron_split_results = False,
    plot_neuron_before_filtering = False,
    plot_axon = False,
    plot_axon_like = False,
    # -- for the filtering loop --
    plot_limb_branch_filter_with_disconnect_effect = True,
    plot_final_filtered_neuron = True,
    # -- for the output --
    return_process_info = True,
    debug_time = True,
    verbose = False,
    verbose_outline = True,
)

neurd.proofreading_utils.proofread_neuron_class_predetermined(neuron_obj, inh_exc_class, perform_axon_classification=False, plot_limb_branch_filter_with_disconnect_effect=True, high_fidelity_axon_on_excitatory=True, plot_final_filtered_neuron=False, plot_new_axon_limb_correspondence=False, plot_new_limb_object=False, plot_final_revised_axon_branch=False, verbose=False, verbose_outline=True, return_limb_branch_dict_to_cancel=True, filter_list=None, return_red_blue_splits=True, return_split_locations=True, neuron_simplification=True)[source]

Purpose: To apply filtering rules to a neuron that has already been classified

neurd.proofreading_utils.proofread_neuron_full(neuron_obj, cell_type=None, add_valid_synapses=False, validation=False, add_spines=False, add_back_soma_synapses=True, perform_axon_processing=False, return_after_axon_processing=False, plot_head_neck_shaft_synapses=False, plot_soma_synapses=False, proofread_verbose=False, verbose_outline=False, plot_limb_branch_filter_with_disconnect_effect=False, plot_final_filtered_neuron=False, plot_synapses_after_proofread=False, plot_compartments=False, plot_valid_synapses=False, plot_error_synapses=False, return_filtering_info=True, verbose=False, debug_time=False, return_red_blue_splits=True, return_split_locations=True, filter_list=None, add_spine_distances=False, original_mesh=None)[source]

Purpose: To proofread the neuron after it has already been:

  1. cell typed

  2. had its axon found (can optionally be performed here)

  3. had its synapses added (can optionally be performed here)

neurd.proofreading_utils.proofreading_table_processing(key, proof_version, axon_version, ver=None, compute_synapse_to_soma_skeletal_distance=True, return_errored_synapses_ids_non_axons=False, validation=False, soma_center_in_nm=False, perform_axon_classification=True, high_fidelity_axon_on_excitatory=True, perform_nucleus_pairing=True, add_synapses_before_filtering=False, verbose=True)[source]

Purpose: To do the proofreading and synapse filtering for the datajoint tables

neurd.proofreading_utils.refine_axon_for_high_fidelity_skeleton(neuron_obj, plot_new_axon_limb_correspondence=False, plot_new_limb_object=False, plot_final_revised_axon_branch=False, verbose=False, **kwargs)[source]

Purpose: To replace the axon branches with a higher fidelity representation within the neuron object (aka replacing all of the branch objects)

** Note: The Neuron should already have axon classification up to this point **

Pseudocode:

0) Get the limb branch dict for the axon (if empty then return)

1) Generate the new limb correspondence for the axon (will pass back the starting info as well)

2) Combine the data with any leftover branches that still exist in the limb object:

   a. Figure out which starting info to use (previous one or axon one)

   b. Delete all replaced branches

   c. Rename the existing branches so they do not incorporate any of the new names from the correspondence

   d. Save the computed dict of all existing branches

   e. Export a limb correspondence for those existing branches

3) Send all the limb correspondence info to create a limb object

4) Compute all of the features:

   a. Add back the computed dict

   b. Re-compute the median mesh width with no spines for all the new branches (have an option where spines can be set to 0)

5) Add the new limb:

   a. Replace the old limb with the new one

   b. Run the function that will go through and fix the limbs

neurd.proofreading_utils.save_off_meshes_skeletons(neuron_obj, save_off_compartments=True, save_off_entire_neuron=True, file_name_ending='', return_file_paths=True, split_index=None, verbose=False)[source]

Purpose: To save off the skeletons and mesh of a neuron and the compartments

neurd.proofreading_utils.soma_connections_from_split_title(title)[source]
neurd.proofreading_utils.soma_names_from_split_title(title, return_idx=False)[source]
neurd.proofreading_utils.split_disconnected_neuron(neuron_obj, plot_seperated_neurons=False, verbose=False, save_original_mesh_idx=True, filter_away_remaining_error_limbs=True, return_errored_limbs_info=True, add_split_to_description=True, copy_all_non_soma_touching=True)[source]

Purpose: If a neuron object has already been disconnected at the limbs, this function will then split the neuron object into a list of multiple neuron objects

Pseudocode: 1) check that there do not exist any error limbs 2) Do the splitting process 3) Visualize results if requested

neurd.proofreading_utils.split_neuron(neuron_obj, limb_results=None, plot_crossover_intermediates=False, plot_neuron_split_results=False, plot_soma_limb_network=False, plot_seperated_neurons=False, verbose=False, filter_away_remaining_error_limbs=True, return_error_info=False, min_skeletal_length_limb=None, **kwargs)[source]

Purpose: To take in a whole neuron that could have any number of somas and then to split it into multiple neuron objects

Pseudocode: 1) Get all of the split suggestions 2) Split all of the limbs that need splitting 3) Once the limbs have been split, split the neuron object into multiple objects

neurd.proofreading_utils.split_neuron_limb_by_seperated_network(neuron_obj, curr_limb_idx, seperate_networks=None, cut_concept_network=None, split_current_concept_network=True, error_on_multile_starting_nodes=True, delete_limb_if_empty=True, verbose=False)[source]

Purpose: To split a neuron limb up into separate limb graphs

Arguments: neuron_obj, seperate_networks, curr_limb_idx

neurd.proofreading_utils.split_neuron_limbs_by_suggestions(neuron_obj, split_suggestions, plot_soma_limb_network=False, verbose=False)[source]

Purpose:

Will take the suggestions of the splits and split the necessary limbs of the neuron object and return the split neuron

neurd.proofreading_utils.split_success(neuron_obj)[source]
neurd.proofreading_utils.split_suggestions_to_concept_networks(neuron_obj, limb_results, apply_changes_to_limbs=False)[source]

Will take the output of the multi_soma_split suggestions and return the concept network with all of the cuts applied

neurd.proofreading_utils.split_suggestions_to_concept_networks_old(neuron_obj, limb_results, apply_changes_to_limbs=False)[source]

Will take the output of the multi_soma_split suggestions and return the concept network with all of the cuts applied

neurd.proofreading_utils.split_type_from_title(title)[source]
neurd.proofreading_utils.synapse_filtering(neuron_obj, split_index, nucleus_id, segment_id=None, return_synapse_filter_info=True, return_synapse_center_data=False, return_error_synapse_ids=True, return_valid_synapse_centers=False, return_errored_synapses_ids_non_axons=False, return_error_table_entries=True, mapping_threshold=500, plot_synapses=False, original_mesh_method=True, original_mesh=None, original_mesh_kdtree=None, valid_faces_on_original_mesh=None, axon_faces_on_original_mesh=None, apply_non_axon_presyn_errors=True, precomputed_synapse_dict=None, validation=False, verbose=False)[source]

Pseudocode:

1) Get the synapses that are presyn or postsyn to the segment id (but not both)

2) Build a KDTree of the final mesh

------ For presyn and postsyn (as type): ------

3) Restrict the table to where the segment id is that type

4) Fetch the synapses and scale the centers

5) Find the distance of the synapses to the mesh

6) If within the distance threshold, then consider the synapse valid

7) For synapses to keep, create a list of dictionaries saving off: synapse_id, type (presyn or postsyn), segment_id, split_id, nucleus_id

8) Save off the stats on how many synapses of that type you started with and how many you finished with

9) Save off the synapse centers into valid and error groups

------ End Loop ------

10) Compile all stats on erroring

11) Compile all synapse centers

Return the dictionaries to write and also:
- stats
- synapse centers
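The distance-threshold step of the filtering can be sketched in plain Python. This is a brute-force stand-in (the real pipeline uses a KDTree over the mesh); the argument names mirror `mapping_threshold` from the signature, but the data shapes are assumptions:

```python
import math

def filter_synapses_by_mesh_distance(synapse_centers, face_centers,
                                     mapping_threshold=500):
    # classify each synapse center as valid/error by its distance
    # to the nearest mesh face center
    def nearest_dist(pt):
        return min(math.dist(pt, f) for f in face_centers)

    valid = [s for s in synapse_centers if nearest_dist(s) <= mapping_threshold]
    error = [s for s in synapse_centers if nearest_dist(s) > mapping_threshold]
    return valid, error
```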

neurd.proofreading_utils.v4_exc_filters()[source]
neurd.proofreading_utils.v5_exc_filters()[source]
neurd.proofreading_utils.v6_exc_filters()[source]
neurd.proofreading_utils.v6_exc_filters_old()[source]
neurd.proofreading_utils.v6_inh_filters()[source]
neurd.proofreading_utils.v7_exc_filters(dendrite_branching_filters=None)[source]
neurd.proofreading_utils.v7_inh_filters(dendrite_branching_filters=None)[source]
neurd.proofreading_utils.valid_synapse_records_to_unique_synapse_df(synapse_records)[source]

To turn the records of the synapses into a dataframe of the unique synapses

Application: For turning the synapse filtering output into a valid dataframe

Ex: pru.valid_synapse_records_to_unique_synapse_df(keys_to_write_without_version)
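The deduplication this describes can be sketched without pandas. A minimal stand-in, assuming each record is a dict carrying a `synapse_id` field (the field name is illustrative):

```python
def unique_synapse_records(synapse_records):
    # collapse repeated records down to one per synapse_id,
    # keeping the first occurrence of each
    seen = {}
    for rec in synapse_records:
        seen.setdefault(rec["synapse_id"], rec)
    return list(seen.values())
```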

neurd.proximity_analysis_utils module

neurd.proximity_analysis_utils.add_euclidean_dist_to_prox_df(df, centroid_df=None, in_place=False, add_signed_single_axes_dists=True, add_depth_dist=False)[source]

Purpose: To add the pre and post euclidean distances to an edge dataframe with the proximities

neurd.proximity_analysis_utils.conversion_df(proximity_df, presyn_column='presyn', postsyn_column='postsyn', separate_compartments=True, separate_by_neuron_pairs=False, verbose=False)[source]

Purpose: Given a proximity table, will calculate the conversion df (for potentially different compartments)

neurd.proximity_analysis_utils.conversion_df_from_proximity_df(df, in_place=False, verbose=False, sum_aggregation=True)[source]

Purpose: to turn a table with individual proximity entries into a table with the source and target and the number of synapses and number of proximities

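The aggregation this performs can be sketched in plain Python (illustrative field names; the packaged version operates on pandas dataframes):

```python
from collections import defaultdict

def conversion_rows(proximity_rows):
    # aggregate individual proximity rows into per-(source, target)
    # totals of proximities and synapses, plus the conversion ratio
    totals = defaultdict(lambda: {"n_prox": 0, "n_syn": 0})
    for row in proximity_rows:
        key = (row["presyn"], row["postsyn"])
        totals[key]["n_prox"] += 1
        totals[key]["n_syn"] += row.get("n_synapses", 0)
    return [
        {"presyn": pre, "postsyn": post,
         "n_prox": t["n_prox"], "n_syn": t["n_syn"],
         "conversion": t["n_syn"] / t["n_prox"]}
        for (pre, post), t in totals.items()
    ]
```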

neurd.proximity_analysis_utils.conversion_rate(df)[source]
neurd.proximity_analysis_utils.conversion_rate_by_attribute_and_cell_type_pairs(df, hue='e_i_predicted', hue_pre=None, hue_post=None, hue_pre_array=None, hue_post_array=None, attribute='proximity_dist', attribute_n_intervals=10, attribute_intervals=None, restrictions=None, presyn_width_max=100, verbose=False, plot=False)[source]

Purpose: To get the different conversion ratios for different cell types based on the contacts table

neurd.proximity_analysis_utils.example_basal_conversion_rate(df=None, **kwargs)[source]
neurd.proximity_analysis_utils.example_pairwise_postsyn_analysis()[source]
neurd.proximity_analysis_utils.pairwise_presyn_proximity_onto_postsyn(segment_id, split_index=0, plot_postsyn=False, verbose=False, subgraph_type='node', presyn_table_restriction=None, proximity_restrictions=("postsyn_compartment != 'soma'",))[source]

Purpose: To end up with a table that has the following for every pairwise proximity located on the same graph:

1) Skeletal/Euclidean distance
2) Pre 1 seg/split/prox_id, Pre 2 seg/split/prox_id (can be used later to match the functional data)
3) Pre 1 n_synapses, Pre 2 n_synapses
4) Anything else about the proximity can be found later using the prox_id

Pseudocode:

-- getting the postsyn side --

1) Download the postsyn graph object

2) Create the graphs for all subgraphs and iterate through the graphs creating arrays by:

   a. generating the graph

   b. exporting the coordinates

   c. adding to the graph idx vector and node idx vector

3) Build a KD tree of all the coordinates (this is the data we will match to)

-- getting the presyn side --

1) Get the table of all proximities onto the postsyn

2) Filter the table to only certain presyns (like functional matches)

3) Filter the table by any other restrictions (like no soma synapses)

4) Get segment_id, split_index, prox_id and postsyn_x_nm for all proximities

5) Run the KDTree on all of the postsyn_x_nm of the proximities to find the nearest postsyn point

6) Use the closest point vector to create new vectors for each proximity of graph_idx, node_idx and coordinate (put these in the table)

-- pairwise comparison (only within the same graph) --

For each unique graph (that made it into the graph_idx):

1) Filter the whole df to only that graph (if 1 or fewer rows then return)

2) For each node, find the graph distance to all other node idxs not on the row

3) Find the euclidean distance between the node coordinate and all other node coordinates

4) Store the result in the table as: pre 1 seg/split/prox_id, pre 2 seg/split/prox_id, graph distance, euclidean dist
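The within-graph pairwise step can be sketched with stdlib tools only. A minimal stand-in that computes euclidean distances between every pair of proximities on the same subgraph (graph distance omitted; the dict field names are assumptions):

```python
import math
from itertools import combinations

def pairwise_proximities(prox_points):
    # group proximities by the postsyn subgraph they landed on,
    # then compare every pair inside each group
    by_graph = {}
    for p in prox_points:
        by_graph.setdefault(p["graph_idx"], []).append(p)
    pairs = []
    for graph_idx, pts in by_graph.items():
        if len(pts) <= 1:
            continue  # pairwise comparison needs at least two proximities
        for p1, p2 in combinations(pts, 2):
            pairs.append({"prox_1": p1["prox_id"], "prox_2": p2["prox_id"],
                          "euclidean_dist": math.dist(p1["coord"], p2["coord"])})
    return pairs
```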

neurd.proximity_analysis_utils.plot_prox_func_vs_attribute_from_edge_df(edge_df, source='Exc', targets=['Exc', 'Inh'], column='presyn_skeletal_distance_to_soma', divisor=1000, hue='connection_type', percentile_upper=99, percentile_lower=0, func=<function conversion_rate>, bins=None, n_bins=10, equal_depth_bins=True, data_source='H01', axes_fontsize=35, title_fontsize=40, tick_fontsize=30, legend_fontsize=25, title_pad=15, legend_title='Connection Type', linewidth=3, xlabel='Axon Distance to Soma ($\\mu m$)', ylabel='Mean Conversion Rate', title='Conversion Rate vs. Axon\n Distance to Soma', ax=None, figsize=(8, 7), add_scatter=True, scatter_size=100, verbose=False, verbose_bin_df=False, legend_label=None, bins_mid_type='weighted', return_n_dict=False)[source]

Purpose: To plot a function of the proximity df (e.g. conversion rate) against an attribute of the proximities (like presyn distance)

neurd.proximity_analysis_utils.print_n_dict(n_dict, category_joiner='\n', verbose=True)[source]

Purpose: To print out the category and n_proximity dict from the optional returned n_dict data structure from plot_prox_func_vs_attribute_from_edge_df

Pseudocode:

1) Iterate through all keys:

   a. get the datapoints (rounded to 2 decimal places)

   b. get the proximity counts

   c. create a str for all datapoints as: xi (n_prox = pi),

2) Append all category strings
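The formatting described above can be sketched as follows. The nested shape of `n_dict` assumed here (per-category lists of x values and proximity counts) is an illustration, not the documented structure:

```python
def n_dict_to_str(n_dict, category_joiner="\n"):
    # format each category's datapoints as "x (n_prox = p)" strings,
    # then join the categories with the chosen separator
    lines = []
    for category, data in n_dict.items():
        pts = ", ".join(f"{round(x, 2)} (n_prox = {p})"
                        for x, p in zip(data["x"], data["n_prox"]))
        lines.append(f"{category}: {pts}")
    return category_joiner.join(lines)
```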

neurd.proximity_analysis_utils.str_of_n_prox_n_syn(df, category_column='category', independent_axis='proximity_dist', prox_column='n_prox', syn_column='n_syn', category_str_joiner='\n', verbose=False)[source]

Purpose: Printing the n_prox and n_syn for each datapoint in a conversion-by-x plot

Pseudocode:

1) Iterate through each category:

   a. Get the proximity dist (rounded to 2 decimal places)

   b. Get the n_prox

   c. Get the n_syn

   d. Create a string concatenated for all prox dists of: prox_dist (n_prox = x, n_syn = y),

2) Concatenate all category strings with the category_str_joiner

neurd.proximity_utils module

Notes on proximities: n_synapses can be undercounted in the proximity counting when there are many synapses, because the cancellation distance is 5000 but the search for synapses only extends 3000, so synapses falling between the search range and the cancellation range can be missed.

neurd.proximity_utils.A_prox_from_G_prox(G, **kwargs)[source]
neurd.proximity_utils.A_syn_from_G_prox(G, **kwargs)[source]
neurd.proximity_utils.example_proximity(verbose=True, plot=True, return_df=True)[source]
neurd.proximity_utils.plot_proximity(prox_data, mesh_presyn, mesh_postsyn, prox_no_syn_color='aqua', prox_with_syn_color='green', presyn_mesh_color='red', postsyn_mesh_color='blue', verbose=True)[source]

Purpose: to plot the proximities, separated into those with synapses and those without

Pseudocode: 1) Divide the proximity list into those with synapses and those without 2) Find coordinates for each group 3) Plot
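Step 1 of the pseudocode is a simple partition. A sketch, assuming each proximity carries an `n_synapses` count (the field name is an assumption):

```python
def split_prox_by_synapse(prox_data):
    # partition proximities into those with at least one synapse
    # and those with none
    with_syn = [p for p in prox_data if p.get("n_synapses", 0) > 0]
    without_syn = [p for p in prox_data if p.get("n_synapses", 0) == 0]
    return with_syn, without_syn
```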

neurd.proximity_utils.postsyn_proximity_data(segment_id, split_index, plot=False, verbose=False, check_starting_coord_match_skeleton=False, neuron_obj=None)[source]

Purpose: Get the postsyn proximity information before pairwise proximities are computed

neurd.proximity_utils.presyn_proximity_data(segment_id, split_index=0, plot=False, verbose=False, neuron_obj=None)[source]

Purpose: Get the presyn proximity information before pairwise proximities are computed

neurd.proximity_utils.proximity_pre_post(segment_id_pre, segment_id_post, split_index_pre=0, split_index_post=0, presyn_prox_data=None, postsyn_prox_data=None, max_proximity_dist=5000, presyn_coordinate_cancel_dist=10000, max_attribute_dist=3000, subtract_width_from_euclidean_dist=True, plot=False, plot_attributes_under_threshold=False, plot_proximities=False, verbose=True, verbose_time=False, return_df=False, verbose_total_time=False)[source]

Purpose: Will compute the proximity dictionaries for a source and target pair of neurons

Pseudocode: 1) Get the presyn information 2) Get the postsyn information 3) Run the contact finding loop and save off the results

Example: pxu.example_proximity()

neurd.proximity_utils.proximity_search_neurons_from_bounding_box(segment_id, split_index, verbose=False, buffer=7000, min_dendrite_skeletal_length=1000000, return_dict=False)[source]
neurd.proximity_utils.proximity_search_neurons_from_database(segment_id, split_index=0)[source]
neurd.proximity_utils.synapse_coordinates_from_df(df)[source]

neurd.soma_extraction_utils module

neurd.soma_extraction_utils.extract_soma_center(segment_id=12345, current_mesh_verts=None, current_mesh_faces=None, mesh=None, outer_decimation_ratio=None, large_mesh_threshold=None, large_mesh_threshold_inner=None, soma_width_threshold=None, soma_size_threshold=None, inner_decimation_ratio=None, segmentation_clusters=3, segmentation_smoothness=0.2, volume_mulitplier=None, side_length_ratio_threshold=None, soma_size_threshold_max=None, delete_files=True, backtrack_soma_mesh_to_original=None, boundary_vertices_threshold=None, poisson_backtrack_distance_threshold=None, close_holes=None, remove_inside_pieces=None, size_threshold_to_remove=None, pymeshfix_clean=None, check_holes_before_pymeshfix=None, second_poisson=None, segmentation_at_end=None, last_size_threshold=None, largest_hole_threshold=None, max_fail_loops=None, perform_pairing=None, verbose=False, return_glia_nuclei_pieces=True, backtrack_soma_size_threshold=None, backtrack_match_distance_threshold=1500, filter_inside_meshes_after_glia_removal=False, max_mesh_sized_filtered_away=90000, filter_inside_somas=True, backtrack_segmentation_on_fail=True, glia_pieces=None, nuclei_pieces=None, glia_volume_threshold_in_um=None, glia_n_faces_threshold=None, glia_n_faces_min=None, nucleus_min=None, nucleus_max=None, second_pass_size_threshold=None, **kwargs)[source]
neurd.soma_extraction_utils.filter_away_inside_soma_pieces(main_mesh_total, pieces_to_test, significance_threshold=2000, n_sample_points=3, required_outside_percentage=0.9, print_flag=False, return_inside_pieces=False)[source]
neurd.soma_extraction_utils.find_soma_centroid_containing_meshes(soma_mesh_list, split_meshes, verbose=False)[source]

Purpose: Will find the mesh piece that most likely has the soma that was found by the poisson soma finding process

neurd.soma_extraction_utils.find_soma_centroids(soma_mesh_list)[source]

Will return a list of soma centers given one mesh or a list of meshes; the center is found by averaging the vertices

neurd.soma_extraction_utils.glia_nuclei_faces_from_mesh(mesh, glia_meshes, nuclei_meshes, return_n_faces=False, verbose=False)[source]

Purpose: To map the glia and nuclei meshes to the faces of the main mesh

neurd.soma_extraction_utils.grouping_containing_mesh_indices(containing_mesh_indices)[source]
Purpose: To take a dictionary that maps each soma index to the mesh piece containing it, e.g. {0: 0, 1: 0}, and rearrange it into a dictionary that maps each mesh piece to a list of all the somas contained inside of it

Pseudocode:

1) get all the unique mesh pieces and create a dictionary with an empty list for each

2) iterate through the containing_mesh_indices dictionary and add each soma index to the list of its containing mesh index

3) check that none of the lists are empty or else something has failed
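The inversion described above can be sketched in a few lines (a minimal stand-in for the packaged implementation):

```python
def group_by_containing_mesh(containing_mesh_indices):
    # invert {soma_idx: mesh_idx} into {mesh_idx: [soma_idx, ...]}
    groupings = {}
    for soma_idx, mesh_idx in containing_mesh_indices.items():
        groupings.setdefault(mesh_idx, []).append(soma_idx)
    # every group must be non-empty or something upstream failed
    assert all(groupings.values())
    return groupings
```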

neurd.soma_extraction_utils.largest_mesh_piece(msh)[source]
neurd.soma_extraction_utils.original_mesh_soma(mesh, original_mesh, bbox_restriction_multiplying_ratio=1.7, match_distance_threshold=1500, mesh_significance_threshold=1000, return_inside_pieces=True, return_multiple_pieces_above_threshold=True, soma_size_threshold=8000, verbose=False)[source]

Purpose: To take an approximation of the soma mesh (usually from a poisson surface reconstruction) and map it to faces on the original mesh

Pseudocode:

1) Restrict the larger mesh with a bounding box of the current soma

2) Remove all interior pieces

3) Save the interior pieces if asked for a pass-back

4) Split the main mesh

5) Find the meshes that contain the soma

6) Map to the original with a high distance threshold

7) Split the new mesh and take the largest

neurd.soma_extraction_utils.original_mesh_soma_old(mesh, soma_meshes, sig_th_initial_split=100, subtract_soma_distance_threshold=550, split_meshes=None)[source]

Purpose: Will help backtrack the Poisson surface reconstruction soma to the soma of the actual mesh

Application: By backtracking to mesh it will help with figuring out false somas from neural 3D junk

Ex:

multi_soma_seg_ids = np.unique(multi_soma_seg_ids)
seg_id_idx = -2
seg_id = multi_soma_seg_ids[seg_id_idx]

dec_mesh = get_decimated_mesh(seg_id)
curr_soma_meshes = get_seg_extracted_somas(seg_id)
curr_soma_mesh_list = get_soma_mesh_list(seg_id)

from mesh_tools import skeleton_utils as sk
sk.graph_skeleton_and_mesh(main_mesh_verts=dec_mesh.vertices,
                           main_mesh_faces=dec_mesh.faces,
                           other_meshes=curr_soma_meshes,
                           other_meshes_colors="red")

soma_meshes_new = original_mesh_soma(
    mesh=dec_mesh,
    soma_meshes=curr_soma_meshes,
    sig_th_initial_split=15)

neurd.soma_extraction_utils.output_global_parameters_glia(**kwargs)[source]
neurd.soma_extraction_utils.output_global_parameters_nuclei(**kwargs)[source]
neurd.soma_extraction_utils.output_global_parameters_soma(**kwargs)[source]
neurd.soma_extraction_utils.plot_soma_products(mesh, soma_products, verbose=True)[source]
neurd.soma_extraction_utils.remove_nuclei_and_glia_meshes(mesh, glia_volume_threshold_in_um=None, glia_n_faces_threshold=400000, glia_n_faces_min=100000, nucleus_min=None, nucleus_max=None, connectivity='edges', try_hole_close=False, verbose=False, return_glia_nucleus_pieces=True, **kwargs)[source]

Will remove interior faces of a mesh with a certain significant size

Pseudocode:

1) Run the mesh interior

2) Divide all of the interior meshes into glia (those above the threshold) and nuclei (those below)

For glia:

3) Do all of the removal process and get the resulting neuron

For nuclei:

4) Do the removal process on the mesh that already has glia removed (use subtraction with exact_match=False)
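The glia/nuclei division in step 2 can be sketched as a threshold split on face counts. The thresholds mirror defaults from the signature, but treating pieces as bare face counts is a simplification for illustration:

```python
def split_interior_pieces(piece_face_counts,
                          glia_n_faces_threshold=400000,
                          nucleus_max=100000):
    # large interior pieces are candidate glia, small ones candidate nuclei;
    # pieces between the two cutoffs fall into neither bucket here
    glia = [n for n in piece_face_counts if n >= glia_n_faces_threshold]
    nuclei = [n for n in piece_face_counts if n < nucleus_max]
    return glia, nuclei
```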

neurd.soma_extraction_utils.side_length_check(current_mesh, side_length_ratio_threshold=3)[source]
neurd.soma_extraction_utils.side_length_ratios(current_mesh)[source]

Will compute the ratios of the bounding box sides, to be used later to see if there is skewness
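A sketch of the ratio computation and the accompanying check, assuming the three bounding-box side lengths are already extracted (the default threshold mirrors `side_length_ratio_threshold=3` from the signature):

```python
from itertools import combinations

def side_length_ratios(side_lengths):
    # ratio (larger / smaller) for every pair of bounding-box sides;
    # large ratios indicate a skewed, non-soma-like bounding box
    return [max(a, b) / min(a, b) for a, b in combinations(side_lengths, 2)]

def side_length_check(side_lengths, side_length_ratio_threshold=3):
    return max(side_length_ratios(side_lengths)) < side_length_ratio_threshold
```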

neurd.soma_extraction_utils.soma_connectivity = 'edges'

Checking the new validation checks

neurd.soma_extraction_utils.soma_indentification(mesh_decimated, verbose=False, plot=False, **soma_extraction_parameters)[source]
neurd.soma_extraction_utils.soma_volume_check(current_mesh, multiplier=8, verbose=True)[source]
neurd.soma_extraction_utils.soma_volume_ratio(current_mesh, watertight_method='poisson', max_value=1000)[source]

bounding_box_oriented: rotates the box to reduce the volume
bounding_box: does not rotate the box and keeps it axis aligned

** checks to see if the mesh is closed and if not then makes it closed **

neurd.soma_extraction_utils.subtract_soma(current_soma_list, main_mesh, significance_threshold=200, distance_threshold=1500, connectivity='edges')[source]

neurd.soma_splitting_utils module

neurd.soma_splitting_utils.calculate_multi_soma_split_suggestions(neuron_obj, plot=False, store_in_obj=True, plot_intermediates=False, plot_suggestions=False, plot_cut_coordinates=False, only_multi_soma_paths=False, verbose=False, **kwargs)[source]
neurd.soma_splitting_utils.limb_red_blue_dict_from_red_blue_splits(red_blue_split_results, attributes=('valid_points', 'error_points', 'coordinate'), stack_all_attributes=True)[source]

A dictionary data structure that stores for each limb:

- valid points: coordinates that should belong to the existing neuronal process (a marker of where the valid mesh is)
- error points: coordinates that should belong to an incorrect neuronal process resulting from merge errors (a marker of where the error mesh starts)
- coordinate: locations of split points used in the elimination of soma-to-soma paths

The valid and error points can be used as inputs for automatic mesh splitting algorithms in other pipelines (ex: Neuroglancer)

neurd.soma_splitting_utils.multi_soma_split_execution(neuron_obj, split_results=None, verbose=False, store_in_obj=True, add_split_index=True)[source]

Purpose: to execute the multi-soma split suggestions on the neuron (if not already generated then generate)

neurd.soma_splitting_utils.path_to_cut_and_coord_dict_from_split_suggestions(split_results, return_total_coordinates=True)[source]
neurd.soma_splitting_utils.plot_red_blue_split_suggestions_per_limb(neuron_obj, red_blue_splits=None, split_results=None, plot_cut_paths=True, plot_red_blue_points=True, plot_skeleton=True, valid_color='blue', error_color='red', coordinate_color='yellow', path_color='green', valid_size=0.3, error_size=0.3, coordinate_size=1.0, path_size=0.3, verbose=True, plot_somas=True, soma_color='orange', **kwargs)[source]

Purpose: to plot the splits for each limb based on the split results

Pseudocode:

1) generate the red/blue split results (if not already passed in)

2) iterate through the limbs:

   a. gather the valid points, error points, coordinates

   b. use the plot object to plot the limb

neurd.soma_splitting_utils.red_blue_split_dict_by_limb_from_red_blue_split_results(red_blue_splits)[source]

neurd.spine_utils module

To Do: Want to add how close a spine is to upstream and downstream endpoint

class neurd.spine_utils.Spine(mesh, calculate_spine_attributes=False, branch_obj=None, **kwargs)[source]

Bases: object

Class that holds information about a spine extracted from a neuron

mesh_face_idx
Type:

a list of face indices of the branch that belong to the spine mesh

mesh
Type:

the submesh of the branch that represents the spine (mesh_face_idx indexed into the branch mesh)

neck_face_idx
Type:

a list of face indices of the spine’s mesh that were classified as the neck (can be empty if not detected)

head_face_idx
Type:

list of face indices of the spine’s mesh that were classified as the head (can be empty if not detected)

neck_sdf
Type:

the sdf value of the neck submesh from the clustering algorithm used to segment the head from the neck

head_sdf
Type:

the sdf value of the head submesh from the clustering algorithm used to segment the head from the neck

head_width
Type:

a width approximation using ray tracing of the head submesh

neck_width
Type:

a width approximation using ray tracing of the neck submesh

volume
Type:

volume of entire mesh

spine_id
Type:

unique identifier for spine

sdf
Type:

the sdf value of the spine submesh from the clustering algorithm used to segment the spine from the branch mesh

endpoints_dist
Type:

skeletal walk distance from the skeletal point closest to the start of the spine protrusion to the branch skeletal endpoints

upstream_dist
Type:

skeletal walk distance from the skeletal point closest to the start of the spine protrusion to the upstream branch skeletal endpoint

downstream_dist
Type:

skeletal walk distance from the skeletal point closest to the start of the spine protrusion to the downstream branch skeletal endpoint

coordinate_border_verts
coordinate
Type:

one coordinate of the border vertices to be used for spine locations

bbox_oriented_side_lengths
head_bbox_oriented_side_lengths
neck_bbox_oriented_side_lengths
head_mesh_splits
head_mesh_splits_face_idx
branch_width_overall
branch_skeletal_length
branch_width_at_base
skeleton
Type:

surface skeleton over the spine mesh

skeletal_length
Type:

length of spine skeleton

# -- attributes similar to the synapse attributes
closest_branch_face_idx
closest_sk_coordinate
Type:

3D location in space of the closest skeletal point on the branch where the spine is located

closest_face_coordinate
Type:

center coordinate of the closest mesh face on the branch where the spine is located

closest_face_dist
Type:

distance from synapse coordinate to closest_face_coordinate

soma_distance
Type:

skeletal walk distance from synapse to soma

soma_distance_euclidean
Type:

straight path distance from synapse to soma center

compartment
Type:

the compartment of the branch that the spine is located on

limb_idx
Type:

the limb identifier that the spine is located on

branch_idx
Type:

the branch identifier that the spine is located on

__init__(mesh, calculate_spine_attributes=False, branch_obj=None, **kwargs)[source]
property area
property area_of_border_verts
property base_coordinate
property base_coordinate_x_nm
property base_coordinate_y_nm
property base_coordinate_z_nm
property bbox_oriented_side_lengths
property bbox_oriented_side_max
property bbox_oriented_side_middle
property bbox_oriented_side_min
property border_area
calculate_closest_mesh_sk_coordinates(branch_obj, **kwargs)[source]
calculate_face_idx(original_mesh=None, original_mesh_kdtree=None, **kwargs)[source]
calculate_head_neck(**kwargs)[source]
calculate_skeleton()[source]
calculate_spine_attributes(branch_obj=None)[source]
calculate_volume()[source]
property coordinate_border_verts_area
property endpoint_dist_0
property endpoint_dist_1
export(attributes_to_skip=None, attributes_to_add=None, **kwargs)[source]
property head_area
property head_bbox_max_x_nm
property head_bbox_max_y_nm
property head_bbox_max_z_nm
property head_bbox_min_x_nm
property head_bbox_min_y_nm
property head_bbox_min_z_nm
property head_bbox_oriented_side_lengths
property head_bbox_oriented_side_max
property head_bbox_oriented_side_middle
property head_bbox_oriented_side_min
property head_exist
property head_mesh
property head_mesh_splits
property head_mesh_splits_face_idx
head_mesh_splits_from_index(index)[source]
property head_mesh_splits_max
property head_mesh_splits_min
property head_n_faces
property head_skeletal_length
property head_volume
property head_width_ray
property head_width_ray_80_perc
property mesh_center
property mesh_center_x_nm
property mesh_center_y_nm
property mesh_center_z_nm
property n_faces
property n_faces_head
property n_faces_neck
property n_heads
property neck_area
property neck_bbox_max_x_nm
property neck_bbox_max_y_nm
property neck_bbox_max_z_nm
property neck_bbox_min_x_nm
property neck_bbox_min_y_nm
property neck_bbox_min_z_nm
property neck_bbox_oriented_side_lengths
property neck_bbox_oriented_side_max
property neck_bbox_oriented_side_middle
property neck_bbox_oriented_side_min
property neck_mesh
property neck_n_faces
property neck_skeletal_length
property neck_volume
property neck_width_ray
property neck_width_ray_80_perc
property no_head_face_idx
property no_head_mesh
plot_head_neck(**kwargs)[source]
property sdf_70_perc
property sdf_90_perc
property sdf_mean
property sdf_median
property shaft_border_area
property skeletal_length
property skeleton
property spine_area
property spine_bbox_max_x_nm
property spine_bbox_max_y_nm
property spine_bbox_max_z_nm
property spine_bbox_min_x_nm
property spine_bbox_min_y_nm
property spine_bbox_min_z_nm
property spine_bbox_oriented_side_max
property spine_bbox_oriented_side_middle
property spine_bbox_oriented_side_min
property spine_n_faces
property spine_skeletal_length
property spine_volume
property spine_width_ray
property spine_width_ray_80_perc
neurd.spine_utils.add_head_neck_shaft_spine_objs(neuron_obj, add_synapse_labels=True, filter_spines_for_size=True, add_distance_attributes=True, verbose=False)[source]

Will do the additional processing that adds the spine objects to a neuron and then creates the head_neck_shaft_idx for the branches

Application: Can be used later to map synapses to the accurate label

neurd.spine_utils.add_spine_densities_to_spine_stats_df(df, head_types=['no_head', 'head'], skeletal_length_divisor=1000, compartments=None, in_place=False)[source]

Purpose: Compute spine densities from spine_stats_df
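The density computation is a division of per-compartment spine counts by branch skeletal length. A minimal pandas sketch of the idea, using hypothetical column names (`n_head`, `skeletal_length`) rather than the package's actual schema:

```python
import pandas as pd

def add_spine_density(df, count_col="n_head", skeletal_length_divisor=1000):
    # Density = spine count per unit skeletal length, e.g. per micron
    # if lengths are stored in nm and the divisor is 1000.
    df = df.copy()
    df[f"{count_col}_density"] = df[count_col] / (
        df["skeletal_length"] / skeletal_length_divisor
    )
    return df

stats = pd.DataFrame({"n_head": [10, 4], "skeletal_length": [20000, 8000]})
out = add_spine_density(stats)
print(out["n_head_density"].tolist())  # [0.5, 0.5]
```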

neurd.spine_utils.add_synapse_densities_to_spine_synapse_stats_df(df, in_place=True, compartments=None, spine_compartments=None, skeletal_length_divisor=1000, eta=1e-06, return_features=False, max_value_to_set_to_zero=100000)[source]

Purpose: Want to compute the synapse densities

neurd.spine_utils.adjust_obj_with_face_offset(spine_obj, face_offset, verbose=False)[source]

Purpose: To adjust the spine properties that would be affected by a different face idx

Ex:
b_test = neuron_obj[0][18]
sp_obj = b_test.spines_obj[0]
sp_obj.export()

spu.adjust_spine_obj_with_face_offset(
    sp_obj, face_offset=face_offset, verbose=True
).export()
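Conceptually the adjustment just shifts every face-index attribute by a constant offset (the position of the branch mesh inside a larger mesh). A minimal sketch with a hypothetical stand-in object; the attribute names are illustrative, not the full set the package adjusts:

```python
import numpy as np

class SpineStub:
    """Hypothetical stand-in for a spine object holding face-index attributes."""
    def __init__(self, mesh_face_idx, head_face_idx):
        self.mesh_face_idx = np.asarray(mesh_face_idx)
        self.head_face_idx = np.asarray(head_face_idx)

def adjust_with_face_offset(spine, face_offset):
    # Shift each face-index attribute so the indices are valid
    # relative to the larger reference mesh.
    spine.mesh_face_idx = spine.mesh_face_idx + face_offset
    spine.head_face_idx = spine.head_face_idx + face_offset
    return spine

sp = SpineStub([0, 1, 2], [1, 2])
adjust_with_face_offset(sp, 100)
print(sp.mesh_face_idx.tolist())  # [100, 101, 102]
```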

neurd.spine_utils.apply_sdf_filter(sdf_values, sdf_median_mean_difference_threshold=0.025, return_not_passed=False)[source]
neurd.spine_utils.area_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.area_of_border_verts(spine_obj, default_value=0)[source]
neurd.spine_utils.bbox_max_x_nm_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.bbox_max_y_nm_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.bbox_max_z_nm_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.bbox_min_x_nm_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.bbox_min_y_nm_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.bbox_min_z_nm_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.bouton_non_bouton_idx_from_branch(branch_obj, plot_branch_boutons=False, plot_face_idx=False, verbose=False)[source]

Purpose: To add axon labels to the branch

Ex:
return_face_idx = spu.bouton_non_bouton_idx_from_branch(
    branch_obj=neuron_obj[0][0],
    plot_branch_boutons=False,
    verbose=True,
    plot_face_idx=True,
)

neurd.spine_utils.calculate_branch_overall_features(spine_obj, branch_obj, branch_features=None)[source]
neurd.spine_utils.calculate_branch_width_at_base(spine_obj, branch_obj)[source]
neurd.spine_utils.calculate_endpoints_dist(branch_obj, spine_obj)[source]
neurd.spine_utils.calculate_face_idx(spine_obj, original_mesh=None, original_mesh_kdtree=None, **kwargs)[source]

Purpose: To calculate the original faces of the spine to a reference mesh

neurd.spine_utils.calculate_soma_distance_euclidean(spine_obj, soma_center=None)[source]
neurd.spine_utils.calculate_soma_distance_skeletal(spine_obj, upstream_skeletal_length=None)[source]
neurd.spine_utils.calculate_spine_attributes(spine_obj, branch_obj=None, calculate_coordinates=True, calculate_head_neck=False, branch_shaft_mesh_face_idx=None, soma_center=None, upstream_skeletal_length=None, branch_features=None, verbose_time=False, mesh=None, **kwargs)[source]

Purpose

Given a spine mesh (and potentially the branch object it resides on) calculates descriptive statistics

Pseudocode

  1. calculates the volume

  2. calculates the bounding box side lengths

  3. calculates branch relative statistics:
    1. closest mesh/skeleton coordinate

    2. distance of spine base to each skeleton endpoint

    3. width of branch obj at the base of the spine

  4. Optionally calculates the head and neck clusters and statistics. spu.calculate_head_neck

Global Parameters to Set

spine_obj (_type_): _description_
branch_obj (_type_, optional): _description_, by default None
calculate_coordinates (bool, optional): _description_, by default True
calculate_head_neck (bool, optional): _description_, by default False
branch_shaft_mesh_face_idx (_type_, optional): _description_, by default None
soma_center (_type_, optional): _description_, by default None
upstream_skeletal_length (_type_, optional): _description_, by default None
branch_features (_type_, optional): _description_, by default None
verbose_time (bool, optional): _description_, by default False
mesh (_type_, optional): _description_, by default None

returns (_type_): _description_

neurd.spine_utils.calculate_spine_attributes_for_list(spine_objs, branch_obj=None, calculate_coordinates=True, calculate_head_neck=False, verbose_time=False, mesh=None, **kwargs)[source]
neurd.spine_utils.calculate_spine_obj_attr_for_neuron(neuron_obj, verbose=False, create_id=True, **kwargs)[source]

Purpose: To set all of the neuron_obj spine attributes

Pseudocode: for each limb:

for each branch:
  1. calculate the mesh and skeleton info

  2. calculate_branch_attr_soma_distances_on_limb

calculate the soma distance

Ex:
neuron_obj = spu.calculate_spine_obj_attr_for_neuron(neuron_obj, verbose=True)

neurd.spine_utils.calculate_spine_obj_mesh_skeleton_coordinates(branch_obj=None, spine_obj=None, coordinate_method='first_coordinate', plot_intersecting_vertices=False, plot_closest_skeleton_coordinate=False, spine_objs=None, branch_shaft_mesh_face_idx=None, verbose=False, verbose_time=False, mesh=None, skeleton=None, **kwargs)[source]

Will compute a lot of the properties of spine objects that are equivalent to those computed in syu.add_valid_synapses_to_neuron_obj

The attributes include

"endpoints_dist", "upstream_dist", "downstream_dist", "coordinate", "closest_sk_coordinate", "closest_face_idx", "closest_branch_face_idx", "closest_face_dist", "closest_face_coordinate"

Pseudocode: 1) Make sure the branches have upstream and downstream set 2) Find the intersection of vertices between branch and shaft 3) Average those vertices to form the base coordinate 4) Find the closest mesh coordinate 5) Find the closest skeleton point
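Steps 2-5 above can be sketched as follows. This is a brute-force toy, not the package implementation (which would use a KD-tree at scale), and the function name is hypothetical:

```python
import numpy as np

def spine_base_and_closest_skeleton_point(spine_verts, shaft_verts, skeleton_nodes):
    # 2) vertices shared between the spine submesh and the shaft mesh
    shaft_set = {tuple(v) for v in shaft_verts}
    border = np.array([v for v in spine_verts if tuple(v) in shaft_set])
    # 3) the base coordinate is their centroid
    base = border.mean(axis=0)
    # 4/5) nearest skeleton node to the base coordinate
    dists = np.linalg.norm(skeleton_nodes - base, axis=1)
    return base, skeleton_nodes[np.argmin(dists)]

spine = np.array([[0., 0, 0], [1, 0, 0], [0, 5, 0]])
shaft = np.array([[0., 0, 0], [1, 0, 0], [9, 9, 9]])
skel = np.array([[0., 0, 0], [10, 10, 10]])
base, closest = spine_base_and_closest_skeleton_point(spine, shaft, skel)
print(base.tolist(), closest.tolist())  # [0.5, 0.0, 0.0] [0.0, 0.0, 0.0]
```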

neurd.spine_utils.calculate_spine_obj_mesh_skeleton_coordinates_for_branch(branch_obj)[source]
neurd.spine_utils.calculate_spines_on_branch(branch, clusters_threshold=None, smoothness_threshold=None, shaft_threshold=None, plot_spines_before_filter=False, spine_n_face_threshold=None, spine_sk_length_threshold=None, plot_spines_after_face_threshold=False, filter_by_bounding_box_longest_side_length=None, side_length_threshold=None, plot_spines_after_bbox_threshold=False, filter_out_border_spines=None, border_percentage_threshold=None, check_spine_border_perc=None, plot_spines_after_border_filter=False, skeleton_endpoint_nullification=None, skeleton_endpoint_nullification_distance=None, plot_spines_after_skeleton_endpt_nullification=False, soma_vertex_nullification=None, soma_verts=None, soma_kdtree=None, plot_spines_after_soma_nullification=False, filter_by_volume=None, calculate_spine_volume=None, filter_by_volume_threshold=None, plot_spines_after_volume_filter=False, print_flag=False, plot_segmentation=False, **kwargs)[source]

Purpose

Will calculate the spines on a branch object

Pseudocode

  1. spu.get_spine_meshes_unfiltered_from_mesh

  2. filters spine meshes by minimum number of faces (spine_n_face_threshold) and minimum skeletal length (spine_sk_length_threshold)

  3. if requested (filter_by_bounding_box_longest_side_length), filters out meshes whose oriented bounding box has a longest side greater than side_length_threshold, to prevent false positive spines created by long axon fragment merges

  4. if requested (filter_out_border_spines), filters out meshes that have:
    1. more than a certain percentage (border_percentage_threshold) of their submesh vertices overlapping with border vertices (vertices adjacent to open spaces in the mesh) of the parent mesh

    2. more than a certain percentage (check_spine_border_perc_global) of the parent mesh's border vertices overlapping with their submesh vertices

  5. if requested (skeleton_endpoint_nullification), filters away spines that are within a certain distance (skeleton_endpoint_nullification_distance) of the branch skeleton endpoints, to avoid a high false positive rate there

  6. if requested (soma_vertex_nullification), filters out spines that have vertices overlapping with vertices of the soma

  7. Creates a spine object for each of the spine meshes remaining after filtering:
    1. spu.calculate_spine_attributes

Global Parameters to Set

# – size filtering

spine_n_face_threshold: int

minimum number of mesh faces for a submesh to be in consideration for the spine classification

spine_sk_length_threshold: int

minimum length (unit) of the surface skeleton of the submesh for it to be in consideration for the spine classification

# – bounding box filtering

filter_by_bounding_box_longest_side_length:

side_length_threshold:

# – border filtering

filter_out_border_spines: bool

whether to perform spine filtering by considering how much the spine submesh vertices overlap with border vertices of the parent mesh

border_percentage_threshold: float

maximum percentage of a submesh's vertices that can overlap with border vertices (vertices adjacent to open spaces in the mesh) of the parent mesh while remaining in consideration for the spine label

check_spine_border_perc_global: float

maximum percentage of the parent mesh's border vertices that can overlap with the submesh vertices while the submesh remains in consideration for the spine label

# – skeleton filtering

skeleton_endpoint_nullification: bool

skeleton_endpoint_nullification_distance_global: float

minimum distance from a skeleton endpoint

# – soma filtering

soma_vertex_nullification: bool

Ex:
curr_limb = neuron_obj[2]
soma_verts = np.concatenate([neuron_obj[f"S{k}"].mesh.vertices for k in curr_limb.touching_somas()])

branch = neuron_obj[2][7]
sp_filt, sp_vol, spine_submesh_split_filtered_not = calculate_spines_on_branch(
    branch,
    shaft_threshold=500,
    smoothness_threshold=0.08,
    plot_spines_before_filter=False,
    plot_spines_after_face_threshold=False,
    plot_spines_after_bbox_threshold=True,
    plot_spines_after_border_filter=True,
    soma_verts=soma_verts,
    plot_spines_after_skeleton_endpt_nullification=True,
    plot_spines_after_soma_nullification=True,
    plot_spines_after_volume_filter=True,
    print_flag=True,
)
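The face-count, skeletal-length and bounding-box filters described above can be sketched as a simple predicate chain. The candidates here are dicts of precomputed stats standing in for meshes, and all threshold values are illustrative, not the package defaults:

```python
def filter_spine_candidates(meshes, n_face_min=90, sk_length_min=1000,
                            bbox_side_max=10000):
    keep = []
    for m in meshes:
        if m["n_faces"] < n_face_min:          # minimum face count
            continue
        if m["sk_length"] < sk_length_min:     # minimum skeletal length
            continue
        # reject long thin fragments (false positives from merge errors)
        if max(m["bbox_side_lengths"]) > bbox_side_max:
            continue
        keep.append(m)
    return keep

cands = [
    {"n_faces": 200, "sk_length": 1500, "bbox_side_lengths": (900, 700, 400)},
    {"n_faces": 40,  "sk_length": 1500, "bbox_side_lengths": (900, 700, 400)},
    {"n_faces": 500, "sk_length": 5000, "bbox_side_lengths": (20000, 300, 300)},
]
kept = filter_spine_candidates(cands)
print(len(kept))  # 1
```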

neurd.spine_utils.calculate_spines_on_neuron(neuron_obj, limb_branch_dict=None, query=None, plot_query=False, soma_vertex_nullification=None, calculate_spine_volume=None, print_flag=False, limb_branch_dict_exclude=None, **kwargs)[source]

Purpose: Will calculate spines over a neuron object

Pseudocode

  1. Calculates a limb_branch_dict over which to perform spine detection if not already given. Which branches are included are determined by:
    1. Generates width calculations if not already performed

    2. performs the search with the query to get the limb branch dict

  2. Iterates over all limbs in limb branch:
    1. Generates the soma touching vertices and creates a kdtree for them

    2. Iterates over all branches:
      1. spu.calculate_spines_on_branch → returns spines and spine volumes

Global Parameters to Set

query: str

the query to restrict branches searched

Ex:
spu.calculate_spines_on_neuron(
    recovered_neuron,
    plot_query=False,
    print_flag=True,
)

nviz.plot_spines(recovered_neuron)

neurd.spine_utils.calculate_upstream_downstream_dist_from_up_idx(spine_obj, up_idx)[source]
neurd.spine_utils.colors_from_spine_bouton_labels(spine_bouton_labels)[source]
neurd.spine_utils.compartment_idx_for_mesh_face_idx_of_spine(spine_obj)[source]

Purpose: Create a face map for that spine's mesh_face_idx from the head, neck, and no_label compartments

Ex:
spine_obj = output_spine_objs[5]
spu.plot_head_neck(spine_obj)
tu.split_mesh_into_face_groups(
    spine_obj.mesh,
    spu.compartment_idx_for_mesh_face_idx_of_spine(spine_obj),
    plot=True,
)

neurd.spine_utils.compartment_index_from_id(id)[source]
neurd.spine_utils.complete_spine_processing(neuron_obj, compute_initial_spines=True, compute_no_spine_width=True, compute_spine_objs=True, limb_branch_dict_exclude=None, verbose=False, plot=False)[source]

Will redo all of the spine processing

Pseudocode: 1) Redo the spines 2) Redo the spine widthing 3) Redo the spine calculation

Ex:
import time
spu.set_global_parameters_and_attributes_by_data_type(data_type)
spu.complete_spine_processing(
    neuron_obj,
    verbose=True,
)

neurd.spine_utils.connectivity = 'edges'

DON'T NEED THIS FUNCTION ANYMORE BECAUSE REPLACED BY TRIMESH_UTILS MESH_SEGMENTATION

def cgal_segmentation(written_file_location,
                      clusters=2,
                      smoothness=0.03,
                      return_sdf=True,
                      print_flag=False,
                      delete_temp_file=True):

    if written_file_location[-4:] == ".off":
        cgal_mesh_file = written_file_location[:-4]
    else:
        cgal_mesh_file = written_file_location

    if print_flag:
        print(f"Going to run cgal segmentation with: "
              f"File: {cgal_mesh_file} clusters:{clusters} smoothness:{smoothness}")

    csm.cgal_segmentation(cgal_mesh_file, clusters, smoothness)

    # read in the csv file
    cgal_output_file = Path(cgal_mesh_file + "-cgal_" + str(np.round(clusters, 2))
                            + "_" + "{:.2f}".format(smoothness) + ".csv")
    cgal_output_file_sdf = Path(cgal_mesh_file + "-cgal_" + str(np.round(clusters, 2))
                                + "_" + "{:.2f}".format(smoothness) + "_sdf.csv")

    cgal_data = np.genfromtxt(str(cgal_output_file.absolute()), delimiter='\n')
    cgal_sdf_data = np.genfromtxt(str(cgal_output_file_sdf.absolute()), delimiter='\n')

    if delete_temp_file:
        cgal_output_file.unlink()
        cgal_output_file_sdf.unlink()

    if return_sdf:
        return cgal_data, cgal_sdf_data
    else:
        return cgal_data

neurd.spine_utils.convert_number_of_columns_to_dtype(df, dtype='int')[source]
neurd.spine_utils.decode_head_neck_shaft_idx(array)[source]
neurd.spine_utils.df_from_spine_objs(spine_objs, attributes_to_skip=('mesh_face_idx', 'mesh', 'neck_face_idx', 'head_face_idx', 'sdf', 'skeleton'), attributes_to_add=('area', 'sdf_mean', 'n_faces', 'n_faces_head', 'n_faces_neck', 'mesh_center', 'sdf_mean', 'sdf_90_perc', 'sdf_70_perc', 'bbox_oriented_side_max', 'bbox_oriented_side_middle', 'bbox_oriented_side_min', 'n_heads', 'endpoint_dist_0', 'endpoint_dist_1'), columns_at_front=('area', 'sdf_mean', 'n_faces', 'n_faces_head', 'n_faces_neck', 'mesh_center', 'sdf_mean', 'sdf_90_perc', 'sdf_70_perc', 'bbox_oriented_side_max', 'bbox_oriented_side_middle', 'bbox_oriented_side_min', 'n_heads', 'endpoint_dist_0', 'endpoint_dist_1'), columns_at_back=None, attributes=None, add_volume_to_area_ratio=False, verbose=False, verbose_loop=False)[source]

Purpose: make a spine attribute dataframe from a list of spines

Pseudocode: 1)

neurd.spine_utils.examle_plot_spines_from_spine_df_query(spine_df, spine_objs)[source]
neurd.spine_utils.example_comparing_mesh_segmentation_vs_spine_head_segmentation(spine_mesh)[source]
neurd.spine_utils.example_plot_coordinates_from_spine_df_idx(idx, spine_objs)[source]
neurd.spine_utils.example_plot_small_volume_spines_from_spine_df(neuron_obj, spine_df)[source]
neurd.spine_utils.example_syn_df_spine_correlations(syn_df)[source]
neurd.spine_utils.example_trying_to_skeletonize_spine(spine_obj)[source]
neurd.spine_utils.export(spine_obj, attributes_to_skip=None, attributes_to_add=None, suppress_errors=True, attributes=None, default_value=None)[source]
neurd.spine_utils.face_idx_map_from_spine_objs(spine_objs, branch_obj=None, mesh=None, no_spine_index=-1, plot=False)[source]

Purpose: From a branch mesh and the spine objs on that branch mesh, create an (N, 2) array that maps every face to either the shaft or a spine index, along with its compartment
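A minimal sketch of the (N, 2) map: column 0 holds the spine index per face (shaft faces get no_spine_index) and column 1 a compartment code. The compartment codes (0=shaft, 1=neck, 2=head) and the dict-based spine stand-ins are illustrative assumptions, not the package's actual encoding:

```python
import numpy as np

def face_idx_map(n_branch_faces, spines, no_spine_index=-1):
    # Initialize every face as shaft with no spine index.
    fmap = np.full((n_branch_faces, 2), [no_spine_index, 0])
    for i, sp in enumerate(spines):
        # Overwrite rows belonging to each spine's neck and head faces.
        fmap[sp["neck_face_idx"]] = (i, 1)
        fmap[sp["head_face_idx"]] = (i, 2)
    return fmap

spines = [{"neck_face_idx": [1], "head_face_idx": [2, 3]}]
m = face_idx_map(5, spines)
print(m.tolist())  # [[-1, 0], [0, 1], [0, 2], [0, 2], [-1, 0]]
```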

neurd.spine_utils.false_positive_queries(table_type='pandas', invert=False, include_axons=True)[source]
neurd.spine_utils.feature_summed_over_compartments(df, feature, final_name=None, in_place=True, compartments=None, head_prefix=['head', 'no_head'], verbose=False)[source]
neurd.spine_utils.features_to_export_for_db()[source]
neurd.spine_utils.filter_and_scale_spine_syn_df(spine_df, syn_df)[source]

Purpose: To reduce the number of columns of the synapse table and to scale the area and volume values in both the synapse and spine tables so they are in um^2 and um^3

neurd.spine_utils.filter_away_fp_from_df(df, fp_queries=None, verbose=False, eta=1e-06)[source]
neurd.spine_utils.filter_for_high_confidence_df(df, apply_default_restrictions=True, verbose=False)[source]
neurd.spine_utils.filter_out_border_spines(mesh, spine_submeshes, border_percentage_threshold=None, check_spine_border_perc=None, verbose=False, return_idx=False)[source]

Purpose: Filter away spines by their percentage overlap with parent border vertices

neurd.spine_utils.filter_out_soma_touching_spines(spine_submeshes, soma_vertices=None, soma_kdtree=None, verbose=False, return_idx=False)[source]

Purpose: To filter out the spines that are touching the soma, because those are generally false positives picked up by the cgal segmentation

Pseudocode 1) Create a KDTree from the soma vertices 2) For each spine: a) Query the KDTree with the spine's vertices b) If any of the vertices have zero distance, nullify the spine
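The nullification step can be sketched as follows. This toy uses a brute-force pairwise distance where the package builds a KD-tree over the soma vertices; the function name and tolerance are illustrative:

```python
import numpy as np

def filter_soma_touching(spine_vertex_lists, soma_vertices, tol=1e-6):
    kept = []
    for verts in spine_vertex_lists:
        # Pairwise distances between this spine's vertices and soma vertices.
        d = np.linalg.norm(verts[:, None, :] - soma_vertices[None, :, :], axis=2)
        # Keep the spine only if no vertex effectively touches the soma.
        if d.min() > tol:
            kept.append(verts)
    return kept

soma = np.array([[0., 0, 0], [1, 0, 0]])
touching = np.array([[1., 0, 0], [2, 0, 0]])   # shares a vertex with the soma
clear = np.array([[5., 5, 5], [6, 5, 5]])      # well away from the soma
result = filter_soma_touching([touching, clear], soma)
print(len(result))  # 1
```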

neurd.spine_utils.filter_spine_df_samples(df, syn_max=4, sp_comp_max=3, restrictions=None)[source]
neurd.spine_utils.filter_spine_meshes(spine_meshes, spine_n_face_threshold=None, spine_sk_length_threshold=None, verbose=False)[source]
neurd.spine_utils.filter_spine_objs_by_size_bare_minimum(spine_objs, spine_n_face_threshold: int | None = None, spine_sk_length_threshold: float | None = None, filter_by_volume_threshold: float | None = None, bbox_oriented_side_max_min: int | None = None, sdf_mean_min: float | None = None, spine_volume_to_spine_area_min: float | None = None, verbose=False)[source]

Purpose

Filters the spine objects for minimum feature requirements.

Pseudocode

  1. Apply a list of simple attribute “greater than” queries to filter down the spine objects.

Global Parameters to Set

spine_n_face_threshold_bare_min:

minimum number of faces for a valid spine mesh

spine_sk_length_threshold_bare_min:

minimum surface skeletal length (units) of a valid spine mesh

filter_by_volume_threshold_bare_min:

minimum volume (units) of a valid spine mesh

bbox_oriented_side_max_min_bare_min:

minimum side length (units) of the oriented bounding box surrounding the spine mesh for valid spines

sdf_mean_min_bare_min:

minimum mean sdf value (computed in the shaft/spine clustering step performed by the cgal clustering algorithm) for valid spine meshes

spine_volume_to_spine_area_min_bare_min:

minimum ratio of mesh volume (units^3) to mesh area (units^2) for valid spine meshes

Notes

  1. Visualize the spines after this filtering

spine_objs (_type_): _description_
spine_n_face_threshold (int, optional): _description_, by default None
spine_sk_length_threshold (float, optional): _description_, by default None
filter_by_volume_threshold (float, optional): _description_, by default None
bbox_oriented_side_max_min (int, optional): _description_, by default None
sdf_mean_min (float, optional): _description_, by default None
spine_volume_to_spine_area_min (float, optional): _description_, by default None
verbose (bool, optional): _description_, by default False

returns (_type_): _description_
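The "greater than" attribute queries of the bare-minimum filter reduce to keeping a spine only when every supplied attribute meets its minimum. A minimal sketch where spines are dicts and the threshold values are illustrative, not the package defaults:

```python
def filter_by_bare_minimums(spines, **minimums):
    # Keep a spine only if every supplied attribute meets its minimum.
    return [s for s in spines
            if all(s[attr] >= value for attr, value in minimums.items())]

spines = [
    {"n_faces": 120, "volume": 0.05, "sdf_mean": 0.3},
    {"n_faces": 30,  "volume": 0.05, "sdf_mean": 0.3},  # fails the face minimum
]
kept = filter_by_bare_minimums(spines, n_faces=90, volume=0.01)
print(len(kept))  # 1
```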

neurd.spine_utils.filter_spine_objs_from_restrictions(spine_objs, restrictions, spine_df=None, verbose=False, return_idx=False, joiner='AND', plot=False, **kwargs)[source]

Purpose: Want to filter the spines with a list of queries

neurd.spine_utils.filter_spines_by_size(neuron_obj, spine_n_face_threshold=None, filter_by_volume_threshold=None, verbose=False, **kwargs)[source]
neurd.spine_utils.filter_spines_by_size_branch(branch_obj, spine_n_face_threshold=None, filter_by_volume_threshold=None, spine_sk_length_threshold=None, verbose=False, assign_back_to_obj=True, calculate_spines_length_on_whole_neuron=True)[source]

Purpose: To filter away any of the spines according to the size thresholds

neurd.spine_utils.get_spine_meshes_unfiltered(current_neuron, limb_idx, branch_idx, clusters=3, smoothness=0.1, cgal_folder=PosixPath('cgal_temp'), delete_temp_file=True, return_sdf=False, print_flag=False, shaft_threshold=300, mesh=None)[source]
neurd.spine_utils.get_spine_meshes_unfiltered_from_mesh(current_mesh, segment_name=None, clusters=None, smoothness=None, shaft_expansion_method='path_to_all_shaft_mesh', cgal_folder=PosixPath('cgal_temp'), delete_temp_file=True, return_sdf=False, return_mesh_idx=False, print_flag=False, shaft_threshold=None, ensure_mesh_conn_comp=False, plot_segmentation=False, plot_shaft=False, plot=False)[source]

Purpose

First initial spine mesh detection on a branch mesh

Pseudocode

Global Parameters to Set

neurd.spine_utils.head_exist(spine_obj)[source]
neurd.spine_utils.head_mesh(spine_obj)[source]
neurd.spine_utils.head_mesh_splits_face_idx_by_index(spine_obj, index)[source]
neurd.spine_utils.head_mesh_splits_from_index(spine_obj, index)[source]
neurd.spine_utils.head_neck_shaft_idx_from_branch(branch_obj, plot_face_idx=False, add_no_head_label=True, verbose=False, process_axon_branches=True)[source]

Purpose: To create an array mapping the mesh face idx of the branch to a label of head/neck/shaft

Pseudocode: 1) Create an array the size of branch mesh initialized to shaft 2) iterate through all of the spine objects of the branch

  1. set all head index to head

  2. set all neck index to neck

Ex:
spu.head_neck_shaft_idx_from_branch(
    branch_obj=neuron_obj_exc_syn[0][6],
    plot_face_idx=True,
    verbose=True,
)

neurd.spine_utils.id_from_compartment_index(compartment, index=0)[source]
neurd.spine_utils.id_from_idx(limb_idx, branch_idx, spine_idx)[source]
neurd.spine_utils.is_spine_obj(obj)[source]
neurd.spine_utils.limb_branch_dict_to_search_for_spines(neuron_obj, query=None, plot=False, verbose=False)[source]
neurd.spine_utils.mesh_attribute_from_compartment(spine_obj, attribute_func, compartment='head', index=0, shaft_default_value=None, **kwargs)[source]
neurd.spine_utils.mesh_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.mesh_from_name_or_idx(spine_obj, name=None, idx=None, largest_component=False)[source]
neurd.spine_utils.mesh_minus_spine_objs(spine_objs, mesh=None, branch_obj=None, return_idx=False)[source]

Purpose: To get the shaft mesh of a branch given a list of spines

neurd.spine_utils.n_faces(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.n_shaft_synapses(df)[source]
neurd.spine_utils.n_spine_compartment_synapses(df, compartment)[source]
neurd.spine_utils.n_spines(neuron_obj)[source]
neurd.spine_utils.n_spines_head(neuron_obj)[source]
neurd.spine_utils.n_spines_neck(neuron_obj)[source]
neurd.spine_utils.n_spines_no_head(neuron_obj)[source]
neurd.spine_utils.neck_mesh(spine_obj)[source]
neurd.spine_utils.no_head_mesh(spine_obj)[source]
neurd.spine_utils.number_of_columns(df)[source]
neurd.spine_utils.plot_connetion_type_head_vs_spine_size_by_conn_type_kde(df, x='head_volume', y='synapse_size_um', hue='connection_type', ax=None, figsize=(7, 10), title='Syn Size vs Spine Head Volume', title_fontsize=30, xlabel='Spine Head Volume ($\\mu m^3$)', ylabel='Synapse Cleft Volume ($\\mu m^3$)', axes_label_fontsize=30, axes_tick_fontsize=25, palette=None, kde_thresh=0.2, kde_levels=4, hue_options=None, legend_title=None, legend_fontsize=20)[source]
neurd.spine_utils.plot_feature_histograms_with_ct_overlay(df, features=None, max_value=None)[source]
neurd.spine_utils.plot_head_neck(spine_obj, neck_color='gold', head_color='red', no_head_color='black', verbose=True)[source]
neurd.spine_utils.plot_spine_attribute_vs_category_from_spine_df_samples(df, spine_attributes=['spine_n_head_syn', 'spine_n_spine_total_syn'], category='cell_type', legend_dict_map={'spine_n_head_syn': 'Spine Head', 'spine_n_spine_total_syn': 'All Spine'}, title_append='(MICrONS)', title='Cell Type vs. Average Number\n of Syn on Spine', x_label='Average # of Synapses', legend_title='Syn Type', source='MICrONS', ylabel='Postsyn Cell Type', title_fontsize=30, axes_label_fontsize=30, axes_tick_fontsize=25, legend_fontsize=20, set_legend_outside=False, legend_loc='best', **kwargs)[source]

Purpose: To take a sampling of the spines and plot the average number of synapses on each spine vs the cell type

neurd.spine_utils.plot_spine_coordinates_from_spine_df(mesh, spine_df, coordinate_types=('base_coordinate', 'mesh_center', 'head_bbox_max', 'head_bbox_min', 'neck_bbox_max', 'neck_bbox_min'))[source]

Ex:
scats = spu.plot_spine_coordinates_from_spine_df(
    mesh=neuron_obj.mesh,
    spine_df=spine_df,
)

neurd.spine_utils.plot_spine_embeddings_kde(df, embeddings=['umap_0', 'umap_1'], rotation=0, hue='e_i_predicted', excitatory_color=(1.0, 0.4980392156862745, 0.054901960784313725), inhibitory_color=(0.12156862745098039, 0.4666666666666667, 0.7058823529411765), thresh=0.2, levels=5, alpha=0.5, ax=None)[source]
neurd.spine_utils.plot_spine_face_idx_dict(mesh, spine_head_face_idx=None, spine_neck_face_idx=None, spine_no_head_face_idx=None, shaft_face_idx=None, head_color='red', neck_color='gold', no_head_color='black', shaft_color='lime', mesh_alpha=0.5, synapse_dict=None, verbose=True, show_at_end=True, mesh_to_plot=None, compartment_meshes_dict=None, scatters=[], scatters_colors=[], **kwargs)[source]
neurd.spine_utils.plot_spine_feature_hist(df, x, y, x_multiplier=1, y_multiplier=1, title_fontsize=30, axes_label_fontsize=30, axes_tick_fontsize=25, palette=None, hue=None, legend=False, title=None, xlabel=None, ylabel=None, ax=None, figsize=(7, 10), percentile_upper=99, verbose=False, print_correlation=False, show_at_end=False, plot_type='histplot', kde_thresh=0.2, kde_levels=4, min_x=0, min_y=0, text_box_x=0.95, text_box_y=0.05, text_box_horizontalalignment='right', text_box_verticalalignment='bottom', text_box_alpha=1, plot_correlation_box=True, correlation_type='corr_pearson', text_box_fontsize=20, xlim=None, ylim=None, legend_title=None, legend_fontsize=None)[source]
neurd.spine_utils.plot_spine_objs(spine_objs, branch_obj=None, mesh=None, plot_mesh_centers=True, spine_color='random', mesh_alpha=1)[source]
neurd.spine_utils.plot_spine_objs_and_syn_from_syn_df(spine_objs, syn_df, spine_idxs_to_plot=None)[source]
neurd.spine_utils.plot_spine_objs_on_branch(spines_obj, branch_obj, plot_spines_individually=True)[source]
neurd.spine_utils.plot_spine_synapse_coords_dict(mesh=None, synapse_dict=None, spine_head_synapse_coords=None, spine_neck_synapse_coords=None, spine_no_head_synapse_coords=None, shaft_synapse_coords=None, head_color='red', neck_color='gold', no_head_color='black', shaft_color='lime', verbose=False, scatter_size=0.08, **kwargs)[source]
neurd.spine_utils.plot_spines_head_neck(neuron_obj, head_color='red', neck_color='gold', no_head_color='black', bouton_color='orange', mesh_alpha=0.5, verbose=False, show_at_end=True, combine_meshes=True)[source]
neurd.spine_utils.plot_spines_objs_with_head_neck_and_coordinates(spine_objs, branch_obj=None, mesh=None, head_color='red', neck_color='aqua', no_head_color='black', base_coordinate_color='pink', center_coordinate_color='orange', mesh_alpha=0.8, verbose=False)[source]

Purpose: To plot, from a list of spines, all of the heads, necks, centroids, and coordinates of the spines

Pseudocode: For each spine: a) get the head/neck and put into mesh lists b) get the coordinates and mesh_center and put into scatters c) plot all with the branch mesh

neurd.spine_utils.print_filter_spine_thresholds()[source]
neurd.spine_utils.query_spine_objs(spine_objs, restrictions, spine_df=None, verbose=False, return_idx=False, joiner='AND', plot=False, **kwargs)

Purpose: Want to filter the spines with a list of queries

neurd.spine_utils.restrict_meshes_to_shaft_meshes_without_coordinates(meshes, close_hole_area_top_2_mean_max=None, mesh_volume_max=None, n_faces_min=None, verbose=False, plot=False, return_idx=True, return_all_shaft_if_none=True)[source]

Purpose

To restrict a list of meshes to those with a high probability of being a shaft mesh

Pseudocode

  1. restricts meshes to those greater than a certain volume (shaft_mesh_volume_max) or greater than a certain mean top 2 hole area (shaft_close_hole_area_top_2_mean_max) because both could be indicative of a shaft mesh

    functions used: trimesh_utils.close_hole_area_top_2_mean, trimesh_utils.mesh_volume

  2. restricts meshes to face threshold (shaft_mesh_n_faces_min)

  3. Returns the indices of the meshes remaining after filtering

Global Parameters to Set

shaft_close_hole_area_top_2_mean_max: float

minimum mesh area (nm^2) threshold for the mean of the top 2 holes (ideally the connecting ends of the tube-like shaft sections) for a mesh cluster (from the CGAL algorithm) to be potentially considered part of the neuron shaft. (The max suffix is in reference to spines)

shaft_mesh_volume_max: int

minimum mesh volume (nm^3) for a mesh cluster (from the CGAL algorithm) to be potentially considered part of the neuron shaft. (The max suffix is in reference to spines)

shaft_mesh_n_faces_min: int

minimum number of faces on a submesh cluster early in the pipeline for it to be potentially considered part of the neuron shaft.
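The restriction logic reduces to an OR over the two size signals (volume, hole area) plus a face-count minimum. A minimal sketch over precomputed mesh stats; all threshold values are illustrative assumptions, not the package defaults:

```python
def restrict_to_shaft_meshes(mesh_stats, volume_min=8e7,
                             hole_area_min=6e4, n_faces_min=100):
    # A mesh qualifies as shaft if it is large in volume OR has large
    # open tube ends, and in either case meets the face minimum.
    return [i for i, m in enumerate(mesh_stats)
            if (m["volume"] >= volume_min
                or m["hole_area_top2_mean"] >= hole_area_min)
            and m["n_faces"] >= n_faces_min]

stats = [
    {"volume": 9e7, "hole_area_top2_mean": 1e4, "n_faces": 500},  # big volume
    {"volume": 1e6, "hole_area_top2_mean": 1e5, "n_faces": 500},  # big tube ends
    {"volume": 1e6, "hole_area_top2_mean": 1e4, "n_faces": 500},  # spine-like
]
print(restrict_to_shaft_meshes(stats))  # [0, 1]
```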

neurd.spine_utils.scale_stats_df(df, scale_dict={'area': 1e-06, 'volume': 1e-09}, in_place=False)[source]

Purpose: Scale certain columns of the dataframe by divisors when the column name contains a keyword

neurd.spine_utils.sdf_median_mean_difference(sdf_values)[source]
neurd.spine_utils.seg_split_spine(df=None, segment_id=None, split_index=None, spine_id=None, return_dicts=True)[source]

Purpose: Get segment,split_index and spine_ids

Ex:
spu.seg_split_spine(spine_df_trans_umap)

spu.seg_split_spine(
    segment_id=864691135730167737,
    split_index=0,
    spine_id=[0, 11],
    df=None,
)

neurd.spine_utils.seg_split_spine_from_df(df=None, segment_id=None, split_index=None, spine_id=None, return_dicts=True)

Purpose: Get segment,split_index and spine_ids

Ex:
spu.seg_split_spine(spine_df_trans_umap)

spu.seg_split_spine(
    segment_id=864691135730167737,
    split_index=0,
    spine_id=[0, 11],
    df=None,
)

neurd.spine_utils.set_branch_head_neck_shaft_idx(branch_obj, plot_face_idx=False, add_no_head_label=True, verbose=False)[source]
neurd.spine_utils.set_branch_spines_obj(branch_obj, calculate_mesh_face_idx=True, verbose=False)[source]

Purpose: To set the spines obj attribute for a branch

Pseudocode: 1) store the spine mesh and the volume 2) calculate the neck face idx and sdf 3) optional: calculate the mesh face_idx

neurd.spine_utils.set_branch_synapses_head_neck_shaft(branch_obj, verbose=False)[source]

Purpose: To use the head_neck_shaft_idx of the branch objects to give the synapses of a branch the head_neck_shaft label

Pseudocode: If the branch has any synapses: 1) Build a KDTree of the branch mesh 2) find which faces are the closest to the coordinates of all the synapses 3) Assign the closest face and the head_neck_shaft label to the synapse objects

neurd.spine_utils.set_neuron_head_neck_shaft_idx(neuron_obj, add_no_head_label=True, verbose=False)[source]
neurd.spine_utils.set_neuron_spine_attribute(neuron_obj, func, verbose=False)[source]

Purpose: To set the spines obj for all branches in the neuron obj

neurd.spine_utils.set_neuron_spines_obj(neuron_obj, verbose=False)[source]
neurd.spine_utils.set_neuron_synapses_head_neck_shaft(neuron_obj, verbose=False)[source]
neurd.spine_utils.set_shaft_synapses_from_spine_query(df, query, verbose=False, in_place=False)[source]

Purpose: Given a synapse table with associated spine information, and filters for which spines to actually keep, will flip the current spine compartment label of the synapses on rejected spines

Pseudocode: 1) invert the filter to get a filter for all spines that should not be spines 2) Use the query to set the spine compartment as shaft
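A minimal pandas sketch of the invert-and-relabel step above, assuming hypothetical column names spine_volume and spine_compartment:

```python
import pandas as pd

def set_shaft_from_spine_query(df, query, in_place=False):
    # query selects the spines to KEEP as spines; everything else becomes shaft
    if not in_place:
        df = df.copy()
    keep_idx = df.query(query).index          # spines that stay spines
    flip = ~df.index.isin(keep_idx)           # 1) inverted filter
    df.loc[flip, "spine_compartment"] = "shaft"  # 2) relabel as shaft
    return df
```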

neurd.spine_utils.set_soma_synapses_spine_label(neuron_obj, soma_spine_label='no_label')[source]
neurd.spine_utils.shaft_synapses(df)[source]
neurd.spine_utils.skeletal_length_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.skeletal_length_from_spine(spine, plot=False)[source]
neurd.spine_utils.skeleton_from_spine(spine, plot=False)[source]
neurd.spine_utils.spine_and_syn_df_computed_from_neuron_obj(neuron_obj, limb_branch_dict=None, restrict_to_proofread_filtered_branches=True, decimated_mesh=None, proofread_faces=None, return_spine_compartmenets_face_idxs=True, add_neuron_compartments=True, compartment_faces_dict=None, verbose=False)[source]

Purpose: To get the spine df and the synapse df corresponding to the spines of a segment id and split index; these dataframes can then be written to datajoint

Pseudocode: 1) Download the neuron object 2) Get the typical limb branch to search 3) Get the limb branch dict for faces after proofreading and the mesh face idx for the branches 4) Generate the new spine objects and corresponding synapse df 5) Generate the spine statistics dataframe from spine objects 6) If the spine_face_idx is requested: - generate the spine face idxs from the limb_branch_face_dict and spine objs

neurd.spine_utils.spine_bouton_labels_to_plot()[source]
neurd.spine_utils.spine_compartment_mesh_functions(compartments=('spine', 'head', 'neck'), stats_functions=('width_ray_from_compartment', 'width_ray_80_perc_from_compartment', 'area_from_compartment', 'volume_from_compartment', 'skeletal_length_from_compartment', 'n_faces', 'bbox_min_x_nm_from_compartment', 'bbox_min_y_nm_from_compartment', 'bbox_min_z_nm_from_compartment', 'bbox_max_x_nm_from_compartment', 'bbox_max_y_nm_from_compartment', 'bbox_max_z_nm_from_compartment'), verbose=True)[source]

Purpose: To generate the size functions for all compartments: spine, head, neck

Pseudocode: 1) Iterate through all compartments 2) print out the formatted function

neurd.spine_utils.spine_compartment_mesh_functions_dict(spine_obj)[source]

Purpose: To compute the statistics for a spine obj

spu.spine_compartment_mesh_functions_dict(spine_obj)

Output: {'spine_width': 262.76843376430315, 'spine_width_80_perc': 433.6109345474601, 'spine_area': 4458314.872099194, 'spine_volume': 1442177510.8208666, 'spine_skeletal_length': 4518.759601841594, 'head_width': 385.4437561195757, 'head_width_80_perc': 452.5942823041014, 'head_area': 1848904.3472918314, 'head_volume': 198657361.45833808, 'head_skeletal_length': 1580.07824623045, 'neck_width': 110.51452991062567, 'neck_width_80_perc': 145.0012250999968, 'neck_area': 1378146.4731119499, 'neck_volume': 258763454.06805038, 'neck_skeletal_length': 2600.629272237006}

neurd.spine_utils.spine_compartment_synapses(df, compartment)[source]
neurd.spine_utils.spine_compartments_face_idx_for_neuron_mesh(limb_branch_spine_dict, limb_branch_face_dict, compartments=('head', 'neck', 'no_head'), compute_shaft_face_idx=True, mesh=None, n_faces=None, plot=False, mesh_alpha=1, add_n_faces=True)[source]

Purpose: To compile the face_idx of a compartment from spine objs knowing the branch_face_idx corresponding to the larger mesh

Example:

spine_compartment_masks = spu.spine_compartments_face_idx_for_neuron_mesh(limb_branch_face_dict=limb_branch_face_dict, limb_branch_spine_dict=limb_branch_spine_info_ret, mesh=decimated_mesh, plot=True, mesh_alpha=1)

neurd.spine_utils.spine_counts_from_spine_df(spine_df)[source]
neurd.spine_utils.spine_density(obj, um=True)[source]

n_spine / skeletal length (um)
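The density formula above can be sketched directly; the assumption that skeletal length is stored in nm (with um=True converting to microns) follows the nm units used elsewhere in these docs:

```python
def spine_density(n_spines, skeletal_length_nm, um=True):
    # spine density = number of spines per unit skeletal length
    # skeletal_length_nm is assumed to be in nanometers; um=True -> spines/micron
    length = skeletal_length_nm / 1000 if um else skeletal_length_nm
    return n_spines / length
```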

neurd.spine_utils.spine_density_over_limb_branch(neuron_obj, limb_branch_dict, synapse_type='synapses', multiplier=1, verbose=False, return_skeletal_length=False)[source]

Purpose: To calculate the spine density over a limb branch

Application: To be used for cell type (E/I) classification

Pseudocode: 1) Restrict the neuron branches to be processed for spine density 2) Calculate the skeletal length over the limb branch 3) Find the number of spines over limb branch 4) Compute postsynaptic density

Ex:

neurd.spine_utils.spine_df_for_db_from_spine_objs(spine_objs, verbose=False, verbose_loop=False)[source]
neurd.spine_utils.spine_features_to_print()[source]
neurd.spine_utils.spine_head_neck(mesh, cluster_options=(2, 3, 4), smoothness=None, plot_segmentation=False, head_ray_trace_min=None, head_face_min=None, default_head_face_idx=array([], dtype=int64), default_head_sdf=-1, stop_segmentation_after_first_success=False, no_head_coordinates=None, only_allow_one_connected_component_neck=None, plot_head_neck=False, return_meshes=False, return_sdf=True, return_width=True, verbose=False)[source]

Purpose

To determine, via mesh clustering, the head and neck face indices of a mesh representing a spine

Pseudocode

for clusters in [2,3]: 1) Run the segmentation algorithm (using the cluster and smoothness thresholds) 2) Filter the resulting submeshes for those above the face-count threshold and above the ray-trace-percentage width threshold 3) Store the remaining (non-head) meshes as the neck and store the sdf value as a weighted average 4a) If no head submesh is found, continue 4b) If at least one is found, concatenate the faces of all of the spine heads into one array (and take a weighted average of the sdf) 5) Break once a head is found 6) Optionally plot the spine neck and head
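One filtering pass of the head-selection loop (steps 2-4 above) can be sketched like this; representing each segmented submesh as a (face_idx, ray_trace_width) pair and the concrete default thresholds are simplifying assumptions:

```python
import numpy as np

def select_head_submeshes(submeshes, head_face_min=10, head_ray_trace_min=100.0):
    # submeshes: list of (face_idx_array, ray_trace_width) from one segmentation run
    head, neck = [], []
    for faces, width in submeshes:
        # a submesh must pass BOTH size thresholds to count as part of the head
        if len(faces) >= head_face_min and width >= head_ray_trace_min:
            head.append(faces)
        else:
            neck.append(faces)
    # concatenate all head faces into one array; everything else is neck
    head_idx = np.concatenate(head) if head else np.array([], dtype=np.int64)
    neck_idx = np.concatenate(neck) if neck else np.array([], dtype=np.int64)
    return head_idx, neck_idx
```

If head_idx comes back empty, the loop would continue with the next cluster option, mirroring step 4a.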

Global Parameters to Set

head_smoothness:

the cgal segmentation smoothness parameter for clustering a spine mesh into a head and neck

head_ray_trace_min: float

minimum width approximation (units) as a percentile of the ray trace values for a submesh to be in consideration as a spine head submesh

head_face_min: int

minimum number of faces for a submesh to be in consideration as a spine head submesh

only_allow_one_connected_component_neck: bool

when True, requires the neck submesh to be a single connected component (disconnected necks are rejected)

Can optionally return: 1) Meshes instead of face idx

Ex:

curr_idx = 38
sp_mesh = curr_branch.spines[curr_idx]
spu.spine_head_neck(sp_mesh, cluster_options=(2,3,4), smoothness=0.15, verbose=True, plot_segmentation=True, plot_head_neck=True)

neurd.spine_utils.spine_id_add_from_limb_branch_idx(limb_idx, branch_idx)[source]
neurd.spine_utils.spine_id_from_limb_branch_spine_idx(limb_idx, branch_idx, spine_idx=0)[source]

Purpose: Defines the method used for creating the spine id [limb,2][branches,4][spine,4]
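Reading the [limb,2][branches,4][spine,4] layout above as decimal-digit packing, the id construction and its inverse would look like the following sketch (the exact packing scheme is an assumption inferred from the docstring):

```python
def spine_id_from_limb_branch_spine_idx(limb_idx, branch_idx, spine_idx=0):
    # pack (limb, branch, spine) into one integer using 2/4/4 decimal digits
    assert limb_idx < 100 and branch_idx < 10_000 and spine_idx < 10_000
    return limb_idx * 10**8 + branch_idx * 10**4 + spine_idx

def limb_branch_spine_idx_from_spine_id(spine_id):
    # inverse of the packing above
    return spine_id // 10**8, (spine_id // 10**4) % 10**4, spine_id % 10**4
```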

neurd.spine_utils.spine_id_range_from_limb_branch_idx(limb_idx, branch_idx, verbose=False, return_dict=False, **kwargs)[source]

Purpose: to come up with a spine id range given a limb and branch

Pseudocode:

neurd.spine_utils.spine_int_label(spine_label)[source]
neurd.spine_utils.spine_labels(include_no_label=False)[source]
neurd.spine_utils.spine_length(spine_mesh, verbose=False, surface_skeleton_method='slower', plot=False)[source]
neurd.spine_utils.spine_mesh(spine_obj)[source]
neurd.spine_utils.spine_objs_and_synapse_df_computed_from_branch_idx(branch_obj=None, limb_obj=None, branch_idx=None, soma_verts_on_limb=None, soma_kdtree_on_limb=None, upstream_skeletal_length=None, plot_branch_mesh_before_spine_detection=False, plot_unfiltered_spines=False, plot_filtered_spines=False, branch_features=None, verbose=False, verbose_computation=False, **kwargs)[source]

Purpose: from a branch object, generates spine objects and the synapse df of synapses onto spines

neurd.spine_utils.spine_objs_and_synapse_df_computed_from_neuron_obj(neuron_obj, limb_branch_dict=None, limb_branch_dict_exclude=None, verbose=False, **kwargs)[source]
neurd.spine_utils.spine_objs_and_synapse_df_total_from_limb_branch_spine_dict(limb_branch_spine_dict, verbose=False)[source]

Purpose: To extract all spine objects and synapse dfs and concatenate from a limb_branch_spine_dict

neurd.spine_utils.spine_objs_bare_minimum_filt_with_attr_from_branch_obj(branch_obj=None, soma_verts_on_limb=None, soma_kdtree_on_limb=None, plot_unfiltered_spines=False, plot_filtered_spines=False, verbose=False, soma_center=None, upstream_skeletal_length=None, branch_features=None, mesh=None, skeleton=None, **kwargs)[source]

Purpose

Performs spine detection on a branch object or a branch mesh (optionally with a skeleton)

Purpose Detailed

Pseudocode

  1. Generates spine objects

  2. Calculate spine attributes for each spine object

  3. Apply bare minimum spine attribute filtering (filter_spine_objs_by_size_bare_minimum)

Analysis Roadmap

spu.spine_objs_bare_minimum_filt_with_attr_from_branch_obj

spu.spine_objs_with_border_sk_endpoint_and_soma_filter_from_scratch_on_branch_obj

spu.get_spine_meshes_unfiltered_from_mesh
spu.get_spine_meshes_unfiltered_from_mesh
spu.split_mesh_into_spines_shaft:
tu.mesh_segmentation

VISUALIZATION: check that the segmentation chops the mesh up enough PARAMETER CHANGE:

smoothness_threshold clusters_threshold

spu.restrict_meshes_to_shaft_meshes_without_coordinates

VISUALIZATION: look at the initial shaft separation PARAMETER CHANGE:

shaft_close_hole_area_top_2_mean_max shaft_mesh_volume_max shaft_mesh_n_faces_min

VISUALIZATION: individual spines prior to individual spine filtering

VISUALIZATION: spines after filtering (or each step after filtering) PARAMETER CHANGE:

# – border filtering: filter_out_border_spines, border_percentage_threshold
# – skeleton filtering: skeleton_endpoint_nullification, skeleton_endpoint_nullification_distance
# – soma filtering: soma_vertex_nullification

filter_spine_objs_by_size_bare_minimum

VISUALIZATION: spines before and after substitution PARAMETER CHANGE:

spine_n_face_threshold_bare_min spine_sk_length_threshold_bare_min filter_by_volume_threshold_bare_min bbox_oriented_side_max_min_bare_min sdf_mean_min_bare_min spine_volume_to_spine_area_min_bare_min

spu.calculate_spine_attributes_for_list
spu.calculate_spine_attributes:
spu.calculate_head_neck:

VISUALIZATION: spine head/neck subdivision PARAMETER CHANGE:

head_smoothness head_ray_trace_min head_face_min only_allow_one_connected_component_neck

neurd.spine_utils.spine_objs_near_endpoints(spine_objs, min_dist=4000, plot=False)[source]
neurd.spine_utils.spine_objs_with_border_sk_endpoint_and_soma_filter_from_scratch_on_branch_obj(branch_obj=None, plot_segmentation=False, ensure_mesh_conn_comp=True, plot_spines_before_filter=False, filter_out_border_spines=None, border_percentage_threshold=None, plot_spines_after_border_filter=False, skeleton_endpoint_nullification=False, skeleton_endpoint_nullification_distance=None, plot_spines_after_skeleton_endpt_nullification=False, soma_vertex_nullification=None, soma_verts=None, soma_kdtree=None, plot_spines_after_soma_nullification=None, plot=False, verbose=False, mesh=None, skeleton=None, **kwargs)[source]

Purpose

Performs spine detection on a branch object or a branch mesh (optionally with a skeleton) and then applies filtering before creating official Spine objects from each individually detected spine.

Purpose Detailed

Pseudocode

  1. Generate initial spine mesh detection

  2. Filter out border spines:
    if requested (filter_out_border_spines), filter out meshes that have:
    1. higher than a certain percentage (border_percentage_threshold) of the submesh vertices overlapping with border vertices (vertices adjacent to open spaces in the mesh) on the parent mesh

    2. higher than a certain percentage (check_spine_border_perc_global) of the parent mesh's border vertices overlapping with the submesh vertices

  3. skeleton endpoint filtering:

    if requested (skeleton_endpoint_nullification), filter away spines that are within a certain distance (skeleton_endpoint_nullification_distance) from the branch skeleton endpoints in order to avoid a high false positive class.

  4. soma vertex nullification:

    if requested (soma_vertex_nullification), filter out spines that have vertices overlapping with vertices of the soma

  5. Creates spine objects from all the remaining spine meshes (with head/neck segmentation for each)

Global Parameters to Set

# – border filtering filter_out_border_spines: bool

whether to perform spine filtering by considering how much the spine submesh vertices overlap with the border vertices of the shaft submesh

border_percentage_threshold: float

maximum percentage of a submesh's vertices that can overlap with border vertices (vertices adjacent to open spaces in the mesh) on the parent mesh and still be in consideration for a spine label

# – skeleton filtering skeleton_endpoint_nullification: bool

whether to filter away spines that are within a certain distance (skeleton_endpoint_nullification_distance) from the branch skeleton endpoints in order to avoid a high false positive class.

skeleton_endpoint_nullification_distance_global: float

minimum distance a spine mesh can be from the branch skeleton endpoints and not be filtered away when the skeleton_endpoint_nullification flag is set

# – soma filtering soma_vertex_nullification: bool

when true will filter out spines that have vertices overlapping with vertices of the soma
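The skeleton-endpoint nullification described above can be sketched with plain numpy; representing each spine mesh by its vertex array is a simplification of the actual mesh objects:

```python
import numpy as np

def filter_spines_near_skeleton_endpoints(spine_vertex_arrays, endpoints,
                                          nullification_distance=2000.0):
    # spine_vertex_arrays: list of (N_i, 3) vertex arrays, one per candidate spine
    # endpoints: (E, 3) branch skeleton endpoint coordinates (nm)
    kept = []
    for verts in spine_vertex_arrays:
        # minimum distance from any spine vertex to any skeleton endpoint
        d = np.linalg.norm(verts[:, None, :] - endpoints[None, :, :], axis=-1).min()
        if d > nullification_distance:   # keep only spines far from endpoints
            kept.append(verts)
    return kept
```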

neurd.spine_utils.spine_objs_with_border_sk_endpoint_and_soma_filter_from_scratch_on_mesh(mesh, skeleton=None, **kwargs)[source]
neurd.spine_utils.spine_stats_from_spine_df(df, grouping_features=('compartment',), features_no_head=('spine_area', 'spine_n_faces', 'spine_skeletal_length', 'spine_volume', 'spine_width_ray'), features_head_neck=('head_area', 'head_n_faces', 'head_skeletal_length', 'head_volume', 'head_width_ray', 'head_width_ray_80_perc', 'neck_area', 'neck_n_faces', 'neck_skeletal_length', 'neck_volume', 'neck_width_ray', 'neck_width_ray_80_perc'), features_n_syn_no_head=('spine_n_no_head_syn', 'spine_max_no_head_syn_size', 'spine_max_no_head_sp_vol'), features_n_syn_head_neck=('spine_n_head_syn', 'spine_n_neck_syn', 'spine_max_head_syn_size', 'spine_max_neck_syn_size', 'spine_max_head_sp_vol', 'spine_max_neck_sp_vol'), prefix='sp', return_dict=True)[source]

Purpose: to export spine statistics grouped by compartment

neurd.spine_utils.spine_str_label(-2)[source]
neurd.spine_utils.spine_synapse_stats_from_synapse_df(df, grouping_features=('compartment', 'spine_compartment'), grouping_features_backup=('spine_compartment',), features=('spine_area', 'spine_n_faces', 'spine_skeletal_length', 'spine_volume', 'spine_width_ray', 'syn_spine_area', 'syn_spine_volume', 'syn_spine_width_ray_80_perc', 'synapse_size'), prefix='syn', return_dict=True, synapse_types=('postsyn',))[source]

Purpose: To generate a dictionary/df parsing down the categories of a spine df into the averages and counts for different - neuron compartments - spine compartments

Pseudocode: - limit to only postsyn

  • For specified features

1) groupby spine compartment and compartment - reduce by average - reduce by count

neurd.spine_utils.spine_table_restriction_high_confidence(table_type='pandas', include_default_restrictions=True, return_query_str=False)[source]
neurd.spine_utils.spine_volume_density(obj, um=True)[source]

sum spine volume (um**3) / skeletal length (um)

neurd.spine_utils.spine_volume_to_spine_area(spine_obj)[source]
neurd.spine_utils.spines(neuron_obj)[source]
neurd.spine_utils.spines_head(neuron_obj)[source]
neurd.spine_utils.spines_head_meshes(obj)[source]
neurd.spine_utils.spines_neck(neuron_obj)[source]
neurd.spine_utils.spines_neck_meshes(obj)[source]
neurd.spine_utils.spines_no_head(neuron_obj)[source]
neurd.spine_utils.spines_no_head_meshes(obj)[source]
neurd.spine_utils.split_head_mesh(spine_obj, return_face_idx_map=True, plot=False)[source]

Purpose: to divide the mesh into connected components and optionally return the mask mapping each face to its component

neurd.spine_utils.split_mesh_into_spines_shaft(current_mesh, segment_name='', clusters=None, smoothness=None, cgal_folder=PosixPath('cgal_temp'), delete_temp_file=True, shaft_threshold=None, return_sdf=True, print_flag=False, plot_segmentation=False, plot_shaft=False, plot_shaft_buffer=0, **kwargs)[source]

Purpose

Determine the final classification of shaft submeshes and then return all connected components (floating islands) of the submesh after the shaft has been removed as individual spine meshes. In other words, generates the meshes for each individual spine (prior to individual spine filtering steps).

Pseudocode

1) Runs a segmentation on the mesh with the given clusters and smoothness a) generates a mapping of the clusters to the sdf values for each face 2) Splits the segmentation into separate meshes 3) spu.restrict_meshes_to_shaft_meshes_without_coordinates 4) Divides the meshes into spine meshes and shaft meshes (first pass) 5) Given the first pass of the shaft and spine classification, reclassifies some spine meshes as shaft meshes to ensure complete graph connectivity between all shaft submeshes using the following algorithm

  1. start with biggest shaft

  2. Find the shortest paths to all shaft parts

  3. add all the submeshes that aren’t already in the shaft category to the shaft category

  6. Creates a total spine submesh by removing all of the shaft submeshes from the original mesh, then divides this total spine submesh into connected components (separate islands) so that each resulting mesh represents a single spine

  7. Sorts the spine meshes from largest (number of faces) to smallest

  8. Returns the spine meshes and the sdf values (generated in the segmentation step) associated with them
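The shaft-connectivity reclassification (the sub-algorithm in step 5) can be sketched on a submesh adjacency graph using networkx; the edge-list representation of touching submeshes is an illustrative assumption:

```python
import networkx as nx

def connect_shaft_submeshes(adjacency_edges, shaft_idx, sizes):
    # adjacency_edges: pairs of touching submesh indices
    # shaft_idx: indices currently classified as shaft; sizes: index -> face count
    G = nx.Graph(adjacency_edges)
    shaft = set(shaft_idx)
    root = max(shaft, key=lambda i: sizes[i])   # 1) start with the biggest shaft piece
    for s in shaft - {root}:
        path = nx.shortest_path(G, root, s)     # 2) shortest path to each shaft part
        shaft.update(path)                      # 3) absorb intermediate submeshes
    return shaft
```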

Global Parameters to Set

smoothness_threshold:

The smoothness parameter for the cgal mesh segmentation algorithm used as an initial intermediate over-segmentation step in the spine detection. The smaller the smoothness value, the more underlying clusters are generally identified on the mesh and the more spines that can potentially be identified, at a higher risk of false positives (although the spine algorithm attempts to filter away false positives from the underlying mesh segmentation). Source: https://doc.cgal.org/4.6/Surface_mesh_segmentation/index.html

clusters_threshold:

The clusters parameter for the cgal mesh segmentation algorithm used as an initial intermediate over-segmentation step in the spine detection. The larger the number of clusters, the more spines can potentially be identified, at a higher risk of false positives (although the spine algorithm attempts to filter away false positives from the underlying mesh segmentation). Source: https://doc.cgal.org/4.6/Surface_mesh_segmentation/index.html

Parameters (all descriptions are unfilled docstring template placeholders; defaults match the signature):

current_mesh : _type_
segment_name : str, optional, by default ""
clusters : _type_, optional, by default None
smoothness : _type_, optional, by default None
cgal_folder : _type_, optional, by default Path("./cgal_temp")
delete_temp_file : bool, optional, by default True
shaft_threshold : _type_, optional, by default None
return_sdf : bool, optional, by default True
print_flag : bool, optional, by default False
plot_segmentation : bool, optional, by default False
plot_shaft : bool, optional, by default False
plot_shaft_buffer : int, optional, by default 0

returns: _type_

neurd.spine_utils.split_mesh_into_spines_shaft_old(current_mesh, segment_name='', clusters=None, smoothness=None, cgal_folder=PosixPath('cgal_temp'), delete_temp_file=True, shaft_threshold=None, return_sdf=True, print_flag=True, plot_segmentation=False, **kwargs)[source]
if not cgal_folder.exists():
    cgal_folder.mkdir(parents=True, exist_ok=False)

file_to_write = cgal_folder / Path(f"segment_{segment_name}.off")

# ----- 1/14 Addition: make sure the mesh has no degenerate faces -----
if filter_away_degenerate_faces:
    mesh_to_segment, faces_kept = tu.connected_nondegenerate_mesh(
        current_mesh,
        return_kept_faces_idx=True,
        return_removed_faces_idx=False)
    written_file_location = tu.write_neuron_off(mesh_to_segment, file_to_write)
else:
    written_file_location = tu.write_neuron_off(current_mesh, file_to_write)

cgal_data_pre_filt, cgal_sdf_data_pre_filt = cgal_segmentation(
    written_file_location,
    clusters,
    smoothness,
    return_sdf=True,
    delete_temp_file=delete_temp_file)

# map the segmentation results back onto the full (unfiltered) face set
if filter_away_degenerate_faces:
    cgal_data = np.ones(len(current_mesh.faces)) * (np.max(cgal_data_pre_filt) + 1)
    cgal_data[faces_kept] = cgal_data_pre_filt

    cgal_sdf_data = np.zeros(len(current_mesh.faces))
    cgal_sdf_data[faces_kept] = cgal_sdf_data_pre_filt
else:
    cgal_data = cgal_data_pre_filt
    cgal_sdf_data = cgal_sdf_data_pre_filt

if delete_temp_file:
    file_to_write.unlink()

neurd.spine_utils.surface_area_to_volume(current_mesh)[source]

Method that attempted to differentiate false from true spines. Conclusion: it did not work, even when dividing by the number of faces.

neurd.spine_utils.synapse_attribute_dict_from_synapse_df(df, attribute='synapse_coords', suffix=None, verbose=False)[source]

Purpose: To extract the coordinates of all the spine categories from a synapse_spine_df

Pseudocode: Iterate through all of the spine categories 1) Restrict the dataframe to just that category 2) Extract the coordinates 3) Put them in a dictionary
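A pandas sketch of this per-category grouping, assuming hypothetical column names spine_compartment and synapse_coords (a column holding 3-vectors):

```python
import numpy as np
import pandas as pd

def synapse_attribute_dict(df, attribute="synapse_coords",
                           category_column="spine_compartment"):
    # 1) restrict to each category, 2) extract the attribute, 3) collect in a dict
    return {
        cat: np.vstack(sub[attribute].to_numpy())
        for cat, sub in df.groupby(category_column)
    }
```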

neurd.spine_utils.synapse_coords_from_synapse_df(df, suffix=None, verbose=False, return_dict=True)[source]
neurd.spine_utils.synapse_df_with_spine_match(branch_obj, spine_objs, plot_face_idx_map=False, attributes_to_append=('volume', 'width_ray_80_perc', 'area'), attribute_rename_dict={'volume': 'spine_volume'}, spine_id_column='spine_id', spine_compartment_column='spine_compartment', verbose=False)[source]

Purpose: Create a dataframe that maps all synapses on the branch to the spine id and the size of that spine

Pseudocode: 1) Create an array the same size as the # of faces of the branch where the values are (spine_id, compartment #) 2) Use the synapse objects of the branch

neurd.spine_utils.synapse_ids_from_synapse_df(df, suffix=None, verbose=False, return_dict=True)[source]
neurd.spine_utils.synapse_spine_match_df_filtering(syn_df)[source]

Purpose: To map columns and filter away columns of synapse df for database write

neurd.spine_utils.update_spines_obj(neuron_obj)[source]

Will update all of the spine objects in a neuron

neurd.spine_utils.volume_from_compartment(spine_obj, compartment='head', index=0)[source]
neurd.spine_utils.volume_from_spine(spine, default_value=0)[source]
neurd.spine_utils.width_ray_80_perc_from_compartment(spine_obj, compartment='head', index=0, default_value_if_empty=0)[source]
neurd.spine_utils.width_ray_from_compartment(spine_obj, compartment='head', index=0, percentile=50, default_value_if_empty=0)[source]

neurd.synapse_utils module

How to adjust the features on synapses based on the closest skeleton point

# to calculate the closest skeleton point
syn_coord_sk = sk.closest_skeleton_coordinate(curr_branch.skeleton, face_coord)

#after have closest skeleton coordinate syu.calculate_endpoints_dist() syu.calculate_upstream_downstream_dist_from_down_idx(syn,down_idx)

class neurd.synapse_utils.Synapse(synapse_obj=None, **kwargs)[source]

Bases: object

Class that will hold information about the synapses that will be attributes of a neuron object

synapse_id
synapse volume
upstream_dist
Type:

skeletal distance from the closest upstream branch point

downstream_dist
Type:

skeletal distance from the closest downstream branch point or endpoint

coordinate
Type:

3D location in space:

closest_sk_coordinate
Type:

3D location in space of closest skeletal point on branch for which synapse is located

closest_face_coordinate
Type:

center coordinate of closest mesh face on branch for which synapse is located

closest_face_dist
Type:

distance from synapse coordinate to closest_face_coordinate

soma_distance
Type:

skeletal walk distance from synapse to soma

soma_distance_euclidean
Type:

straight path distance from synapse to soma center

head_neck_shaft
Type:

whether the synapse is located on a spine head, spine neck or neurite shaft (decoding of integer label is in spine_utils)

compartment
Type:

the compartment of the branch that the synapse is located on

limb_idx
Type:

the limb identifier that the synapse is located on

branch_idx
Type:

the branch identifier that the synapse is located on

Note
Type:

features like head_neck_shaft, compartment are not populated until later stages (cell typing, autoproofreading) when that information is available for the branches

__init__(synapse_obj=None, **kwargs)[source]
export()[source]
neurd.synapse_utils.add_error_synapses_to_neuron_obj(neuron_obj, synapse_dict=None, mesh_label_dict=None, validation=False, verbose=False, original_mesh=None)[source]

Pseudocode: 0) Get the coordinates and volumes of each error type For each error type: a) Create a list for storage For presyn/postsyn:

  1. Build the synapses from the information

  2. Store them in the list

neurd.synapse_utils.add_nm_to_synapse_df(df, scaling)[source]
neurd.synapse_utils.add_synapses_to_neuron_obj(neuron_obj, segment_id=None, validation=False, verbose=False, original_mesh=None, plot_valid_error_synapses=False, calculate_synapse_soma_distance=False, add_valid_synapses=True, add_error_synapses=True, limb_branch_dict_to_add_synapses=None, **kwargs)[source]

Purpose: To add the synapse information to the neuron object

Pseudocode: 0) Get the KDTree of the original mesh 1) Get

neurd.synapse_utils.add_valid_soma_synapses_to_neuron_obj(neuron_obj, verbose=False, validation=False, **kwargs)[source]
neurd.synapse_utils.add_valid_synapses_to_neuron_obj(neuron_obj, synapse_dict=None, mesh_label_dict=None, validation=False, verbose=False, debug_time=True, calualate_endpoints_dist=True, limb_branch_dict_to_add_synapses=None, original_mesh=None, add_only_soma_synapses=False, **kwargs)[source]

Purpose: To add valid synapses to a neuron object

neurd.synapse_utils.adjust_obj_with_face_offset(synapse_obj, face_offset, attributes_not_to_adjust=('closest_face_idx',), verbose=False)[source]

Purpose: To adjust the synapse properties that would be affected by a different face idx

Ex:

b_test = neuron_obj[0][18]
sp_obj = b_test.spines_obj[0]
sp_obj.export()

spu.adjust_spine_obj_with_face_offset(sp_obj, face_offset=face_offset, verbose=True).export()

neurd.synapse_utils.annotate_synapse_df(df, add_compartment_coarse_fine=False, decode_head_neck_shaft_idx=False)[source]
neurd.synapse_utils.append_synapses_to_plot(neuron_obj, total_synapses=False, total_synapses_size=0.3, limb_branch_dict='all', limb_branch_synapses=False, limb_branch_size=0.3, limb_branch_synapse_type='synapses', distance_errored_synapses=False, distance_errored_size=0.3, mesh_errored_synapses=False, mesh_errored_size=0.3, soma_synapses=False, soma_size=0.3, return_plottable=False, append_figure=True, show_at_end=False, verbose=False)[source]

Purpose: To add synapse scatter plots to an existing plot

neurd.synapse_utils.axon_ais_synapses(neuron_obj, max_ais_distance_from_soma=None, plot=False, verbose=False, return_synapses=False, **kwargs)[source]

Purpose: to get the postsyn synapses on the axon within a certain distance of the soma (so, ideally, on the ais)

Ex:

neurd.synapse_utils.axon_on_dendrite_synapses(neuron_obj, plot_limb_branch=False, verbose=False)[source]
neurd.synapse_utils.axon_synapses(neuron_obj)[source]
neurd.synapse_utils.calculate_endpoints_dist(branch_obj, syn)[source]

Purpose: Will calculate the endpoint distance for a synapse

neurd.synapse_utils.calculate_limb_synapse_soma_distances(limb_obj, calculate_endpoints_dist_if_empty=False, verbose=False)[source]

Purpose: To store the distances to the soma for all of the synapses

Computing the upstream soma distance for each branch 1) calculate the upstream distance 2) Calculate the upstream endpoint

For each synapse: 3) Soma distance = endpoint_dist

Ex: calculate_limb_synapse_soma_distances(limb_obj = neuron_obj[2], verbose = True)

neurd.synapse_utils.calculate_neuron_soma_distance(neuron_obj, verbose=False, store_soma_placeholder=True, store_error_placeholder=True)[source]

Purpose: To calculate all of the soma distances for all the valid synapses on limbs

Ex: calculate_neuron_soma_distance(neuron_obj, verbose=True)

neurd.synapse_utils.calculate_neuron_soma_distance_euclidean(neuron_obj, verbose=False, store_soma_placeholder=True, store_error_placeholder=True)[source]

Purpose: To calculate all of the soma distances for all the valid synapses on limbs

Ex: calculate_neuron_soma_distance_euclidean(neuron_obj, verbose=True)

neurd.synapse_utils.calculate_upstream_downstream_dist(limb_obj, branch_idx, syn)[source]
neurd.synapse_utils.calculate_upstream_downstream_dist_from_down_idx(syn, down_idx)[source]
neurd.synapse_utils.calculate_upstream_downstream_dist_from_up_idx(syn, up_idx)[source]
neurd.synapse_utils.combine_synapse_dict_into_presyn_postsyn_valid_error_dict(synapse_dict, verbose=False)[source]

Purpose: To concatenate all of the valid and error synapses into one synapse dict (application: which can eventually be plotted)

Pseudocode: 1) Iterate through presyn, postsyn 2) Iterate through error, valid: find all the keys containing that label in the name, concatenate their lists, and store the result
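The concatenation step can be sketched minimally, assuming the nested dict layout type -> label -> list of synapse ids described for these synapse dicts:

```python
def combine_into_presyn_postsyn_dict(synapse_dict):
    # flatten every valid/error id list under each synapse type into one list
    out = {}
    for syn_type, label_dict in synapse_dict.items():   # presyn, postsyn
        ids = []
        for label, id_list in label_dict.items():       # valid + error labels
            ids.extend(id_list)
        out[syn_type] = ids
    return out
```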

neurd.synapse_utils.compartment_groups_from_synapses_obj(synapses_obj, compartments=None, verbose=False)[source]
neurd.synapse_utils.complete_n_synapses_analysis(neuron_obj, include_axon_ais_syn=True)[source]
neurd.synapse_utils.downstream_dist_max_over_syn(branch_obj, verbose=False, **kwargs)[source]
neurd.synapse_utils.downstream_dist_min_over_syn(branch_obj, verbose=False, **kwargs)[source]
neurd.synapse_utils.endpoint_dist_extrema_over_syn(branch_obj, endpoint_type='downstream', extrema_type='max', default_dist=100000000, verbose=False, **kwargs)[source]
neurd.synapse_utils.error_query()[source]
neurd.synapse_utils.error_synapses_to_scatter_info(neuron_obj, error_synapses_names=None, pre_color='black', post_color='orange', color_mapping={'distance_errored_synapses_post': 'pink', 'distance_errored_synapses_pre': 'tan', 'mesh_errored_synapses_post': 'lime', 'mesh_errored_synapses_pre': 'brown'}, scatter_size=0.3)[source]

To turn the error synapses into plottable scatters

Ex: syu.error_synapses_to_scatter_info(neuron_obj)

neurd.synapse_utils.export(synapse_obj)[source]
neurd.synapse_utils.exports_to_synapses(exports)[source]
neurd.synapse_utils.fetch_synapse_dict_by_mesh_labels(segment_id, mesh, synapse_dict=None, original_mesh=None, original_mesh_kd=None, validation=False, verbose=False, original_mesh_method=True, mapping_threshold=500, plot_synapses=False, plot_synapses_type=None, **kwargs)[source]

Purpose: To return a synapse dictionary mapping

type (presyn/postsyn) -> mesh_label -> list of synapse ids

for a segment id based on which original mesh face the synapses map to

Pseudocode: 1) Get the synapses for the segment id 2) Iterate through presyn and postsyn:

  a. Find the errors due to distance

  b. Find the errors due to mesh cancellation

  c. Find the valid mesh (by set difference) and store all error types in the output dict

3) Plot the synapses if requested

Example:

mesh_label_dict = syu.fetch_synapse_dict_by_mesh_labels(segment_id=o_neuron.segment_id, mesh=nru.neuron_mesh_from_branches(o_neuron), original_mesh=du.fetch_segment_id_mesh(segment_id), validation=True, plot_synapses=True, verbose=True)

neurd.synapse_utils.get_errored_synapses_names(neuron_obj)[source]
neurd.synapse_utils.get_synapse_types()[source]
neurd.synapse_utils.get_synapses_compartment(neuron_obj, compartments, verbose=False)[source]

Purpose: will get synapses of a certain compartment if synapses are labeled

Ex: synapses = syu.get_synapses_compartment(o_neuron,

compartments=["apical","oblique"], verbose = True)

neurd.synapse_utils.limb_branch_synapses_to_scatter_info(neuron_obj, limb_branch_dict='all', pre_color='yellow', post_color='blue', scatter_size=0.3, synapse_type='synapses')[source]

Purpose: To make the synapses on the limb and branches plottable

Ex: limb_branch_synapses_to_scatter_info(neuron_obj)

neurd.synapse_utils.limb_branch_with_synapses(neuron_obj, min_n_synapses=1, synapse_type='synapses')[source]
neurd.synapse_utils.n_axon_ais_synapses(neuron_obj, plot=False, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_presyns_on_dendrite(neuron_obj)[source]
neurd.synapse_utils.n_synapses(neuron_obj)[source]
neurd.synapse_utils.n_synapses_all_compartment_spine_type(neuron_obj, compartment_labels=None, spine_labels=None, syn_types=None, verbose=False, return_synapse_objs=False, add_n_syn_in_keys=True, **kwargs)[source]

Purpose: To get all combinations of compartments, spine labels and synapse types that should be computed

neurd.synapse_utils.n_synapses_all_compartments(neuron_obj, compartment_labels=None, verbose=False, return_synapse_objs=False, **kwargs)[source]

Purpose: To get all combinations of compartments, spine labels and synapse types that should be computed

neurd.synapse_utils.n_synapses_all_spine_labels(neuron_obj, compartment_labels=None, verbose=False, return_synapse_objs=False, **kwargs)[source]

Purpose: To get all combinations of compartments, spine labels and synapse types that should be computed

neurd.synapse_utils.n_synapses_all_valid_error(neuron_obj)[source]
neurd.synapse_utils.n_synapses_analysis_axon_dendrite(neuron_obj, verbose=True, include_axon_ais_syn=True, include_soma_syn=True)[source]

Purpose: calculating synapses

neurd.synapse_utils.n_synapses_by_compartment_spine_type(neuron_obj, compartment_label=None, spine_label=None, syn_type=None, plot_synapses=False, synapse_size=0.2, verbose=False, return_title=False, add_n_syn_to_title=True, **kwargs)[source]

Return the number of synapses of a certain type

neurd.synapse_utils.n_synapses_distance_errored(neuron_obj)[source]
neurd.synapse_utils.n_synapses_downstream(limb_obj, branch_idx, verbose=False)[source]
neurd.synapse_utils.n_synapses_error(neuron_obj, error_synapses_names=None, presyns_on_dendrite_as_errors=True, verbose=False)[source]
neurd.synapse_utils.n_synapses_error_post(neuron_obj)[source]
neurd.synapse_utils.n_synapses_error_pre(neuron_obj)[source]
neurd.synapse_utils.n_synapses_head(neuron_obj)[source]
neurd.synapse_utils.n_synapses_mesh_errored(neuron_obj)[source]
neurd.synapse_utils.n_synapses_neck(neuron_obj)[source]
neurd.synapse_utils.n_synapses_no_head(neuron_obj)[source]
neurd.synapse_utils.n_synapses_offset_distance_of_endpoint_downstream(branch_obj, distance, synapse_type='synapses', verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_offset_distance_of_endpoint_upstream(branch_obj, distance, synapse_type='synapses', verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_offset_distance_of_endpoint_upstream_downstream(branch_obj, direction, distance, synapse_type='synapses', verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_over_limb_branch_dict(neuron_obj, limb_branch_dict, synapse_type='synapses')[source]

To gather all of the synapses over a limb branch dict restriction

Ex: syu.synapses_over_limb_branch_dict(neuron_obj,

limb_branch_dict=dict(L2=[5,6,7]), synapse_type = "synapses")

neurd.synapse_utils.n_synapses_post(neuron_obj)[source]
neurd.synapse_utils.n_synapses_post_downstream(limb_obj, branch_idx, verbose=False)[source]
neurd.synapse_utils.n_synapses_post_head(neuron_obj)[source]
neurd.synapse_utils.n_synapses_post_neck(neuron_obj)[source]
neurd.synapse_utils.n_synapses_post_no_head(neuron_obj)[source]
neurd.synapse_utils.n_synapses_post_offset_distance_of_endpoint_downstream(branch_obj, distance=10000, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_post_offset_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_post_over_limb_branch_dict(neuron_obj, limb_branch_dict)[source]

To gather all of the synapses over a limb branch dict restriction

Ex: syu.synapses_over_limb_branch_dict(neuron_obj,

limb_branch_dict=dict(L2=[5,6,7]), synapse_type = "synapses")

neurd.synapse_utils.n_synapses_post_shaft(neuron_obj)[source]
neurd.synapse_utils.n_synapses_post_spine(neuron_obj)[source]
neurd.synapse_utils.n_synapses_post_within_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_post_within_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_pre(neuron_obj)[source]
neurd.synapse_utils.n_synapses_pre_downstream(limb_obj, branch_idx, verbose=False)[source]
neurd.synapse_utils.n_synapses_pre_offset_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_pre_offset_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_pre_over_limb_branch_dict(neuron_obj, limb_branch_dict)[source]

To gather all of the synapses over a limb branch dict restriction

Ex: syu.synapses_over_limb_branch_dict(neuron_obj,

limb_branch_dict=dict(L2=[5,6,7]), synapse_type = "synapses")

neurd.synapse_utils.n_synapses_pre_shaft(neuron_obj)[source]
neurd.synapse_utils.n_synapses_pre_within_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_pre_within_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_shaft(neuron_obj)[source]
neurd.synapse_utils.n_synapses_somas_postsyn(neuron_obj, **kwargs)[source]
neurd.synapse_utils.n_synapses_spine(neuron_obj)[source]
neurd.synapse_utils.n_synapses_spine_offset_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_spine_offset_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_spine_within_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_valid(neuron_obj, include_soma=True)[source]
neurd.synapse_utils.n_synapses_valid_post(neuron_obj)[source]
neurd.synapse_utils.n_synapses_valid_pre(neuron_obj)[source]
neurd.synapse_utils.n_synapses_within_distance_of_endpoint_downstream(branch_obj, distance, synapse_type='synapses', verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_within_distance_of_endpoint_upstream(branch_obj, distance, synapse_type='synapses', verbose=False, **kwargs)[source]
neurd.synapse_utils.n_synapses_within_distance_of_endpoint_upstream_downstream(branch_obj, direction, distance, synapse_type='synapses', verbose=False, **kwargs)[source]
neurd.synapse_utils.plot_head_neck_shaft_synapses(neuron_obj, plot_with_spines=True, head_color='yellow', neck_color='blue', shaft_color='black', no_head_color='purple', bouton_color='pink', non_bouton_color='brown', synapse_size=0.15, verbose=False, **kwargs)[source]

Purpose: To plot all of the head, neck, and shaft synapses of a neuron

Ex: syu.plot_head_neck_shaft_synapses(neuron_obj_exc_syn_sp,

synapse_size=0.08)

neurd.synapse_utils.plot_synapses(neuron_obj, synapse_type='synapses', total_synapses=False, limb_branch_size=0.3, distance_errored_size=0.3, mesh_errored_size=0.3, soma_size=0.3, **kwargs)[source]

The synapse types

neurd.synapse_utils.plot_synapses_compartment(neuron_obj, compartments, synapse_size=1, verbose=False)[source]

Purpose: To plot all synapses of a certain compartment

Ex: syu.plot_synapses_compartment(o_neuron,

compartments=["apical","oblique"], verbose = True)

neurd.synapse_utils.plot_synapses_error_from_neuron_obj(neuron_obj, synapse_size=0.3, verbose=True)[source]
neurd.synapse_utils.plot_synapses_objs(neuron_obj, synapses, plot_with_spines=False, synapse_color='red', synapse_size=0.15, **kwargs)[source]

Purpose: To plot a certain group of synapses on top of the neuron object

Ex: syu.plot_synapses_objs(neuron_obj_exc_syn_sp,

synapses = syu.synapses_shaft(neuron_obj_exc_syn_sp),

synapse_color="yellow"

)

neurd.synapse_utils.plot_synapses_on_limb(neuron_obj, limb_idx, limb_branch_synapse_type='synapses', **kwargs)[source]
neurd.synapse_utils.plot_synapses_presyn_dendrite_errors(neuron_obj, verbose=False)[source]
neurd.synapse_utils.plot_synapses_query(neuron_obj, query, synapse_size=1, verbose=False)[source]

Purpose: To plot the synapses from a query

Ex: syu.plot_synapses_query(neuron_obj,

query = "syn_type=='presyn'")

neurd.synapse_utils.plot_synapses_soma(neuron_obj, synapse_size=3, verbose=False)[source]

Ex: syu.plot_synapses_soma(o_neuron)

neurd.synapse_utils.plot_synapses_valid_from_neuron_obj(neuron_obj, synapse_size=0.3, verbose=True)[source]
neurd.synapse_utils.plot_valid_error_synpases(neuron_obj=None, synapse_dict=None, mesh=None, original_mesh=None, synapses_type_to_plot=None, synapses_type_to_not_plot=None, keyword_to_plot=None, verbose=False, TP_color='yellow', TN_color='aqua', FP_color='black', FN_color='orange', synapse_scatter_size=0.3, plot_only_axon_skeleton=True, error_mesh_color='red', valid_mesh_color='green', valid_skeleton_color='black', mesh_alpha=0.3, print_color_key=True, mapping=None, **kwargs)[source]

Purpose: Will plot the synapse centers against a proofread neuron

Ex: output_syn_dict = syu.synapse_dict_mesh_labels_to_synapse_coordinate_dict(synapse_mesh_labels_dict=mesh_label_dict,

synapse_dict=synapse_dict)

syu.plot_valid_error_synpases(

synapse_dict=output_syn_dict, neuron_obj = None,

mesh = mesh, original_mesh = original_mesh, keyword_to_plot = “error”,

synapse_scatter_size=2,

)

neurd.synapse_utils.presyn_on_dendrite_synapses(neuron_obj, split_index=0, nucleus_id=0, return_dj_keys=False, verbose=True, **kwargs)[source]
neurd.synapse_utils.presyn_on_dendrite_synapses_after_axon_on_dendrite_filter_away(neuron_obj, axon_on_dendrite_limb_branch_dict=None, plot=False, **kwargs)[source]
neurd.synapse_utils.presyn_on_dendrite_synapses_non_axon_like(neuron_obj, limb_branch_dict=None, plot=False, **kwargs)[source]

Purpose: To get the presyns on dendrites where the dendrites are restricted to those that aren’t axon-like

neurd.synapse_utils.presyn_postsyn_groups_from_synapses_obj(synapses_obj, verbose=False)[source]
neurd.synapse_utils.presyns_on_dendrite(neuron_obj, verbose=False, return_df=False, return_column='syn_id')[source]

Purpose: Be able to find the synapses that are presyn on dendrite

Pseudocode: 1) query the synapses_df for "(label=='limb_branch') and (compartment=='dendrite') and (syn_type=='presyn')"

neurd.synapse_utils.query_synapses(neuron_obj, query, return_df=False, return_index=False, return_synapses=False, return_column='syn_id', local_dict={}, limb_branch_dict=None, verbose=False, plot=False, synapse_size=1, **kwargs)[source]

Purpose: To query the synapse dataframe and return the desired output (dataframe, indices, or synapse objects)

Pseudocode: 1) Create synapse dataframe 2) Use the query function to reduce the dataframe 3) Return the desired output

Note: this function can also be passed a list of synapses directly.

EX: syn_type = "presyn" head_neck_shaft_type = "shaft" syu.query_synapses(neuron_obj_exc_syn_sp[0][0].synapses,

query=(f"(syn_type == '{syn_type}') and "

f"(head_neck_shaft == {spu.head_neck_shaft_dict[head_neck_shaft_type]})"),

return_synapses=True

)
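A minimal sketch of the dataframe-query pattern described above, using pandas on a mock synapse table; the column names mirror those used in the docs (syn_id, syn_type, head_neck_shaft), but the rows and the shaft integer code are illustrative assumptions:

```python
import pandas as pd

# Build a tiny stand-in synapse dataframe, reduce it with
# DataFrame.query, and return one column -- the same three steps
# the pseudocode above describes. Values are made up.
synapse_df = pd.DataFrame({
    "syn_id":          [1, 2, 3, 4],
    "syn_type":        ["presyn", "postsyn", "presyn", "postsyn"],
    "head_neck_shaft": [0, 1, 2, 2],  # assumed integer spine codes
})

syn_type = "presyn"
shaft_code = 2  # hypothetical stand-in for spu.head_neck_shaft_dict["shaft"]
query = f"(syn_type == '{syn_type}') and (head_neck_shaft == {shaft_code})"
result_ids = synapse_df.query(query)["syn_id"].to_list()
```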

neurd.synapse_utils.restrict_synapses_df_by_limb_branch_dict(df, limb_branch_dict)[source]
neurd.synapse_utils.set_branch_synapses_attribute(branch_obj, synapse_attribute, branch_func, catch_errors=False, default_value=None, verbose=False)[source]

Purpose: Will set the synapse attributes on a branch object

Ex: def new_func(branch_obj):

raise Exception("")

syu.set_branch_synapses_attribute(neuron_obj[0][0],

synapse_attribute="compartment", #branch_func = apu.compartment_label_from_branch_obj,

branch_func = new_func, catch_errors=True, default_value="exception_label",

verbose = True)

neurd.synapse_utils.set_branch_synapses_compartment(branch_obj, catch_errors=False, default_value=None, verbose=False)[source]
neurd.synapse_utils.set_limb_branch_idx_to_synapses(neuron_obj)[source]

Purpose: Will add limb and branch indexes for all synapses in a Neuron object

neurd.synapse_utils.set_neuron_synapses_compartment(neuron_obj)[source]

Purpose: Will set the compartment labels of all synapses based on the compartment label of the branch

neurd.synapse_utils.set_presyns_on_dendrite_as_errors(value)[source]
neurd.synapse_utils.set_presyns_on_dendrite_as_errors_default()[source]
neurd.synapse_utils.soma_face_offset_value(soma_name)[source]
neurd.synapse_utils.soma_synapses(neuron_obj)
neurd.synapse_utils.soma_synapses_to_scatter_info(neuron_obj, pre_color='aqua', post_color='purple', scatter_size=0.3)[source]

Purpose: To turn the soma synapses into plottable scatters

Ex: syu.soma_synapses_to_scatter_info(neuron_obj)

neurd.synapse_utils.spine_bouton_groups_from_synapses_obj(synapses_obj, spine_bouton_labels=None, verbose=False)[source]
neurd.synapse_utils.synapse_compartment_spine_type_title(compartment_label=None, spine_label=None, syn_type=None, add_n_syn_to_title=False, verbose=False)[source]
neurd.synapse_utils.synapse_coordinates_from_synapse_df(df)[source]
neurd.synapse_utils.synapse_density(neuron_obj, synapses=None, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_head(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_neck(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_no_head(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_offset_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.synapse_density_over_limb_branch(neuron_obj, limb_branch_dict, synapse_type='synapses', multiplier=1, verbose=False, return_skeletal_length=False, density_type='skeletal_length')[source]

Purpose: To calculate the synapse density over a limb branch dict

Application: To be used for cell type (E/I) classification

Pseudocode: 1) Restrict the neuron branches to be processed for postsynaptic density 2) Calculate the skeletal length over the limb branch 3) Find the number of postsyns over limb branch 4) Compute postsynaptic density

Ex: syu.synapse_density_over_limb_branch(neuron_obj = neuron_obj_exc_syn_sp,

limb_branch_dict=syn_dens_limb_branch,

#neuron_obj = neuron_obj_inh_syn_sp, verbose = True, synapse_type = "synapses_post", #synapse_type = "synapses_head", #synapse_type = "synapses_shaft", multiplier = 1000)
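The density computation reduces to a count over skeletal length, optionally scaled; a hedged arithmetic sketch with made-up numbers (the multiplier of 1000 here converts a per-nm density to per-micron):

```python
# Illustrative density arithmetic: synapses per unit skeletal length,
# scaled by an optional multiplier. All numbers are made up.
n_synapses_post = 120
skeletal_length_nm = 300_000   # restricted limb-branch skeleton, ~300 um
multiplier = 1000              # report density per micron instead of per nm

synapse_density = n_synapses_post / skeletal_length_nm * multiplier
```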

neurd.synapse_utils.synapse_density_post(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_post_offset_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.synapse_density_post_offset_distance_of_endpoint_upstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.synapse_density_post_within_distance_of_endpoint_downstream(branch_obj, distance, verbose=False, **kwargs)[source]
neurd.synapse_utils.synapse_density_pre(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_shaft(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_density_spine(neuron_obj, density_type='skeletal_length')[source]
neurd.synapse_utils.synapse_df(neuron_obj, synapse_types_to_process=None, verbose=False, add_compartment_coarse_fine=False, decode_head_neck_shaft_idx=False, **kwargs)

Purpose: To create a dataframe with all of the features of the synapses so the synapses can be queried

neurd.synapse_utils.synapse_df_abridged(neuron_obj)[source]
neurd.synapse_utils.synapse_df_from_csv(synapse_filepath, segment_id=None, coordinates_nm=True, scaling=None, verbose=True, **kwargs)[source]

Purpose: to read in a csv file

neurd.synapse_utils.synapse_df_from_synapse_dict(synapse_dict, segment_id=None)[source]

Purpose: convert synapse dict into a dataframe

neurd.synapse_utils.synapse_dict_from_synapse_csv(synapse_filepath, segment_id=None, scaling=None, verbose=True, coordinates_nm=True, **kwargs)[source]

Purpose: to ingest the synapse information for a segment id in some manner

Example implementation: ingesting synapse information from a csv

Pseudocode: 1) Read in the csv 2) Filter to the segment id 3) Create the prepost dictionary to be filled, iterating through pre and post (call it prepost):

  1. Filter for the current prepost

  2. Get the synapse id, x, y, z and synapse size

  3. Stack the xyz coordinates and scale them if needed

  4. Store all data in the dictionary for that prepost
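The steps above can be sketched with pandas on an in-memory csv. The column names and the voxel-to-nm scaling vector are assumptions for illustration, not the library's required schema:

```python
import io

import numpy as np
import pandas as pd

# Made-up csv mimicking the described columns; segment 999 exists only
# to show the filtering step.
csv_text = """segment_id,synapse_id,prepost,synapse_x,synapse_y,synapse_z,synapse_size
12345,10,presyn,1,2,3,40
12345,11,postsyn,4,5,6,50
999,12,presyn,7,8,9,60
"""

df = pd.read_csv(io.StringIO(csv_text))           # 1) read in the csv
segment_id = 12345
df = df[df["segment_id"] == segment_id]           # 2) filter to the segment id
scaling = np.array([4, 4, 40])                    # assumed voxel-to-nm scaling

synapse_dict = {}
for prepost in ("presyn", "postsyn"):             # 3) iterate pre/post
    sub = df[df["prepost"] == prepost]
    coords = sub[["synapse_x", "synapse_y", "synapse_z"]].to_numpy()
    synapse_dict[prepost] = dict(
        synapse_ids=sub["synapse_id"].to_numpy(),
        synapse_coordinates=coords * scaling,     # stack xyz and scale
        synapse_sizes=sub["synapse_size"].to_numpy(),
    )
```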

neurd.synapse_utils.synapse_dict_from_synapse_df(df, scaling=None, verbose=True, coordinates_nm=True, syn_types=['presyn', 'postsyn'], **kwargs)[source]
neurd.synapse_utils.synapse_dict_mesh_labels_to_synapse_attribute_dict(synapse_mesh_labels_dict, synapse_dict, attribute, return_presyn_postsyn_valid_error_dict=False)[source]
neurd.synapse_utils.synapse_dict_mesh_labels_to_synapse_coordinate_dict(synapse_mesh_labels_dict, synapse_dict, return_presyn_postsyn_valid_error_dict=True)[source]
neurd.synapse_utils.synapse_dict_mesh_labels_to_synapse_volume_dict(synapse_mesh_labels_dict, synapse_dict, return_presyn_postsyn_valid_error_dict=True)[source]
neurd.synapse_utils.synapse_endpoint_dist_downstream(limb_obj, branch_idx, synapses=None, synapse_type='synapses', verbose=False)[source]
neurd.synapse_utils.synapse_endpoint_dist_upstream(limb_obj, branch_idx, synapses=None, synapse_type='synapses', verbose=False)[source]
neurd.synapse_utils.synapse_endpoint_dist_upstream_downstream(limb_obj, branch_idx, direction, synapses=None, synapse_type='synapses', verbose=False)[source]

Purpose: Will compute the upstream or downstream distance of a synapse or group of synapses

Pseudocode: 1) Get the upstream or downstream endpoint index 2) For each synapse, get the upstream or downstream distance 3) Return the list

Ex: from neurd import synapse_utils as syu syu.synapse_endpoint_dist_upstream_downstream(limb_obj,

branch_idx = 16, direction="downstream", verbose = True)

neurd.synapse_utils.synapse_filtering_vp2(neuron_obj, split_index=None, nucleus_id=None, original_mesh=None, verbose=True, compute_synapse_to_soma_skeletal_distance=False, return_synapse_filter_info=True, return_error_synapse_ids=True, return_synapse_center_data=False, return_errored_synapses_ids_non_axons=True, return_error_table_entries=True, return_neuron_obj=False, apply_non_axon_presyn_errors=True, plot_synapses=False, validation=False)[source]

Purpose: Applying the synapse filtering by using the synapses incorporated in the neuron object

neurd.synapse_utils.synapse_head_perc(neuron_obj)[source]
neurd.synapse_utils.synapse_ids_to_synapses(neuron_obj, synapse_ids, verbose=True)[source]

If you have a list of synapse ids and want to find the corresponding synapse objects

(not used in queries at all)

neurd.synapse_utils.synapse_indexes_to_synapses(neuron_obj, synapse_indexes, verbose=False)[source]
neurd.synapse_utils.synapse_obj_from_dj_synapse_dict(synapse_dict)[source]

Purpose: To convert a list of dictionaries to synapse objects

Pseudocode: 1) convert the compartment_fine to just one label 2) convert the spine_bouton to the number 3) Send the new dictionary to a synapse object

neurd.synapse_utils.synapse_plot_items_by_type_or_query(synapses_objs, synapses_size=0.15, synapse_plot_type='spine_bouton', synapse_compartments=None, synapse_spine_bouton_labels=None, plot_error_synapses=True, valid_synapses_color='orange', error_synapses_color='aliceblue', synapse_queries=None, synapse_queries_colors=None, verbose=False, print_spine_colors=True)[source]
neurd.synapse_utils.synapse_post_perc(neuron_obj)[source]
neurd.synapse_utils.synapse_post_perc_downstream(limb_obj, branch_idx, verbose=False)[source]

Purpose: Find the postsyn percentage downstream of a branch

Ex: syu.synapse_post_perc_downstream( limb_obj = neuron_obj_exc_syn_sp[0], branch_idx = 6, verbose = True, )

neurd.synapse_utils.synapse_post_perc_over_limb_branch_dict(neuron_obj, limb_branch_dict)[source]

Purpose: Will compute the percentage of synapses that are postsyn over a limb branch dict

Ex: lb = dict(L0=branches) syn_post_perc = syu.synapse_post_perc_over_limb_branch_dict(neuron_obj_exc_syn_sp,

lb)
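The percentage these helpers compute is simple count arithmetic; a sketch with made-up counts over a hypothetical limb-branch restriction:

```python
# Illustrative postsyn-percentage arithmetic over a restriction:
# the fraction of all synapses that are postsyn. Counts are made up.
n_pre, n_post = 30, 120
syn_post_perc = n_post / (n_pre + n_post)
```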

neurd.synapse_utils.synapse_pre_perc(neuron_obj)[source]
neurd.synapse_utils.synapse_pre_perc_downstream(limb_obj, branch_idx, verbose=False)[source]

Purpose: Find the presyn percentage downstream of a branch

syu.synapse_pre_perc_downstream( limb_obj = neuron_obj_exc_syn_sp[0], branch_idx = 6, verbose = True, )

neurd.synapse_utils.synapse_pre_perc_over_limb_branch_dict(neuron_obj, limb_branch_dict)[source]

Purpose: Will compute the percentage of synapses that are presyn over a limb branch dict

Ex: branches = [0,5,6,7] lb = dict(L0=branches) syn_pre_perc = syu.synapse_pre_perc_over_limb_branch_dict(neuron_obj_exc_syn_sp,

lb)

neurd.synapse_utils.synapse_pre_post_valid_errror_coordinates_dict(neuron_obj)[source]

Purpose: To make a dictionary that has the valid and error synapses

Pseudocode: For valid and error:

for presyn and postsyn 1) Get the corresponding synapses 2) extract the coordinates from the synapses 3) store in the dictionary

neurd.synapse_utils.synapse_pre_post_valid_errror_stats_dict(neuron_obj)[source]
neurd.synapse_utils.synapse_spine_perc(neuron_obj)[source]
neurd.synapse_utils.synapses(neuron_obj)[source]
neurd.synapse_utils.synapses_bouton(neuron_obj)[source]
neurd.synapse_utils.synapses_by_compartment_spine_type(neuron_obj, compartment_label=None, spine_label=None, syn_type=None, plot_synapses=False, synapse_size=0.2, verbose=False, return_title=False, add_n_syn_to_title=False, **kwargs)[source]

Purpose: To be able to get the synapses of any specification of compartment and head_neck_shaft

Pseudocode: 1) Get the spine int label 2) Get the compartment string label 3) Decide whether should be presyn or postsyn 4) Assemble query 5) Query the synapses 6) Return the synapses

neurd.synapse_utils.synapses_df(neuron_obj, synapse_types_to_process=None, verbose=False, add_compartment_coarse_fine=False, decode_head_neck_shaft_idx=False, **kwargs)[source]

Purpose: To create a dataframe with all of the features of the synapses so the synapses can be queried

neurd.synapse_utils.synapses_distance_errored(neuron_obj)[source]
neurd.synapse_utils.synapses_dj_export_dict_error(synapse, **kwargs)[source]
neurd.synapse_utils.synapses_dj_export_dict_valid(synapse, output_spine_str=True)[source]
neurd.synapse_utils.synapses_downstream(limb_obj, branch_idx, verbose=False)[source]
neurd.synapse_utils.synapses_error(neuron_obj, error_synapses_names=None, presyns_on_dendrite_as_errors=True, verbose=False)[source]

Will get all of the errored synapses stored in the object

syu.synapses_error(neuron_obj,

verbose = True)

neurd.synapse_utils.synapses_error_ids(neuron_obj)[source]
neurd.synapse_utils.synapses_error_old(neuron_obj, error_synapses_names=None, presyns_on_dendrite_as_errors=True, verbose=False)[source]

Will get all of the errored synapses stored in the object

syu.synapses_error_old(neuron_obj,

verbose = True)

neurd.synapse_utils.synapses_error_post(neuron_obj)[source]
neurd.synapse_utils.synapses_error_post_coordinates(neuron_obj)[source]
neurd.synapse_utils.synapses_error_pre(neuron_obj)[source]
neurd.synapse_utils.synapses_error_pre_coordinates(neuron_obj)[source]
neurd.synapse_utils.synapses_head(neuron_obj)[source]
neurd.synapse_utils.synapses_mesh_errored(neuron_obj)[source]
neurd.synapse_utils.synapses_neck(neuron_obj)[source]
neurd.synapse_utils.synapses_no_head(neuron_obj)[source]
neurd.synapse_utils.synapses_non_bouton(neuron_obj)[source]
neurd.synapse_utils.synapses_obj_groups_from_queries(synapses_obj, queries, verbose=False)[source]

Purpose: Will divide synapse objects into groups based on a series of queries

neurd.synapse_utils.synapses_offset_distance_of_endpoint_upstream_downstream(branch_obj, direction, distance, synapse_type='synapses', verbose=True)[source]

Purpose: Will measure the number of synapses within a certain distance of the upstream or downstream endpoint

Pseudocode: 1) Get the desired synapses 2) Query the synapses based on the direction attribute

Ex: branch_obj = neuron_obj[0][2] synapses_within_upstream_downstream_endpoint(branch_obj,

direction="downstream", distance =15000, synapse_type="synapses_post", )

neurd.synapse_utils.synapses_over_limb_branch_dict(neuron_obj, limb_branch_dict, synapse_type='synapses', plot_synapses=False)[source]
neurd.synapse_utils.synapses_post(neuron_obj=None)[source]
neurd.synapse_utils.synapses_post_downstream(limb_obj, branch_idx, verbose=False)[source]

Purpose: Find the postsyn synapses downstream of a branch

syu.synapses_post_downstream( limb_obj = neuron_obj_exc_syn_sp[0], branch_idx = 6, verbose = True, )

neurd.synapse_utils.synapses_post_head(neuron_obj)[source]
neurd.synapse_utils.synapses_post_neck(neuron_obj)[source]
neurd.synapse_utils.synapses_post_no_head(neuron_obj)[source]
neurd.synapse_utils.synapses_post_shaft(neuron_obj)[source]
neurd.synapse_utils.synapses_post_spine(neuron_obj)[source]
neurd.synapse_utils.synapses_pre(neuron_obj)[source]
neurd.synapse_utils.synapses_pre_downstream(limb_obj, branch_idx, verbose=False)[source]

Purpose: Find the presyn synapses downstream of a branch

syu.synapses_pre_downstream( limb_obj = neuron_obj_exc_syn_sp[0], branch_idx = 6, verbose = True, )

neurd.synapse_utils.synapses_pre_shaft(neuron_obj)[source]
neurd.synapse_utils.synapses_shaft(neuron_obj)[source]
neurd.synapse_utils.synapses_somas(neuron_obj)[source]
neurd.synapse_utils.synapses_somas_postsyn(neuron_obj, verbose=False, plot=False, **kwargs)[source]
neurd.synapse_utils.synapses_spine(neuron_obj)[source]
neurd.synapse_utils.synapses_to_coordinates(synapses)[source]
neurd.synapse_utils.synapses_to_dj_keys(self, neuron_obj, valid_synapses=True, verbose=False, nucleus_id=None, split_index=None, output_spine_str=True, add_secondary_segment=True, ver=None, synapses=None, key=None)[source]

Pseudocode: 1) Get either the valid or invalid synapses 2) For each synapse, export the following as a dict

synapse_id=syn, synapse_type=synapse_type, nucleus_id = nucleus_id, segment_id = segment_id, split_index = split_index, skeletal_distance_to_soma=np.round(syn_dist[syn_i],2),

return the list

Ex: import numpy as np dj_keys = syu.synapses_to_dj_keys(neuron_obj,

verbose = True, nucleus_id=12345, split_index=0)

np.unique([k["compartment"] for k in dj_keys],return_counts=True)

Ex: How to get error keys with version: dj_keys = syu.synapses_to_dj_keys(neuron_obj,

valid_synapses = False,

verbose = True, nucleus_id=12345, split_index=0,

ver=158)

neurd.synapse_utils.synapses_to_dj_keys_old(neuron_obj, valid_synapses=True, verbose=False, nucleus_id=None, split_index=None, output_spine_str=True, ver=None)[source]

Pseudocode: 1) Get either the valid or invalid synapses 2) For each synapse, export the following as a dict

synapse_id=syn, synapse_type=synapse_type, nucleus_id = nucleus_id, segment_id = segment_id, split_index = split_index, skeletal_distance_to_soma=np.round(syn_dist[syn_i],2),

return the list

Ex: from datasci_tools import numpy_dep as np dj_keys = syu.synapses_to_dj_keys(neuron_obj,

verbose = True, nucleus_id=12345, split_index=0)

np.unique([k["compartment"] for k in dj_keys],return_counts=True)

Ex: How to get error keys with version: dj_keys = syu.synapses_to_dj_keys(neuron_obj,

valid_synapses = False,

verbose = True, nucleus_id=12345, split_index=0,

ver=158)

neurd.synapse_utils.synapses_to_exports(synapses)[source]
neurd.synapse_utils.synapses_to_feature(synapses, feature)[source]
neurd.synapse_utils.synapses_to_synapse_ids(synapses)[source]
neurd.synapse_utils.synapses_to_synapses_df(synapses, label='no_label', add_compartment_coarse_fine=False, decode_head_neck_shaft_idx=False)[source]
neurd.synapse_utils.synapses_total(neuron_obj)[source]

Purpose: to find the total number of synapses stored in the neuron object

Pseudocode: 1) get all of the soma synapses 2) get all of the errored synapses 3) get all of the limb_branch synapses

neurd.synapse_utils.synapses_type_and_head_neck_shaft(neuron_obj, syn_type, head_neck_shaft_type)[source]
neurd.synapse_utils.synapses_valid(neuron_obj, include_soma=True)[source]

Will return all valid synapses, including soma synapses when include_soma=True

neurd.synapse_utils.synapses_valid_old(neuron_obj, include_soma=True)[source]

Will return all valid synapses, including soma synapses when include_soma=True

neurd.synapse_utils.synapses_valid_post(neuron_obj)[source]
neurd.synapse_utils.synapses_valid_post_coordinates(neuron_obj)[source]
neurd.synapse_utils.synapses_valid_pre(neuron_obj)[source]
neurd.synapse_utils.synapses_valid_pre_coordinates(neuron_obj)[source]
neurd.synapse_utils.synapses_with_feature(synapses, feature_name, comparison_value, operator_func=<built-in function eq>, verbose=False)[source]

Purpose: Will find synapses with a certain feature

Possible operators: operator.eq operator.gt operator.ge operator.lt operator.le

Ex: synapses_with_feature(neuron_obj.synapses,

feature_name="syn_type", comparison_value="postsyn", verbose=True)

Ex: import operator syu.calculate_neuron_soma_distance(neuron_obj) syn_from_soma = syu.synapses_with_feature(neuron_obj.synapses,

feature_name = "soma_distance", comparison_value = 4000, operator_func=operator.le, verbose = True)

syn_coords = syu.synapses_to_coordinates(syn_from_soma)

nviz.plot_objects(neuron_obj.mesh,

scatters=[syn_coords])
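The operator-based filtering pattern above can be sketched standalone; the Synapse dataclass here is a hypothetical stand-in for neurd's synapse objects, not the real class:

```python
import operator
from dataclasses import dataclass

# Hypothetical stand-in for a neurd synapse object, for illustration only.
@dataclass
class Synapse:
    syn_type: str
    soma_distance: float

def synapses_with_feature(synapses, feature_name, comparison_value,
                          operator_func=operator.eq):
    # Keep the synapses whose named attribute compares true against
    # the value under the chosen operator (eq, le, ge, ...).
    return [s for s in synapses
            if operator_func(getattr(s, feature_name), comparison_value)]

synapses = [Synapse("presyn", 1500.0),
            Synapse("postsyn", 3000.0),
            Synapse("postsyn", 8000.0)]

# Select synapses within 4000 of the soma, mirroring the operator.le
# example in the docstring above.
near_soma = synapses_with_feature(synapses, "soma_distance", 4000,
                                  operator_func=operator.le)
```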

neurd.synapse_utils.synapses_within_distance_of_endpoint_upstream_downstream(branch_obj, direction, distance, synapse_type='synapses', verbose=True)[source]

Purpose: Will measure the number of synapses within a certain distance of the upstream or downstream endpoint

Pseudocode: 1) Get the desired synapses 2) Query the synapses based on the direction attribute

Ex: branch_obj = neuron_obj[0][2] synapses_within_upstream_downstream_endpoint(branch_obj,

direction="downstream", distance =15000, synapse_type="synapses_post", )

neurd.synapse_utils.valid_error_groups_from_synapses_obj(synapses_obj)[source]
neurd.synapse_utils.valid_query()[source]

neurd.vdi_default module

class neurd.vdi_default.DataInterfaceBoilerplate(source='default', **kwargs)[source]

Bases: ABC

__init__(source='default', **kwargs)[source]
add_nm_to_synapse_df(df)[source]
align_array(array, align_matrix=None, **kwargs)[source]
align_mesh(mesh, align_matrix=None, **kwargs)[source]
align_neuron_obj(neuron_obj, align_matrix=None, **kwargs)[source]

Keep the body of the function as "pass" unless the neuron obj needs to be rotated so the axon points down

align_neuron_obj_from_align_matrix(neuron_obj, align_matrix=None, **kwargs)[source]
align_skeleton(skeleton, align_matrix=None, **kwargs)[source]
cell_type(neuron_obj)[source]
load_neuron_obj(segment_id=None, mesh_decimated=None, mesh_filepath=None, meshes_directory=None, filepath=None, directory=None, filename=None, suffix='', verbose=False, **kwargs)[source]
multiplicity(neuron_obj)[source]

For those who don't store the output of each stage in the neuron obj, this function could be redefined to pull from a database

nucleus_id(neuron_obj)[source]
pre_post_synapse_ids_coords_from_connectome(segment_id_pre, segment_id_post, split_index_pre=0, split_index_post=0, synapse_pre_df=None, verbose=False)[source]
proof_version = 7
save_neuron_obj(neuron_obj, directory=None, filename=None, suffix='', verbose=False)[source]
Parameters:
  • neuron_obj (_type_) –

  • directory (_type_, optional) – by default None

  • filename (_type_, optional) – by default None

  • suffix (str, optional) – by default ''

  • verbose (bool, optional) – by default False

Return type:

_type_

segment_id_and_split_index(segment_id, split_index=0, return_dict=False)[source]
segment_id_to_synapse_dict(segment_id, **kwargs)[source]

Purpose: return a dictionary containing the presyn and postsyn information for a certain segment from the backend datasource implemented for the data. The returned dictionary should have the following structure, where all coordinates and sizes ARE SCALED TO NM ALREADY

syn_dict = dict(
    presyn = dict(
        synapse_ids = np.array (N),
        synapse_coordinates = np.array (Nx3),
        synapse_sizes = np.array (N),
    ),
    postsyn = dict(
        synapse_ids = np.array (N),
        synapse_coordinates = np.array (Nx3),
        synapse_sizes = np.array (N),
    ),
)

The default implementation assumes there is a local synapse csv file (whose path needs to be passed as an argument or set as an object attribute) with the following columns:

segment_id
segment_id_secondary
synapse_id
prepost       # presyn or postsyn
synapse_x     # in voxel coordinates
synapse_y     # in voxel coordinates
synapse_z     # in voxel coordinates
synapse_size  # in voxel coordinates

Example Implementation

cave_client_utils.synapse_df_from_seg_id
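The csv-backed flow described above can be sketched as follows. This is a minimal illustration, not the NEURD implementation: the scaling vector, the example rows, and the helper name `synapse_dict_from_df` are all hypothetical stand-ins; only the column names follow the documented csv schema.

```python
import numpy as np
import pandas as pd

# Hypothetical scaling vector; the real value comes from the
# volume's voxel_to_nm_scaling property.
voxel_to_nm_scaling = np.array([4, 4, 40])

# Stand-in rows for the local synapse csv described above.
df = pd.DataFrame({
    "segment_id": [864691135, 864691135],
    "segment_id_secondary": [111, 222],
    "synapse_id": [1, 2],
    "prepost": ["presyn", "postsyn"],
    "synapse_x": [10, 20],   # voxel coordinates
    "synapse_y": [30, 40],
    "synapse_z": [5, 6],
    "synapse_size": [100, 250],
})

def synapse_dict_from_df(df, scaling):
    """Build the presyn/postsyn dictionary with coordinates scaled to nm."""
    syn_dict = {}
    for prepost in ("presyn", "postsyn"):
        sub = df[df["prepost"] == prepost]
        coords = sub[["synapse_x", "synapse_y", "synapse_z"]].to_numpy()
        syn_dict[prepost] = dict(
            synapse_ids=sub["synapse_id"].to_numpy(),
            synapse_coordinates=coords * scaling,  # voxel -> nm
            synapse_sizes=sub["synapse_size"].to_numpy(),
        )
    return syn_dict

syn_dict = synapse_dict_from_df(df, voxel_to_nm_scaling)
```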

segment_id_to_synapse_table_optimized(segment_id, synapse_type=None, filter_away_self_synapses=True, coordinates_nm=True, synapse_filepath=None, **kwargs)[source]

Purpose: Given a segment id (or neuron obj), retrieves the synapses from a backend synapse implementation, renamed in a particular manner

Parameters:
  • segment_id (int or neuron.Neuron) –

  • synapse_type (_type_, optional) – by default None

  • filter_away_self_synapses (bool, optional) – by default True

  • coordinates_nm (bool, optional) – by default True

  • synapse_filepath (_type_, optional) – by default None

Return type:

_type_

Raises:

Exception

set_parameters_for_directory_modules(directory=None, verbose=False, obj=None, **kwargs)[source]
set_parameters_obj()[source]

Purpose: To set the parameters obj using the

set_parameters_obj_from_dict(parameters, verbose=False, **kwargs)[source]
set_parameters_obj_from_filepath(filepath=None, set_module_parameters=True)[source]
unalign_neuron_obj(neuron_obj, align_matrix=None, **kwargs)[source]

Keep the body of the function as "pass" unless the neuron obj needs to be rotated so the axon is pointing down

unalign_neuron_obj_from_align_matrix(neuron_obj, align_matrix=None, **kwargs)[source]
property vdi
class neurd.vdi_default.DataInterfaceDefault(*args, **kwargs)[source]

Bases: DataInterfaceBoilerplate

Class to outline which functions to overload to implement a volume data interface that will work with NEURD. All exposed methods fall under the following categories

  1. required abstract method

  2. data fetchers/setters

  3. autoproofreading filter settings

All fetchers and setters have a default implementation where data is stored locally in a csv file (for synapses) or in the neuron object. If exporting data to a non-local source (ex: a database), override these functions to pull from those sources
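As a schematic illustration of the override pattern, a reduced stand-in ABC mirroring the required abstract members is shown below. This is not the actual NEURD base class; the 4x4x40 nm voxel size and the identity align matrix are hypothetical choices for the sketch.

```python
from abc import ABC, abstractmethod

import numpy as np

# Reduced stand-in for the base class (the real one is
# neurd.vdi_default.DataInterfaceDefault); only the required
# overrides documented below are mirrored here.
class _DataInterfaceSketch(ABC):

    @property
    @abstractmethod
    def voxel_to_nm_scaling(self):
        """1x3 scaling vector from voxel units to nm."""

    @abstractmethod
    def get_align_matrix(self, *args, **kwargs):
        """3x3 rotation aligning apicals to the +z direction."""

    @abstractmethod
    def segment_id_to_synapse_df(self, segment_id, **kwargs):
        """Synapse dataframe for one segment id."""

class MyVolumeInterface(_DataInterfaceSketch):

    @property
    def voxel_to_nm_scaling(self):
        # hypothetical 4 x 4 x 40 nm voxel size
        return np.array([4, 4, 40])

    def get_align_matrix(self, *args, **kwargs):
        # identity: assume apicals already point in +z
        return np.eye(3)

    def segment_id_to_synapse_df(self, segment_id, **kwargs):
        raise NotImplementedError("fetch from your synapse store here")

vdi = MyVolumeInterface()
```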

__init__(*args, **kwargs)[source]
property default_low_degree_graph_filters

The graph filters to be used by the ‘exc_low_degree_branching_filter’ for autoproofreading, which inspects axon branches with exactly 2 downstream nodes and classifies them as errors if one of the following graph filters has a successful match. Overriding this function could be as simple as excluding some filters that are not applicable/do not work for your volume even with parameters tuned

Return type:

List[graph filter functions]

exc_filters_auto_proof(**kwargs)[source]

All autoproofreading filters (referenced in proofreading_utils.py) that will be used for excitatory cells

Return type:

List[filter objects]

fetch_proofread_mesh(segment_id: int | Neuron, split_index: int = 0, plot_mesh: bool = False, **kwargs) Trimesh[source]

Retrieve mesh after autoproofreading filtering. Default implementation uses a local solution of extracting the mesh from the neuron object, but the proofreading mesh could be stored in an external database with only the segment id and split index needed to retrieve.

Parameters:
  • segment_id (Union[int,Neuron]) – proofread neuron object from which the mesh can be extracted or an int representing the segment id for external database implementation where saved products indexed by unique segment_id and split index

  • split_index (int, optional) – for external database implementation where saved products indexed by unique segment_id and split index, by default 0

  • plot_mesh (bool, optional) – by default False

Returns:

auto proofread mesh

Return type:

trimesh.Trimesh

fetch_segment_id_mesh(segment_id: int | None = None, meshes_directory: str | None = None, mesh_filepath: str | None = None, plot: bool = False, ext: str = 'off') Trimesh[source]

Purpose: retrieve a decimated segment id mesh. Current implementation assumes a local filepath storing all meshes.

Parameters:
  • segment_id (int, optional) – neuron segment id, by default None

  • meshes_directory (str, optional) – location of decimated mesh files, by default None

  • mesh_filepath (str, optional) – complete path of location and filename for neuron , by default None

  • plot (bool, optional) – by default False

  • ext (str, optional) – the file extension for mesh storage, by default “off”

Returns:

decimated mesh for segment id

Return type:

trimesh.Trimesh

fetch_soma_mesh(segment_id: int | Neuron, split_index: int = 0, plot_mesh: bool = False, **kwargs)[source]

Retrieve soma mesh. Default implementation uses a local solution of extracting the soma mesh from the neuron object, but the soma mesh could be stored in an external database with only the segment id and split index needed to retrieve.

Parameters:
  • segment_id (Union[int,Neuron]) – neuron object from which the mesh can be extracted or an int representing the segment id for external database implementation where saved products indexed by unique segment_id and split index

  • split_index (int, optional) – for external database implementation where saved products indexed by unique segment_id and split index, by default 0

  • plot_mesh (bool, optional) – by default False

Returns:

soma mesh

Return type:

trimesh.Trimesh

fetch_undecimated_segment_id_mesh(segment_id: int, meshes_directory: str | None = None, plot: bool = False, ext: str = 'off') Trimesh[source]
Parameters:
  • segment_id (int) –

  • meshes_directory (str, optional) – by default None

  • plot (bool, optional) – by default False

  • ext (str, optional) – by default “off”

Returns:

undecimated mesh for segment id

Return type:

trimesh.Trimesh

abstract get_align_matrix(*args, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: a transformation matrix (call it A, 3x3) that, when applied to a matrix of 3D coordinates (call it B, Nx3) as the matrix multiplication C = BA, will produce a new matrix of rotated coordinates (call it C, Nx3) so that all coordinates of a mesh or skeleton are rotated to ensure that the apical of the neuron is generally directed in the positive z direction.
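For illustration, the C = BA convention above applied with a hypothetical align matrix (a 180-degree rotation about the x axis, which would flip a downward-pointing apical to +z):

```python
import numpy as np

# Hypothetical align matrix A: 180-degree rotation about the x axis,
# mapping -z to +z.
align_matrix = np.array([
    [1.0,  0.0,  0.0],
    [0.0, -1.0,  0.0],
    [0.0,  0.0, -1.0],
])

# B: Nx3 coordinates (e.g. skeleton nodes or mesh vertices)
coords = np.array([[100.0, 50.0, -200.0]])

# C = B A rotates every coordinate in one matrix multiplication
aligned = coords @ align_matrix
```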

graph_obj_from_proof_stage(segment_id: int | Neuron, split_index: int = 0, clean: bool = True, verbose: bool = False, **kwargs) DiGraph[source]

Purpose: Retrieve the lite neuron_obj (implemented). Local implementation retrieves it from the pipeline products of the neuron obj, but could be overridden to fetch from an external store using the segment id and split index

Parameters:
  • segment_id (Union[int,Neuron]) –

  • split_index (int, optional) – by default 0

  • clean (bool, optional) – by default True

  • verbose (bool, optional) – by default False

Returns:

neuron_obj_lite

Return type:

nx.DiGraph

inh_filters_auto_proof(**kwargs)[source]

All autoproofreading filters (referenced in proofreading_utils.py) that will be used for inhibitory cells

Return type:

List[filter functions]

load_neuron_obj_auto_proof(segment_id: str, mesh_decimated: Trimesh | None = None, directory: str | None = None, compressed=True, **kwargs) Neuron[source]

Loading an external neuron file into a python object. The current implementation assumes the default .pbz2 compression method, which does not store the mesh information; this is why the mesh object needs to be passed as an argument

Parameters:
  • segment_id (str) –

  • mesh_decimated (trimesh.Trimesh, optional) – the original decimated mesh before any proofreading, by default None

  • directory (str, optional) – filepath location of saved .pbz2 file, by default self.neuron_obj_auto_proof_directory

Return type:

Neuron

nuclei_classification_info_from_nucleus_id(nuclei: int, *args, **kwargs) dict[source]

Purpose: To return a dictionary of cell type information (same structure as returned by the Allen Institute for Brain Science CAVE client) from an external database. No external database is currently set up, so a None-filled dictionary is returned.

Example Returns:

{
    'external_cell_type': 'excitatory',
    'external_cell_type_n_nuc': 1,
    'external_cell_type_fine': '23P',
    'external_cell_type_fine_n_nuc': 1,
    'external_cell_type_fine_e_i': 'excitatory'
}

Parameters:

nuclei (int) –

Returns:

nuclei info about classification (fine and coarse)

Return type:

dict

nuclei_from_segment_id(segment_id: int, return_centers: bool = True, return_nm: bool = True) array[source]

Retrieves the nuclei ids (and possibly the nuclei centers) from an external database. No external database is currently set up, so None is returned.

Parameters:
  • segment_id (int) –

  • return_centers (bool, optional) – whether to return the nuclei center coordinates along with the ids, by default True

  • return_nm (bool, optional) – whether to return nuclei center coordinates in nm units, by default True

Returns:

  • nuclei_ids (np.array (N,)) – nuclei ids corresponding to segment_id

  • nuclei_centers (np.array (N,3), optional) – center locations for the corresponding nuclei

save_neuron_obj_auto_proof(neuron_obj: Neuron, directory: str | None = None, filename: str | None = None, suffix: str | None = None, verbose: bool = False, compressed=True) str[source]

Save a neuron object in the autoproofreading directory (using the default pbz2 compressed method that does not save the mesh along with it). This is the current local implementation; it should be overridden if the proofreading neuron objects are to be saved in an external store

Default filename: {segment_id}.pbz2

Parameters:
  • neuron_obj (Neuron) –

  • directory (str, optional) – location for storing .pbz2 files, by default None

  • filename (str, optional) – a custom name for compressed neuron file to replace the default name, by default None

  • suffix (str, optional) – change filename to {segment_id}{suffix}.pbz2 , by default None

  • verbose (bool, optional) – by default False

Returns:

filepath of saved neuron file

Return type:

str

abstract segment_id_to_synapse_df(segment_id, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: return a dataframe with the presyn and postsyn information for a certain segment from the backend data source. The returned dataframe should contain the following columns:

segment_id
segment_id_secondary
synapse_id
prepost       # presyn or postsyn
synapse_x     # in voxel coordinates
synapse_y     # in voxel coordinates
synapse_z     # in voxel coordinates
synapse_size  # in voxel coordinates

The default implementation assumes there is a local synapse csv file (whose path needs to be passed as an argument or set as an object attribute)

Parameters:
  • segment_id (int) –

  • coordinates_nm (bool) – Whether to scale the coordinate to nm units

  • scaling (np.array) – The scaling factor to use

Returns:

dataframe with all of the relevant synapse information for one segment id

Return type:

pd.DataFrame
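A sketch of what a csv-backed override of this method might look like. This is an assumption-laden illustration, not the NEURD code: the in-memory csv, its rows, and the `synapse_filepath` handling are hypothetical; only the required column names come from the documentation above.

```python
import io

import pandas as pd

# Column schema documented for segment_id_to_synapse_df.
REQUIRED_COLUMNS = [
    "segment_id", "segment_id_secondary", "synapse_id", "prepost",
    "synapse_x", "synapse_y", "synapse_z", "synapse_size",
]

def segment_id_to_synapse_df(segment_id, synapse_filepath, **kwargs):
    """Sketch of an override: read a local csv and restrict to one segment."""
    df = pd.read_csv(synapse_filepath)
    missing = set(REQUIRED_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"synapse csv missing columns: {missing}")
    return df[df["segment_id"] == segment_id].reset_index(drop=True)

# Hypothetical two-row csv standing in for the local synapse file.
synapse_csv = io.StringIO(
    "segment_id,segment_id_secondary,synapse_id,prepost,"
    "synapse_x,synapse_y,synapse_z,synapse_size\n"
    "10,20,1,presyn,1,2,3,100\n"
    "11,21,2,postsyn,4,5,6,200\n"
)
df_out = segment_id_to_synapse_df(10, synapse_csv)
```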

segment_id_to_synapse_table_optimized_connectome(segment_id: int | Neuron, split_index: int = 0, synapse_type: str | None = None, coordinates_nm: bool = False, **kwargs)[source]

Purpose: to return a dataframe of the connections before proofreading, constrained so that the given segment_id/split_index is the presyn or postsyn. Not implemented for local storage

segment_id_to_synapse_table_optimized_proofread(segment_id: int | Neuron, split_index: int = 0, synapse_type: str = None, **kwargs)[source]

Purpose: to return a dataframe of the valid connections in the proofread segment/split. Currently only implemented for the local solution, where synapse information is stored in a local csv and proofread synapses are stored in the neuron object. Could be overridden to pull original or proofread synapses from an external source.

Parameters:
  • segment_id (Union[int,Neuron]) – neuron obj with proofread synapses, or just segment id if synapses stored externally

  • split_index (int, optional) – identifier for segment if stored externally, by default 0

  • synapse_type (str, optional) – presyn or postsyn restriction, by default None

Returns:

synapse_df

Return type:

pd.DataFrame

set_synapse_filepath(synapse_filepath: str) None[source]

sets the location and filename of the synapse csv for the default implementation that loads synapses from a local csv file

Parameters:

synapse_filepath (str) – complete folder path and filename for synapse csv

soma_nm_coordinate(segment_id: int | Neuron, split_index: int = 0, return_dict: bool = False, **kwargs) array[source]

Return the soma coordinate for a segment. Implemented with a local solution that accepts a neuron object, but could be overridden to fetch from an external store.

Parameters:
  • segment_id (Union[int,Neuron]) –

  • split_index (int, optional) – by default 0

  • return_dict (bool, optional) – by default False

Returns:

soma coordinate

Return type:

np.array (3,)

abstract property voxel_to_nm_scaling

*REQUIRED OVERRIDE*

Purpose: Provide a 1x3 numpy matrix representing the scaling of voxel units to nm units. If the data is already in nm format then just assign a ones matrix

Returns:

scaling_vector – vector that can convert a matrix or vector of 3D voxel coordinates to 3D nm coordinates (default: np.array([1,1,1]))

Return type:

np.array
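Applied to coordinates, the scaling vector broadcasts elementwise over an Nx3 matrix. The 4x4x40 value below is a hypothetical voxel size for illustration, not a value specific to any volume:

```python
import numpy as np

# Hypothetical 4 x 4 x 40 nm voxel size (not specific to any volume).
voxel_to_nm_scaling = np.array([4, 4, 40])

voxel_coords = np.array([
    [10, 10, 10],
    [1, 2, 3],
])

# Elementwise multiplication broadcasts the 1x3 vector over the Nx3 matrix.
nm_coords = voxel_coords * voxel_to_nm_scaling
```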

neurd.vdi_default.neuron_obj_func(func)[source]

neurd.vdi_h01 module

class neurd.vdi_h01.DataInterfaceH01(**kwargs)[source]

Bases: DataInterfaceDefault

__init__(**kwargs)[source]
property default_low_degree_graph_filters

The graph filters to be used by the ‘exc_low_degree_branching_filter’ for autoproofreading, which inspects axon branches with exactly 2 downstream nodes and classifies them as errors if one of the following graph filters has a successful match. Overriding this function could be as simple as excluding some filters that are not applicable/do not work for your volume even with parameters tuned

Return type:

List[graph filter functions]

get_align_matrix(neuron_obj=None, soma_center=None, rotation=None, verbose=False, **kwargs)[source]

Purpose: generate the alignment matrix from the soma center (the rotation depends only on the location of the cell in the volume)

segment_id_to_synapse_df(*args, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: return a dataframe with the presyn and postsyn information for a certain segment from the backend data source. The returned dataframe should contain the following columns:

segment_id
segment_id_secondary
synapse_id
prepost       # presyn or postsyn
synapse_x     # in voxel coordinates
synapse_y     # in voxel coordinates
synapse_z     # in voxel coordinates
synapse_size  # in voxel coordinates

The default implementation assumes there is a local synapse csv file (whose path needs to be passed as an argument or set as an object attribute)

Parameters:
  • segment_id (int) –

  • coordinates_nm (bool) – Whether to scale the coordinate to nm units

  • scaling (np.array) – The scaling factor to use

Returns:

dataframe with all of the relevant synapse information for one segment id

Return type:

pd.DataFrame

property voxel_to_nm_scaling

*REQUIRED OVERRIDE*

Purpose: Provide a 1x3 numpy matrix representing the scaling of voxel units to nm units. If the data is already in nm format then just assign a ones matrix

Returns:

scaling_vector – vector that can convert a matrix or vector of 3D voxel coordinates to 3D nm coordinates (default: np.array([1,1,1]))

Return type:

np.array

neurd.vdi_microns module

class neurd.vdi_microns.DataInterfaceMicrons(**kwargs)[source]

Bases: DataInterfaceDefault

__init__(**kwargs)[source]
get_align_matrix(*args, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: a transformation matrix (call it A, 3x3) that, when applied to a matrix of 3D coordinates (call it B, Nx3) as the matrix multiplication C = BA, will produce a new matrix of rotated coordinates (call it C, Nx3) so that all coordinates of a mesh or skeleton are rotated to ensure that the apical of the neuron is generally directed in the positive z direction.

segment_id_to_synapse_df(*args, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: return a dataframe with the presyn and postsyn information for a certain segment from the backend data source. The returned dataframe should contain the following columns:

segment_id
segment_id_secondary
synapse_id
prepost       # presyn or postsyn
synapse_x     # in voxel coordinates
synapse_y     # in voxel coordinates
synapse_z     # in voxel coordinates
synapse_size  # in voxel coordinates

The default implementation assumes there is a local synapse csv file (whose path needs to be passed as an argument or set as an object attribute)

Parameters:
  • segment_id (int) –

  • coordinates_nm (bool) – Whether to scale the coordinate to nm units

  • scaling (np.array) – The scaling factor to use

Returns:

dataframe with all of the relevant synapse information for one segment id

Return type:

pd.DataFrame

property voxel_to_nm_scaling

*REQUIRED OVERRIDE*

Purpose: Provide a 1x3 numpy matrix representing the scaling of voxel units to nm units. If the data is already in nm format then just assign a ones matrix

Returns:

scaling_vector – vector that can convert a matrix or vector of 3D voxel coordinates to 3D nm coordinates (default: np.array([1,1,1]))

Return type:

np.array

neurd.vdi_microns_cave module

class neurd.vdi_microns_cave.DataInterfaceMicrons(release_name=None, env_filepath=None, cave_token=None, client=None, **kwargs)[source]

Bases: DataInterfaceDefault

__init__(release_name=None, env_filepath=None, cave_token=None, client=None, **kwargs)[source]
fetch_segment_id_mesh(segment_id=None, plot=False)[source]
get_align_matrix(*args, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: a transformation matrix (call it A, 3x3) that, when applied to a matrix of 3D coordinates (call it B, Nx3) as the matrix multiplication C = BA, will produce a new matrix of rotated coordinates (call it C, Nx3) so that all coordinates of a mesh or skeleton are rotated to ensure that the apical of the neuron is generally directed in the positive z direction.

segment_id_to_synapse_df(segment_id, *args, **kwargs)[source]

*REQUIRED OVERRIDE*

Purpose: return a dataframe with the presyn and postsyn information for a certain segment from the backend data source. The returned dataframe should contain the following columns:

segment_id
segment_id_secondary
synapse_id
prepost       # presyn or postsyn
synapse_x     # in voxel coordinates
synapse_y     # in voxel coordinates
synapse_z     # in voxel coordinates
synapse_size  # in voxel coordinates

The default implementation assumes there is a local synapse csv file (whose path needs to be passed as an argument or set as an object attribute)

Parameters:
  • segment_id (int) –

  • coordinates_nm (bool) – Whether to scale the coordinate to nm units

  • scaling (np.array) – The scaling factor to use

Returns:

dataframe with all of the relevant synapse information for one segment id

Return type:

pd.DataFrame

property voxel_to_nm_scaling

*REQUIRED OVERRIDE*

Purpose: Provide a 1x3 numpy matrix representing the scaling of voxel units to nm units. If the data is already in nm format then just assign a ones matrix

Returns:

scaling_vector – vector that can convert a matrix or vector of 3D voxel coordinates to 3D nm coordinates (default: np.array([1,1,1]))

Return type:

np.array

neurd.version module

neurd.volume_utils module

class neurd.volume_utils.DataInterface(source, voxel_to_nm_scaling=None, synapse_filepath=None)[source]

Bases: ABC

__init__(source, voxel_to_nm_scaling=None, synapse_filepath=None)[source]
abstract align_array()[source]
abstract align_mesh()[source]
abstract align_neuron_obj()[source]
abstract align_skeleton()[source]
nuclei_classification_info_from_nucleus_id(nuclei, *args, **kwargs)[source]

Purpose: To return a dictionary of cell type information (same structure as returned by the Allen Institute for Brain Science CAVE client)

Example Returns:

{
    'external_cell_type': 'excitatory',
    'external_cell_type_n_nuc': 1,
    'external_cell_type_fine': '23P',
    'external_cell_type_fine_n_nuc': 1,
    'external_cell_type_fine_e_i': 'excitatory'
}

nuclei_from_segment_id(segment_id, return_centers=True, return_nm=True)[source]

Purpose: To return the nucleus information corresponding to a segment

abstract segment_id_to_synapse_dict(**kwargs)[source]
set_synapse_filepath(synapse_filepath)[source]
abstract unalign_neuron_obj()[source]

neurd.width_utils module

neurd.width_utils.calculate_new_width(branch, skeleton_segment_size=1000, width_segment_size=None, return_average=False, distance_by_mesh_center=True, no_spines=True, summary_measure='mean', no_boutons=False, print_flag=False, distance_threshold_as_branch_width=False, distance_threshold=3000, old_width_calculation=None)[source]

Purpose: To calculate the overall width

Ex:

curr_branch_obj = neuron_obj_exc_syn_sp[4][30]
skeleton_segment_size = 1000
width_segment_size = None
width_name = "no_spine_average"
distance_by_mesh_center = True
no_spines = True
summary_measure = "mean"
current_width_array, current_width = wu.calculate_new_width(
    curr_branch_obj,
    skeleton_segment_size=skeleton_segment_size,
    width_segment_size=width_segment_size,
    distance_by_mesh_center=distance_by_mesh_center,
    no_spines=no_spines,
    summary_measure=summary_measure,
    return_average=True,
    print_flag=True,
)

neurd.width_utils.calculate_new_width_for_neuron_obj(neuron_obj, skeleton_segment_size=1000, width_segment_size=None, width_name=None, distance_by_mesh_center=True, no_spines=True, summary_measure='mean', limb_branch_dict=None, verbose=True, skip_no_spine_width_if_no_spine=True, **kwargs)[source]

Purpose: To calculate new width definitions based on whether 1) you want to use the skeleton center or mesh center 2) you want to include spines or not

Examples:

current_neuron.calculate_new_width(no_spines=False, distance_by_mesh_center=True)
current_neuron.calculate_new_width(no_spines=False, distance_by_mesh_center=True, summary_measure="median")
current_neuron.calculate_new_width(no_spines=True, distance_by_mesh_center=True, summary_measure="mean")
current_neuron.calculate_new_width(no_spines=True, distance_by_mesh_center=True, summary_measure="median")

neurd.width_utils.find_mesh_width_array_border(curr_limb, node_1, node_2, width_name='no_spine_median_mesh_center', segment_start=1, segment_end=4, skeleton_segment_size=None, width_segment_size=None, recalculate_width_array=False, default_segment_size=1000, no_spines=True, summary_measure='mean', print_flag=True, **kwargs)[source]

Purpose: To send back an array that represents the widths of the current branches at their boundary - if specified, the widths may be calculated differently than those currently stored

Applications: 1) Will help with filtering out false positives in axon detection 2) For merge detection, to help detect large width changes

Process: 0) make sure the two nodes are connected in the concept network 1) if skeleton_segment_size and width_segment_size are None then recalculate the width array 2) calculate the endpoints from the skeletons (to ensure they are in the right order) 3) find the connectivity of the endpoints 4) Get the subarrays of the width arrays according to the start and end specified 5) return the subarrays

Example of Use:

find_mesh_width_array_border(
    curr_limb=curr_limb_obj,
    #node_1=56,
    #node_2=71,
    node_1=8,
    node_2=5,
    width_name="no_spine_average_mesh_center",
    segment_start=1,
    segment_end=4,
    skeleton_segment_size=50,
    width_segment_size=None,
    recalculate_width_array=True,  # will automatically recalculate the width array
    default_segment_size=1000,
    print_flag=True,
)

neurd.width_utils.neuron_width_calculation_standard(neuron_obj, widths_to_calculate=('median_mesh_center', 'no_spine_median_mesh_center'), verbose=True, limb_branch_dict=None, **kwargs)[source]
neurd.width_utils.new_width_from_mesh_skeleton(skeleton, mesh, skeleton_segment_size=1000, width_segment_size=None, return_average=True, distance_by_mesh_center=True, distance_threshold_as_branch_width=False, distance_threshold=3000, summary_measure='median', verbose=False, backup_width=None)[source]

Purpose: To calculate the new width from a skeleton and the surrounding mesh

neurd.width_utils.skeleton_resized_ordered(skeleton, skeleton_segment_size=1000, width_segment_size=None, return_skeletal_points=False, verbose=False)[source]

Module contents

neurd.set_volume_params(volume='microns', verbose=False, verbose_loop=False)[source]