I'm trying to use Python to access Nastran quad element shell forces via the H5 file. I've had good luck with processing SPC, MPC, CBUSH, and BAR forces, but when trying to extract shell forces from a large model my Python script is taking a SUPER long time. For example, I have an H5 file that contains results for 93,407 Quad4 elements for 28 subcases. The snippet of code below is what I'm using:
out = open("outfile.csv,"w") # - open the CSV file to write to
for i in quad_ids:
out.write("%d"%i)
for case in subcases:
print "case = %s"%case['subtitle']
domid = case['domid']
for a in quad.where("(DOMAIN_ID == %d) & (%s == %d)"%(domid,keyid,i)):
for b in keys:
out.write(",%f" % (a[b][0]))
out.write("\n")
out.close()
data.close()
The quad_ids list contains all of the quad element IDs, and subcases is a list of dictionaries, each holding a subcase's title and domain ID.
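For context, the supporting variables are built along these lines (a rough sketch; the column names EID, MX, MY, etc. are assumptions based on a typical QUAD4_CN layout and should be checked against the actual file):

# Rough sketch of the supporting variables used in the snippet above.
# The column names are assumptions, not taken from a real file.
quad_ids = [101, 102, 103]                     # all QUAD4 element IDs in the model
subcases = [                                   # one dict per subcase/domain
    {'subtitle': 'LC1 - 1G DOWN', 'domid': 1},
    {'subtitle': 'LC2 - 1G AFT',  'domid': 2},
]
keyid = "EID"                                  # column holding the element ID
keys = ["MX", "MY", "MXY", "BMX", "BMY", "BMXY", "TX", "TY"]  # force columns to export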
I ran this code on the H5 file and it ran for 6 hours straight before I finally killed it. It was working OK, but it ran super, super slowly. The total number of records in the QUAD4_CN table is 93,407 x 28 = 2,615,396.
I'm a novice with PyTables, so I did my best to optimize the speed by using the in-kernel query (which supposedly uses the C-compiled search). Has anyone else experienced this slowness, and can you offer any advice on speeding up my code?
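For reference, here is the in-kernel query pattern boiled down to a standalone toy example (the table layout below is invented purely for illustration; it is not the Nastran H5 schema):

import tables

# Toy table with made-up columns, just to show the in-kernel query call.
class Force(tables.IsDescription):
    EID = tables.Int64Col()
    DOMAIN_ID = tables.Int64Col()
    MX = tables.Float64Col()

with tables.open_file("toy.h5", mode="w") as h5:
    tbl = h5.create_table("/", "forces", Force)
    row = tbl.row
    for eid in range(1, 6):
        for dom in range(1, 3):
            row['EID'] = eid
            row['DOMAIN_ID'] = dom
            row['MX'] = eid * 10.0 + dom
            row.append()
    tbl.flush()

    # The condition string is compiled by numexpr and evaluated in C, but
    # each .where() call still scans the table unless the queried columns
    # are indexed, so millions of separate calls add up quickly.
    for r in tbl.where("(DOMAIN_ID == 1) & (EID == 3)"):
        print(r['MX'])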
BTW, your force results come back as a NumPy array (not a list). This happens when you request results at multiple locations (centroid + grids), and you will see it elsewhere too: element stresses and strains are two examples. Note that PyTables and h5py can both work with an array of arrays (like your data structure). You can see how this works if you deconstruct the statement out.write(",%f" % (a[b][0])):
a is the row object returned by the .where() query (in your case, one row of data)
a[b] is the 'b' field/column of that row (e.g., the fields for MX, MY, etc.)
a[b][0] is the first entry in that field, i.e. the value at the element centroid; a[b][1] would be the force at the first grid, and so on (a short sketch follows this list).
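A quick way to see that layout, reusing the variables from the snippet in the question ("MX" and the 5-entry shape are assumptions for a QUAD4 with centroid + 4 corner output):

# Illustrative fragment: decompose one row returned by .where()
for a in quad.where("(DOMAIN_ID == %d) & (%s == %d)" % (domid, keyid, i)):
    mx = a["MX"]        # NumPy array of MX values for this element/subcase
    print(mx.shape)     # e.g. (5,): centroid followed by the 4 corner grids
    print(mx[0])        # MX at the element centroid
    print(mx[1])        # MX at the first corner grid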