numpy - Python float types vs Decimal


I am an intern at TCD, in physics.

I wrote some code to perform data analysis on random particle packings. The code is written in Python.

The code reads in columns of data from a .txt file provided by my supervisor.

Here is an example of the data:

0.52425196504624715921  0.89754790324432953685  0.44222783508101531913 

I wrote the following code to read in the data:

from decimal import Decimal
from numpy import *

c1, c2, r = loadtxt("second.txt", usecols=(0, 1, 2), unpack=True, dtype=dtype(Decimal))

As you can see, I used the Decimal dtype to read in all the decimal places of the numbers, and to be sure the calculations are reliable.
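
For reference, here is a rough pure-Python sketch of the same read (assuming the three columns sit at indices 0, 1 and 2):

from decimal import Decimal

# Read the three columns into lists of Decimals, line by line
c1, c2, r = [], [], []
with open("second.txt") as f:
    for line in f:
        fields = line.split()
        if not fields:
            continue  # skip blank lines
        c1.append(Decimal(fields[0]))
        c2.append(Decimal(fields[1]))
        r.append(Decimal(fields[2]))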

Now, in order to speed up the code, I am wondering if there is a numpy dtype that would make things faster while still keeping all the decimal places. I did try:

c1, c2, r = loadtxt("second.txt", usecols=(1, 2, 4), unpack=True, dtype=float128)

and

c1, c2, r = loadtxt("second.txt", usecols=(1, 2, 4), unpack=True, dtype=longdouble)

However, here is the output compared:

Decimal :    0.98924652608783791852
float128 :   0.98924653
longdouble : 0.98924653
float32 :    0.98924655

I am using a 64-bit desktop with 4 GB of RAM.
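
One thing I am unsure about: are the digits really gone, or is numpy just not printing them? As far as I can tell, numpy displays only 8 significant digits of an array by default, and longdouble on x86 should store roughly 18-19 decimal digits. This is the check I would run (a sketch, assuming a numpy recent enough to have format_float_positional):

import numpy as np

# numpy prints array elements with only 8 significant digits by default,
# so some of the "lost" digits may exist in memory but not in the output
x = np.longdouble("0.98924652608783791852")

# Show every digit the longdouble actually stores
print(np.format_float_positional(x, precision=25))

# Or raise the global print precision for whole arrays
np.set_printoptions(precision=20)
print(np.array([x]))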

Well, I tried writing code in C++ that multiplies every element of the columns of data by enough powers of 10 that a new .txt file is produced with the following columns:

52425196504624715921    89754790324432953685    44222783508101531913 

But when I run the Python code (I have to have it in Python):

c1, c2, r = loadtxt("second.txt", usecols=(1, 2, 4), unpack=True, dtype=int)

I get the following error message:

OverflowError: Python int too large to convert to C long
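
From what I can tell, a C long on a 64-bit machine tops out near 9.2 x 10^18, while my scaled values are around 10^19 to 10^20, which would explain the overflow. A workaround I am considering is keeping the values as arbitrary-precision Python ints inside an object array; the converters and encoding arguments below are my guess at how to make loadtxt do that:

import numpy as np

# Store arbitrary-precision Python ints in an object array,
# so no value ever has to fit in a C long
c1, c2, r = np.loadtxt(
    "second.txt",
    usecols=(0, 1, 2),                    # column indices assumed
    unpack=True,
    dtype=object,
    converters={0: int, 1: int, 2: int},  # parse each field with Python's int
    encoding="utf-8",                     # so converters receive str, not bytes
)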

Please pardon my limited knowledge of the language.

Any solution would be extremely welcome: I need to speed up the code, either by reading in the data directly or by performing the required conversion to integers.

