SCIENTIFIC-LINUX-USERS Archives

March 2013

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: Keith Lofstrom <[log in to unmask]>
Date: Fri, 15 Mar 2013 08:59:33 -0700
I'm doing some very big phased array calculations on an oldish
Core2 Duo, preparing to migrate the inner loops to an nVidia
GPU.  These calculations do a lot of differencing when
computing nulls in the interference patterns (as nature does!),
and I presume single precision will handle that relatively
inaccurately.

I'm running 32-bit SL6.2 on the test machine, with gcc and libm.

I ran two calculations side by side, one with floats and one
with doubles, and they appear to have done exactly the same
thing, even with the same runtime.  That is interesting, given
that 90% of the calculation is sin() and cos() in a tight loop.
One would expect the double-precision calculation to need more
iterations and run slower, and of course to place the nulls
slightly differently.

What am I missing?  Are double and float synonyms for the same
double-precision representation?  If so, how do I emulate the
single-precision behavior of the GPU?  Note that the outermost
loop of the calculation takes 6 days, though once I locate some
differences in a very large simulation field, I can restrict
the field and work faster.

Keith

(What?  Using scientific linux for Science? Oh, the horror...)

-- 
Keith Lofstrom          [log in to unmask]         Voice (503)-520-1993
