Mark Mishin

Molen - The Simulation


In the stop-signal paradigm, subjects perform a standard two-choice reaction task in which, occasionally and unpredictably, a stop-signal is presented requiring the inhibition of the response to the choice signal. The stop-signal paradigm has been successfully applied to assess the ability to inhibit under a wide range of experimental conditions and in various populations. The current study presents a set of evidence-based guidelines for using the stop-signal paradigm. The evidence was derived from a series of simulations aimed at (a) examining the effects of experimental design features on inhibition indices, and (b) testing the assumptions of the horse-race model that underlies the stop-signal paradigm. The simulations indicate that, under most conditions, the latency, but not variability, of response inhibition can be reliably estimated.
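The race assumption can be made concrete with a small simulation. The sketch below (all parameter values are assumed for illustration, not taken from the study) implements the independent horse-race model and recovers the stop-signal reaction time (SSRT) with the standard integration method:

```python
# Hypothetical sketch of the independent horse-race model (illustrative
# parameter values): a response on a stop trial escapes inhibition only if
# the go process finishes before the stop process (SSD + SSRT).
import numpy as np

rng = np.random.default_rng(1)
n_go, n_stop = 10_000, 10_000
ssd, true_ssrt = 250.0, 200.0            # ms; assumed values

go_rt = rng.normal(500, 100, n_go)       # finishing times of the go process
stop_go = rng.normal(500, 100, n_stop)   # go process on stop-signal trials

# Race outcome: probability of responding despite the stop signal
p_respond = np.mean(stop_go < ssd + true_ssrt)

# Integration method: SSRT = go-RT quantile at p(respond | stop) minus SSD
ssrt_hat = np.quantile(go_rt, p_respond) - ssd
print(f"p(respond|stop) = {p_respond:.2f}, estimated SSRT = {ssrt_hat:.0f} ms")
```

Because the quantile-based estimate targets only the finishing time of the stop process, the example also illustrates why the latency of inhibition is recoverable while its trial-to-trial variability is much harder to pin down.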







In this paper, we introduce the MOLEN ρμ-coded processor, which comprises hardwired and microcoded reconfigurable units. At the expense of three new instructions, the proposed mechanisms allow instructions, entire pieces of code, or their combination to execute in a reconfigurable manner. The reconfiguration of the hardware and the execution on the reconfigured hardware are performed by ρ-microcode (an extension of classical microcode with reconfiguration capabilities). We include fixed and pageable microcode hardware features to extend the flexibility and improve the performance. The scheme allows partial reconfiguration and includes caching mechanisms for infrequently used reconfiguration and execution microcode. Using simulations, we establish the performance potential of the proposed processor using the JPEG and MPEG-2 benchmarks, ALTERA APEX20K boards for the implementation, and a hardwired superscalar processor as the reference. After implementation, cycle-time estimation, and normalization, our simulations indicate that the execution cycles of the superscalar machine can be reduced by 30% for the JPEG benchmark and by 32% for the MPEG-2 benchmark using the proposed processor organization.
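To see how such cycle reductions come about, the following back-of-the-envelope sketch (all numbers assumed for illustration; this is not the paper's performance model) shows the Amdahl-style trade-off between accelerating a kernel on reconfigurable hardware and paying the reconfiguration overhead:

```python
# Illustrative accounting only; numbers are assumed, not taken from the paper.
def accelerated_cycles(total, kernel_frac, kernel_speedup, reconfig_cycles):
    """Total cycles after mapping one kernel onto the reconfigurable unit."""
    software = total * (1 - kernel_frac)              # part left on the core
    hardware = total * kernel_frac / kernel_speedup   # accelerated kernel
    return software + hardware + reconfig_cycles      # plus configuration cost

base = 1_000_000
new = accelerated_cycles(base, kernel_frac=0.45, kernel_speedup=4.0,
                         reconfig_cycles=12_000)
print(f"cycle reduction: {1 - new / base:.0%}")  # ~33%, near the 30-32% reported
```

The pageable and cached microcode features described above serve exactly to keep the `reconfig_cycles` term small for frequently reused configurations.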


The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate the single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise because it models the data with a parsimonious set of basis functions. Finally, the method is very fast, since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.


In the following we first explain the method in more detail: we formulate the model, outline parameter estimation, and indicate how the optimal number of basis functions and the optimal number of waveforms can be obtained by means of statistical model selection. Second, we report a simulation study on the characteristics of the method. Third, we illustrate the method with an analysis of empirical data obtained in a choice reaction time (CRT) experiment. Finally, we discuss advantages and limitations and provide some extensions.
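As a rough illustration of what such an alternating scheme can look like, the sketch below makes strong simplifying assumptions (a single underlying waveform, Gaussian basis functions, integer-sample latency shifts) and is not the authors' implementation:

```python
# Hypothetical sketch: alternating least-squares estimation of single-trial
# amplitude and latency for ONE waveform modeled as a linear combination of
# Gaussian basis functions. Assumptions: integer shifts, circular boundaries.
import numpy as np

rng = np.random.default_rng(0)
T, n_trials, n_basis = 200, 40, 8
t = np.arange(T)

# Parsimonious set of Gaussian basis functions spanning the epoch
centers = np.linspace(0, T, n_basis)
B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 12.0) ** 2)  # (T, n_basis)

# Simulate data: shifted, scaled copies of a smooth waveform plus noise
true_w = B @ rng.normal(size=n_basis)
true_amp = rng.uniform(0.5, 1.5, n_trials)
true_lat = rng.integers(-10, 11, n_trials)
X = np.stack([a * np.roll(true_w, d) for a, d in zip(true_amp, true_lat)])
X += 0.3 * rng.normal(size=X.shape)

# Alternate: (1) per-trial latency by grid search and amplitude in closed
# form; (2) waveform coefficients by linear least squares on realigned trials.
coef = np.linalg.lstsq(B, X.mean(axis=0), rcond=None)[0]
lags = np.arange(-15, 16)
for _ in range(10):
    w = B @ coef
    shifted = np.stack([np.roll(w, d) for d in lags])   # (n_lags, T)
    proj = shifted @ X.T                                # (n_lags, n_trials)
    best = np.argmax(np.abs(proj), axis=0)
    lat = lags[best]
    amp = proj[best, np.arange(n_trials)] / (w @ w)
    aligned = np.stack([np.roll(x, -d) / a for x, a, d in zip(X, amp, lat)])
    coef = np.linalg.lstsq(B, aligned.mean(axis=0), rcond=None)[0]

print("latency RMSE:", np.sqrt(np.mean((lat - true_lat) ** 2)))
```

Each update is a linear least squares problem or a closed-form projection, which is what makes this family of methods fast; model selection over the number of basis functions and waveforms would sit on top of such a loop.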


Aravena, S., Snellings, P., Tijms, J., & van der Molen, M. W. (2013). A lab-controlled simulation of a letter-speech sound binding deficit in dyslexia. Journal of Experimental Child Psychology, 115, 691-707.


In order to verify the 2DH hydrodynamics of XBeach when forced by directionally spread short waves, a simulation is set up to compare model results with field measurements. In this case, the DELILAH field experiment at Duck, North Carolina, is selected as a suitable test location. The modeled period is October 13th, 1990, a stormy day, between 16:00 and 17:00 hours.


Climatology of the surface fluxes in YOGA-2012, compared with observations: (left) Qnet and G and (right) H and E. Observations are depicted with black solid lines and gray shading to indicate uncertainty. Blue dashed and solid marked lines denote YOGA and YOGA-HR, respectively; dotted lines denote RACMO. Note that by construction Qnet is exactly equal in all simulations.


Results are presented of two large-eddy simulation (LES) runs of the entire year 2012 centered at the Cabauw observational supersite in the Netherlands. The LES is coupled to a regional weather model that provides the large-scale information. The simulations provide three-dimensional continuous time series of LES-generated turbulence and clouds, which can be compared in detail to the extensive observational dataset of Cabauw. The LES dataset is available from the authors on request.


Examples of such studies are the large-eddy simulations of the Barbados Oceanographic and Meteorological Experiment (BOMEX; Siebesma et al. 2003) based on measurements by Holland and Rasmusson (1973), simulations of the Beaufort and Arctic Seas Experiment (BASE; Curry et al. 1997; Kosović and Curry 2000), or the GEWEX Atmospheric Boundary Layer Studies (GABLS; Holtslag et al. 2013), which have focused on the stable boundary layer. Accordingly, many theories and parameterizations have been tested against such datasets (e.g., Siebesma and Cuijpers 1995; Cheng et al. 2002; Heus and Jonker 2008). However, downsides to such an approach have also been identified (e.g., Neggers et al. 2012). First, these specific cases might not form a representative dataset to base models on. Second, the cases might not include those situations that are actually most troublesome for the models.


Until recently, such long-duration datasets were unattainable in the realm of high-resolution large-eddy simulation (LES) modeling. The excessive costs of LES runs precluded these models from covering long periods of time. However, recent computational developments now allow the first attempts to bridge that gap. Although computational costs still impose severe limitations, we present two LES datasets that cover an entire year (2012) in a single, continuous simulation situated around the Cabauw supersite, based on the input of the regional weather model available in the KNMI parameterization test bed. One LES run was performed for its relatively large domain (25 km × 25 km), and the other for its relatively high resolution (25 m). Such LES studies possess the advantages of a numerical case study but mitigate the disadvantages relating to the representativeness of the simulations.


The exponential increase in computational resources is now taken for granted. However, this growth increasingly depends on the addition of extra processor cores rather than on improved processor speed. The consequence is that parallel computing (i.e., distributing the work among multiple processor cores) has become the standard. Consequently, the number of computational cells used in simulations has increased in order to fully utilize the extra available cores. Whereas this has naturally led to steady improvements in domain size and resolution, long-duration simulations have remained difficult because the time span cannot be computed in parallel.
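A toy example makes the asymmetry explicit. In the sketch below (a hypothetical one-dimensional diffusion solver, not an LES code), every spatial update within a time step is data-parallel, but the time loop itself is inherently sequential:

```python
# Minimal sketch of why domain size scales with core count while simulated
# time does not: step n+1 needs the full result of step n.
import numpy as np

nx, nt, alpha = 1_000_000, 100, 0.25
u = np.zeros(nx)
u[nx // 2] = 1.0
for _ in range(nt):  # inherently sequential time loop
    # This stencil update is data-parallel: chunks of the domain could be
    # assigned to different cores/GPUs with only halo exchange at the edges.
    u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
print(u.sum())  # diffusion conserves the total
```

Adding cores lets `nx` grow at constant wall-clock cost, but halving the wall-clock time of the `nt` loop requires a faster time-to-solution per step, which is where GPU acceleration pays off.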


GALES is able to employ multiple GPUs to perform large-domain runs (Schalkwijk et al. 2015), but can also significantly improve time-to-solution performance (i.e., the time one has to wait for a given simulation to finish) for smaller simulations that reside on one GPU only. As a result, GALES can perform much longer simulations than previously feasible. This allowed us to perform a continuous simulation of 1 yr of boundary layer evolution around the Cabauw observational supersite, the Netherlands. We will refer to this run as YOGA-2012 (year of GALES, 2012) or simply YOGA.


In the simulations performed here, the large-scale motions are accounted for by coupling the simulation to the Dutch operational weather and climate forecasting model Regional Atmospheric Climate Model, version 2.1 (RACMO). The atmospheric dynamics in RACMO are based on the High Resolution Limited Area Model (HIRLAM); the physical processes are adopted from the ECMWF model. RACMO is extensively described in van Meijgaard et al. (2008).
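Schematically, such a coupling amounts to adding host-model tendencies to the LES prognostic fields at every time step. The sketch below is a minimal illustration only (field names, shapes, and values are assumed; this is not GALES or RACMO code):

```python
# Schematic host-model coupling: a prescribed large-scale tendency from the
# regional model is added to an LES prognostic profile each time step.
import numpy as np

def step_theta(theta, les_tend, host_tend, dt):
    """One schematic step: LES-resolved tendency plus host-model forcing."""
    return theta + dt * (les_tend + host_tend)

nz, dt = 128, 10.0                           # assumed levels and time step (s)
theta = np.linspace(288.0, 310.0, nz)        # assumed initial profile (K)
host_tend = np.full(nz, -1.0 / 86400.0)      # assumed -1 K/day large-scale cooling
les_tend = np.zeros(nz)                      # placeholder for resolved dynamics
theta = step_theta(theta, les_tend, host_tend, dt)
```

Prescribed radiative tendencies, discussed below, enter the LES budget in the same additive way.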


Last, no GPU-resident full radiation scheme is available yet, and coupling to a CPU-based radiation scheme would slow the computation to the point where a year-long simulation becomes infeasible. Therefore, we apply the RACMO radiative forcings [RACMO employs the GCM version of the Rapid Radiative Transfer Model (RRTMG) radiation module] to GALES. An additional advantage of this method is that it improves the comparability between the regional model and GALES; the disadvantage is that the radiative tendencies applied to GALES are detached from its actual thermodynamic state. Whereas the latter effect is anticipated to be small in clear air, it might cause a bias in cloudy cases. For instance, RACMO's cloud-top radiative cooling tendency may be applied to cloud-free regions in GALES if RACMO's cloud top is higher. Likewise, if GALES forms clouds that are absent in RACMO, no consistent radiative cooling is applied to the cloud field. Since both effects are anticipated to lead to cloud breakup in GALES, they might result in a net bias toward reduced cloudiness.

