View Full Version : MindModeling@Home

28th February 2014, 08:08 PM
I would like to see this project added to the list. It's been running since 2008, but they never dropped the Beta from their website for some reason. I have been running it off and on for the last month, and it looks like more of the Crunching@EVGA team might pick it up.

MindModeling@Home (Beta) is a research project that uses volunteer computing for the advancement of cognitive science. The research focuses on utilizing computational cognitive process modeling to better understand the human mind. We need your help to improve on the scientific foundations that explain the mechanisms and processes that enable and moderate human performance and learning. Please join us in our efforts! MindModeling@home is not for profit.
MindModeling@Home (Beta) is based in Dayton, OH at the University of Dayton Research Institute (http://www.udri.udayton.edu) and Wright State University (http://www.wright.edu).

I think this one is definitely in the Biological and Medical category.

1st March 2014, 12:10 AM
Until we roll out a new interface that links our jobs with overarching project descriptions, here is a bit of info on our current projects (note: some have been explained previously in other threads):

N-2 Repetition
When humans switch among tasks, they are temporarily less proficient when returning to a task after a distraction. A person performing a task (A) will switch to another task (B) and then back to the first (A), and they perform at a lower level than they did originally, or even than the level they reached on task B. This is referred to as an N-2 repetition cost ('N' being any current task). In order to understand this phenomenon, our project aims to match human performance data with a cognitive model, varying things like the scale of inhibition, decay of inhibition over time, and base-level learning. The goal, as with all cognitive modeling, is to gain greater understanding of the inner workings of the human mind by reproducing human results artificially.
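The effect described above can be illustrated with a toy simulation. This is a hypothetical sketch, not the project's actual model: the function names and the flat 40 ms cost are invented for illustration. It simply compares the final-trial reaction time of an A-B-A sequence (an N-2 repetition, where inhibition of task A must be overcome) against a C-B-A sequence (an N-2 switch).

```python
import random

def simulate_trial_rts(sequence, base_rt=500, n2_cost=40, noise=10):
    """Toy model: a trial incurs an extra cost (in ms) when its task matches
    the task from two trials back but not the immediately preceding one
    (an A-B-A 'N-2 repetition')."""
    rts = []
    for i, task in enumerate(sequence):
        rt = base_rt + random.gauss(0, noise)
        if i >= 2 and task == sequence[i - 2] and task != sequence[i - 1]:
            rt += n2_cost  # residual inhibition of A must be overcome
        rts.append(rt)
    return rts

# Compare final-trial RTs: A-B-A (repetition) vs. C-B-A (switch).
random.seed(0)
aba = [simulate_trial_rts(list("ABA"))[-1] for _ in range(1000)]
cba = [simulate_trial_rts(list("CBA"))[-1] for _ in range(1000)]
cost = sum(aba) / len(aba) - sum(cba) / len(cba)
```

Fitting a cognitive model to data like this is what the parameter sweeps on volunteers' machines amount to: vary the inhibition and decay parameters and find the settings that best reproduce the human cost.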

The adaptive control of eye movements in reading
We are interested in understanding how people move their eyes while reading text, as a function of both their own individual cognitive constraints and reading goals. We do so in the framework of bounded optimal control -- we make the assumption that people are attempting to adjust their behavior to optimize or maximize some payoff under the joint constraints of their individual limitations and tasks.
Under this assumption, we ask two questions: first, what cognitive constraints are necessary for certain properties of reading behavior to be near-optimal? That is, why do people choose the eye movement strategies they do in the service of reading? And second, what are the sequences of information processing actions that underlie the behavioral strategies we see?

PFS model
This is a model that tries to detect whether there is a visual target (for example, a red X) present among a few non-targets (for example, green Xs and red Os). The model can move visual attention and its eyes around to decide whether the target is there or not, and then makes a decision. MindModeling allows us to search the parameter space to answer questions like 'how sure should the model be about the display before it moves its eyes?'
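The trade-off the parameter search explores can be sketched with a toy evidence-accumulation model. This is a hedged illustration, not the actual PFS model: `run_trial`, `sweep`, and all parameter values here are invented. A higher decision threshold makes the model "more sure" before responding, buying accuracy at the cost of time.

```python
import random

def run_trial(target_present, threshold, drift=0.1, noise=0.3, max_steps=1000):
    """Toy trial: evidence drifts toward the 'present' bound when a target
    is on the display and toward 'absent' otherwise. A higher threshold
    means gathering more evidence (looking longer) before deciding."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += (drift if target_present else -drift) + random.gauss(0, noise)
        if abs(evidence) >= threshold:
            return evidence > 0, step
    return evidence > 0, max_steps  # forced guess if no bound is reached

def sweep(thresholds, trials=500):
    """Grid-search the threshold, the way a volunteer grid sweeps a
    parameter space: {threshold: (accuracy, mean decision time in steps)}."""
    results = {}
    for th in thresholds:
        correct = total_steps = 0
        for i in range(trials):
            present = (i % 2 == 0)  # half the displays contain the target
            said_present, steps = run_trial(present, th)
            correct += (said_present == present)
            total_steps += steps
        results[th] = (correct / trials, total_steps / trials)
    return results

random.seed(1)
table = sweep([0.5, 2.0])
```

Each (parameter setting, batch of trials) pair is an independent job, which is why this kind of search maps so naturally onto volunteer computing.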

Integrated Learning Models
Integrated Learning Models (ILM) is a computational framework integrating multiple learning and decision mechanisms that are commonly found in the psychological literature. At the core of the framework are Associative Learning, Reinforcement Learning, and Chunk Learning mechanisms.

Models of successive and simultaneous tasks
Models of successive and simultaneous vigilance tasks, along with obtaining and correlating measures of cerebral blood flow.

I will also add that they appear to have gone with the Credit New system with the last server update. With the way they roll out their work, this creates a very wide difference in credit at the beginning vs. at the end, when the credits adjust themselves. This is a relatively new change at the project and is already beginning to cause some disgruntled attitudes.


I am still researching more details before presenting my official opinion. I just thought this info may help others in their decision.

1st March 2014, 01:59 AM
Hey Coleslaw, thanks for chiming in and providing a link that I forgot.

Yeah, I'm not really thrilled with the Credit New part of the project. However, I am going to run it for a bit. I have a goal I want to reach before I decide if I am going to keep it as a full-time project or just something I will do every now and then.

But I am trying to get my team to join me in running it for a bit so we can make a run up the team ranks just for fun.

5th March 2014, 04:51 PM
I emailed the admin Tom over at MM, and here are the responses I got to a few questions.

Tom, MindModeling has recently been brought up in the DC-Vault forums as a project that could possibly be added. I had a few questions before giving my opinions in the forum, so that they could add more to the discussion.


1. Is the project truly BETA, or is it just that some of the apps are somehow justified as BETA? Meaning, are you going to go back over the same work with a production version, or do you use the term BETA loosely because it is an unproven science? If that is the case, why bother saying BETA when a lot of science is technically Alpha or BETA by all rights?

The Beta term reflects more the fact that the infrastructure that runs all the science experiments is changing and under development. The science conducted on the site is NOT beta -- the results are going directly to scientific journals and conference proceedings. We have been internally debating when to officially state that our system is 'production', but I can assure you the science is already 'in production' status.

2. Are you interested in the extra potential processing power that might be added with the extra attention from the Vault? Some projects are content with the resources they have and don't want to push getting more if they can't supply the work or their hardware can't sustain the extra demand.

We currently have a very large backlog of jobs from which new workunits are generated. Extra computational resources and processing power are desired and would be GREATLY appreciated.

3. In regards to question 2, do you have periods of time that you don't have work to send? I know a few years back, there was no work at all for long periods. However, the last year has seemed somewhat consistent from what little I have checked in.

A few years back we completely restructured and migrated our system to new servers in order to accommodate the growing parallel processing requirements of the MindModeling project. The number of modelers using the system and the number of simulations requested of the system have been constantly increasing for the last couple of years. Our work does fluctuate periodically, but on average there is work in the system most of the time.

4. What will it take to get the project out of BETA status?

We have been asking ourselves the same question...

Thanks for your message and support of MindModeling research.
Tom and Jack

6th March 2014, 08:13 AM
After reading that, it appears to me that MM is not a beta project at all. Beta only applies to them when they come up with a new application to run for different aspects of their research, which is true for all the long-term projects that are already on the Vault's list.

The project itself has been around, and running applications, for years. They should really stop overthinking the beta aspect and drop it from the project name.

6th March 2014, 12:58 PM
Khali, I think you will find multiple project admins using the logic that Tom did. Some look at the evolving project as justifying the Beta/Alpha title. I find that silly, because I don't know of any website/company/project/etc. that isn't always trying to evolve. So that, in essence, would be a constant state of Beta. I understand maybe the first year, but eventually you really need to drop that crutch and move on to production status, especially if you are putting out production work units. But that is a completely different issue to be discussed at those projects. I only brought it up in case it mattered to anyone else for inclusion. I'm one of those who prefer that Vault projects not be in an Alpha/Beta status, but I have found that many projects take refuge in it to justify their failings or shortcomings. It is always easier to tell people "remember, this project is still Beta..." than it is to actually own up to the failing or problem at hand. That takes away any accountability.

6th March 2014, 07:20 PM
MM is also working on fixing their Credit New problems. Here is a link to the thread already discussing it: http://mindmodeling.org/forum_thread.php?id=570#2791

17th March 2014, 05:57 PM
I sent a PM to Tom over at MindModeling with some ideas to improve the project, namely badges and dropping the Beta tag from the project name. Here is his reply.

Hi Khali,

Thanks for both suggestions! Your participation and enthusiasm for our project has been incredible! Maybe we need a special "Ideas" or "Participation" badge for people like you who contribute more than just their processing power to the project ;)

In any case, I totally agree that a badge system is sorely needed for our project. BOINC started natively supporting badges in their source tree several months ago, and I really latched onto the idea. It's definitely on my radar, and now especially so, if people are indeed limiting their resources to projects that have a badge system.

More near-term, however, is to push out a new web interface that more accurately describes our project, the current running work, and the results of volunteer contributions. I suspect with the new interface we'll drop the "Beta" tag, and maybe only assign the beta status to new apps like you suggested.

Again, thanks for taking the time to contribute your ideas to the project; it definitely moves us in the right direction :)


22nd March 2014, 02:08 PM
Crunching@EVGA has been running this as a team for the last several weeks, with heavy participation in the last week. We found a few issues with Linux and some missing libraries our users did not have for 32-bit applications. Once those were in place, MM ran fine on Linux. There is one set of applications with an issue, ACT-R. The admins/devs are aware of it and are working on a fix.

Other than the Beta tag, which is going to be dropped from the project as a whole and only be used for new applications, I think MM is ready for the Vault. Anyone else have anything to report on MM?

30th March 2014, 01:36 PM
A heads up for everyone: the new Native Pypy WUs are mostly erroring out or becoming "stuck". Looks like this project might need a bit more maturing...

30th March 2014, 06:36 PM
New applications with issues are nothing unusual for any project. I just went through a bunch of GPUGrid tasks that got stuck and errored out every time I suspended the project to play a game. I had a similar issue with Rosetta not long ago. Should those two projects be removed from the Vault until they mature a bit?

31st March 2014, 02:01 PM
Mainly wanted to give people a heads up so they could check their machines for bad WUs. But really, do we want to add a new project that is having multiple WU issues or wait until things get stabilized a bit?

31st March 2014, 07:29 PM
To be fair, POEM has had worse problems getting their GPU app to work than MM has had with their apps. GPUGrid has also had several ups and downs over the last several months. So I am not seeing why the recent oops justifies the concern. Every project, as mentioned above, has had a few bad batches. However, I would agree that holding off for a while longer would be good, as MM has discussed making changes to their site which could impact the stability of the project for a bit.


I would also like to point out that one of the very popular and long-standing projects is having much worse problems than MM right now. Have a read at this thread. http://climateapps2.oerc.ox.ac.uk/cpdnboinc/forum_thread.php?id=7735&nowrap=true#48545 Major problems exporting stats for over a month and lack of work. The only good news is that they are at least communicating with the volunteers.

1st April 2014, 12:33 PM
My thought is that we shouldn't add a NEW project with WU problems. Wait until it's running smoothly.

Have a read at this thread. http://climateapps2.oerc.ox.ac.uk/cpdnboinc/forum_thread.php?id=7735&nowrap=true#48545 Major problems exporting stats for over a month and lack of work. The only good news is that they are at least communicating with the volunteers.
Sounds like the credit problem at climateprediction is fixed and credits are now up to date. It's hard to name a project that hasn't had some problems, but as you suggest in your other thread: NEW projects should be allowed time to mature before addition. I can't remember any past projects that were added while they were having problems.

1st April 2014, 12:53 PM
CPDN's stats aren't correct on BOINCStats or Free-DC yet. They haven't been for probably a month. Looks to me like it is still a problem. My points on the account page at CPDN look correct, but then again I hadn't monitored the points enough to really verify their accuracy. I only found out about the issue through my curiosity about why my point progression hadn't changed on BOINCStats. But that is really something to go over in another thread. My point here was that we still have major "reliable" projects with a few hiccups. I will concede projects should not be added while having major issues. But then again, who knows. It might be resolved by the time we make a decision on adding anyway. :D

12th December 2014, 12:56 PM
Just an update, Tom has stopped replying to us about the stats export issue. He also has not responded to a public forum request. Until he can get the stats issue resolved, we will not be adding this project.

23rd January 2015, 11:58 AM
Latest update: Tom has moved on to greener pastures, and a few new members have been added to the MM team. That explains Tom's silence. We have been in touch with the new admin over there. However, it seems the majority of our admins here feel it may be best to hold off on adding MM until a later time.

Reasons for holding off:
1. Wait to see how well the new staff communicate and work on issues.
2. The project runs out of work far too quickly and often.
3. The stats page keeps losing access permissions (hopefully this was recently resolved).

22nd May 2015, 03:10 PM
Staff replies are so-so. The project seems stable and has a new facelift. Work unit availability has not changed much.