Social Sciences

For artists, success really does depend on who, not what, you know

Research finds that early access to prestige galleries turbocharges later success. Samantha Page reports. Original Link

Slot machines flash and ring to prompt more bets

Canadian researchers find casino noises make gamblers less risk averse. Samantha Page reports. Original Link

Men and women think leaders should have “masculine” traits

Characteristics such as kindness are nice, but are not considered essential in business. Samantha Page reports. Original Link

Football shock! Referees turn out to be clever people

Study finds soccer officials look at the big picture rather than the details. Andrew Patterson reports. Original Link

Girls and boys perform the same in STEM subjects

A new study puts paid to old attitudes to gender aptitude for science. Rebekah Harry reports. Original Link

You are what you meat

Two studies find links between social status, psychological make-up, and food choices. Andrew Masterson reports. Original Link

Even scientists jump to conclusions – and that’s a problem

The latest in a raft of experiments suggests a predicted “train wreck” in social sciences is under way. Paul Biegler reports. Original Link

Group loyalty is inherent, not learned

The simple status of belonging is enough to trigger defence and aggression. Andrew Patterson reports. Original Link

Dateless and dreaming: people on online romance sites try to punch above their weight

A study reveals that for single people, hope springs eternal – and sometimes pays off. Natalie Parletta reports. Original Link

Extremism and curdled milk are more similar than you might imagine

Physicists find that internet hate groups can be described using a theory that predicts the formation of gels. Phil Dooley reports. Original Link

Science alone can’t save us, say scientists

Physical scientists need the social sciences if they’re going to solve some of the world’s biggest problems, according to two leading Australian researchers.

Speaking at the Australian launch of a new journal that aims to integrate social sciences in research, Veena Sahajwalla of the University of NSW in Sydney and Rebekah Brown from the Monash Sustainable Development Institute in Melbourne argued that economics, social sciences and physical sciences should be combined to achieve real, practical outcomes.

“We need to shift the way we think, act and organise, which is only possible by bringing together expertise from across the physical sciences, social sciences, humanities and the arts – which are historically significant divides in universities,” said Brown.

Brown is studying whether green infrastructure can provide clean water, flood protection and sanitation in 24 slums across Indonesia and Fiji.

“The project works at the interface of green infrastructure and preventative medicine and spans medicine, science, engineering, art design & architecture, and business & economics,” she says.

“We’re trying to find out whether green infrastructure can have a material impact on people’s gastro-intestinal health.”

Veena Sahajwalla, the inventor of ‘green steel’, said she conducts all her research with the end user in mind. Her green steel technology uses waste tyres to replace some of the coal in steel-making, and offers an affordable and sustainable product to manufacturing companies.

“Manufacturing uses a lot of energy, so if you can find a solution that reduces the carbon footprint, and does it in a way that’s affordable, then it makes green manufacturing feasible and affordable in any part of the world,” said Sahajwalla.

Her research is now being used in Bangladesh to reduce coal consumption and lower the carbon footprint.

At the journal launch, held at the University of Melbourne on 17 July, Sahajwalla and Brown said they design their research with consideration of human behaviour in mind. The new journal, Nature Sustainability, will publish cross-disciplinary research like theirs.

“It’s great that we now have a vehicle that allows us to publish this kind of work,” says Sahajwalla.

“Nature Sustainability also gives us the chance to publish in a journal that has a higher standard, which I think is very important.”

“The social sciences are part of the research design in Nature Sustainability,” says Monica Contestabile, the chief editor of the new journal, which began publication in January of this year. The journal joins dozens of others in various fields published under the aegis of Nature, one of the world’s most prestigious scientific journals.

“As a publisher we didn’t want to lag behind. Nature has always been interested in the interface between basic science and science that can more directly impact society.”

Original Link

New evidence shows Stanford Prison Experiment conclusions “untenable”

One of the most famous psychology experiments of all time – the Stanford Prison Experiment – was deeply flawed and its findings should be rejected.

That’s the conclusion reached by researchers led by Alex Haslam from the University of Queensland, in a paper currently awaiting peer review on the preprint site PsyArXiv.

The Stanford Prison Experiment (SPE) has long been a standard inclusion in psychology textbooks, was the subject of films and documentaries, and, perhaps most significantly, is regularly cited as a framework through which to understand brutality and conflict.

All of this is remarkable, if only for the reason that the experiment was never formally written up and published in a peer-reviewed journal. Now, however, Haslam and his colleagues have effectively torpedoed its veracity, thanks to an analysis of recently released tape-recordings of conversations between the experimenters and the participants.

The SPE took place in August 1971 when a team led by Philip Zimbardo, a psychologist at Stanford University in the US, recruited 24 young males to take part in a study of prison life. The volunteers were randomly assigned roles as either guards or prisoners, and then – the story goes – left to their own devices for a fortnight in a custom-built prison in the basement of a building on campus.

Anyone familiar with psych 101 knows what happened next: the guards became increasingly brutal and violent towards the prisoners – who themselves became increasingly distressed – and the whole shebang had to be shut down after just six days.

The conclusion Zimbardo drew from the experience was sensational – and, over subsequent years, repeatedly sensationalised. To quote Haslam’s summation: “when placed in toxic situations, good people will inevitably turn bad”.

Having taken on the veneer of incontestable truth, the conclusion from the SPE has ever since been used repeatedly to explain brutality, mainly because it reduces the culpability of the perpetrators, and their leaders. If an environment is sufficiently toxic, the argument runs, then oppression and violence will emerge “naturally”.

In one important sense, the argument proceeds, the violent guard, the massacre participant, or the death squad member is not wholly responsible for his or her actions, but is instead compelled by the exigencies of role and situation.

But the new evidence, especially when combined with existing first-person accounts of the events that took place in the Stanford basement, shows that the SPE does not, and cannot, support such an idea.

“We believe that the new evidence … makes it abundantly clear that our understanding of brutality in general and the specific narrative of the SPE must be rewritten,” Haslam and colleagues conclude.

“The totality of evidence indicates that far from slipping naturally into their assigned roles, some Guards actively resisted and were consequently subjected to intense efforts from the Experimenters to persuade them to conform to group norms for the purpose of achieving a shared and admirable group goal.”

In other words, taped conversations give the lie to the assertion that the volunteers assigned the role of guard somehow inevitably morphed into brutes.

Instead, they were encouraged by Zimbardo and his team to behave badly. Haslam and colleagues call this process “identity leadership”. They suggest that the volunteer guards were misled about their purpose, and in fact thought they were helping the experimenters research prisoner behaviour.

Despite this, however, things didn’t always go to plan. Citing earlier research, Haslam and colleagues report that “it is clear that many participants did not conform to role. Many Guards refused to be brutal and many prisoners continued to resist to the end of the study.”

Among other sources, they quote Carlo Prescott, a real-life parolee enlisted by Zimbardo to act as an advisor. In 2005 Prescott talked to The Stanford Daily newspaper, denouncing stories concerning the SPE as lies.

Zimbardo and colleagues, he said, told the guards to inflict particular punishments and provided the implements (“bags, chains, buckets”) to enable them. Prescott also noted that the experimenters had decided on the study’s key findings before it commenced, and, therefore, the volunteer guards thought brutality was not a chance occurrence but a ground rule.

The primary piece of new evidence unearthed by Haslam’s team is a tape-recording made while the experiment was underway, between researcher David Jaffe, playing the role of prison warden, and a volunteer guard, John Mark.

“The evidence here is not at all oblique or opaque,” the researchers write. “The tape, we think, speaks largely for itself.”

Backing up this claim, the paper contains a full transcript of the conversation. Haslam also released the recording on Twitter, asking people to respond to it by deciding whether it constituted evidence of a guard conforming naturally to a role, an experimenter indulging in identity leadership, or neither.

The recording makes it very clear that Mark is extremely uncomfortable with what is expected of him. “I’m not too tough,” he says, and indicates he would much rather let uncooperative prisoners “cool off” than act against them.

Jaffe counters by instructing him to be “tough”, and implying that by not doing so he is letting down the experimenters.

“We noticed this morning that you weren’t really lending a hand,” says the warden, “but we really want to get you active and involved because the Guards have to know that every Guard is going to be what we call a tough Guard.”

There is more – much more – in the same vein.

“The meeting between Warden and Guard provides clear evidence not just that some Guards failed to conform ‘naturally’ to their assigned roles but that some actively resisted pressure to act harshly toward the Prisoners,” Haslam and colleagues write.

Twitter users, for what it’s worth, agreed: 87% of the 228 people who responded to the researchers’ tweet thought the tape revealed that identity leadership, rather than natural behaviour, was in play.

Haslam and colleagues say the new evidence shows “that the role account of brutality in the SPE is untenable”.

The notion that people placed in toxic situations naturally become brutal – and the free passes the argument has provided to both perpetrators and their commanders – needs to be ditched.

“Brutality does not inhere simply in the nature of the perpetrators and neither does it inhere in the demands of the situation,” the researchers conclude.

“An understanding of how brutality is produced additionally requires an analysis of leadership, of how leaders persuade, and of how they are able to portray toxic behaviour as worthy action in defence of a noble collective cause.”

Original Link

Gaming or gambling: study shows almost half of loot boxes in video games constitute gambling

The Australian Senate today passed a motion to investigate whether purchasable random rewards in video games (known colloquially as loot boxes) constitute a form of gambling and whether they are appropriate for younger players.

Our recent paper, which was cited in the senate motion, explores exactly these questions.

We found that the loot boxes in almost half (45%) of the 22 games we analysed met the criteria to be considered psychologically similar to gambling, even though they are rated as appropriate for adolescent players under the age of consent for gambling.

What is a loot box?

Loot boxes are digital containers of randomised rewards, and are available in a number of video games. The box may contain rewards ranging from cosmetic items which alter the appearance of in-game characters to functional items that increase the player’s power in some way (for example a gun that fires faster or does more damage).

In our research, we sought to answer two questions: are loot boxes like gambling and, if so, what should be done about it?

First up, we want to clarify that video games are not evil. Games companies are not evil. Making money from video games is not evil. And playing video games with loot boxes is unlikely to result in young people flocking in great numbers to casinos.

However, simultaneously, it may also be true that loot boxes represent a troubling and potentially inappropriate monetisation strategy, with the potential to cause short and long-term harm to some players. Our intent is to educate readers about loot box mechanisms, and promote a reasoned, evidence-based discussion about ethical practice in video games.

Loot box rewards may be highly desirable or valuable (for example, a particularly valuable cosmetic item or very powerful weapon), or virtually useless and undesirable (items referred to as “vendor trash”). Most importantly, the contents of the box are determined by chance. Some (but not all) loot boxes are purchasable for real money. In some cases, items earned from a loot box can also be “cashed out” for real world money.

The gambling problem

The problem is that spending real money on a chance outcome that results in some people “winning” and others “losing” is fundamental to gambling activities. Thus, we analysed the loot box features in 22 console and PC games released in 2016 and 2017, with a view to understanding how psychologically similar they were to gambling.

We used five criteria to distinguish gambling from other risk-taking activities. These have been developed by Nottingham Trent University psychologist Mark Griffiths in his work on behavioural addictions and gambling disorders. To be considered psychologically similar to gambling, loot boxes must involve:

  • an exchange of money or valuable goods
  • an unknown future event that determines the exchange
  • chance at least partly determining the outcome
  • the ability to avoid losses by not participating
  • winners gaining at the sole expense of losers.

We took a reasonably strict interpretation of the final criterion, assuming that people only “won” if they gained some form of in-game competitive advantage (for example more powerful weapons). Arguably, this approach ignores the subjective value that might be created by the scarcity of, or player preference for, certain cosmetic items. However, it appeared to us to most closely resemble Griffiths’ intent.

Loot boxes in just under half of the games (45%) met all five of Griffiths’ criteria and, thus, could be considered psychologically akin to gambling.
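
To make the checklist concrete, here is a minimal sketch in Python of how a criteria-based classification of this kind might look. The field names, example values and the all-five rule are our illustration of the approach described above, not code or data from the study.

```python
# Illustrative sketch only: a game's loot box is flagged as psychologically
# akin to gambling when it meets all five of Griffiths' criteria.
from dataclasses import dataclass, astuple

@dataclass
class LootBox:
    money_exchanged: bool                 # money or valuable goods change hands
    outcome_unknown: bool                 # an unknown future event determines the exchange
    chance_determines_outcome: bool       # chance at least partly decides what you get
    losses_avoidable: bool                # not participating avoids incurring losses
    winners_gain_at_losers_expense: bool  # e.g. rewards confer a competitive advantage

def gambling_like(box: LootBox) -> bool:
    # Strict reading: every one of the five criteria must hold.
    return all(astuple(box))

cosmetic_only_box = LootBox(True, True, True, True, False)  # no competitive advantage
pay_to_win_box = LootBox(True, True, True, True, True)

print(gambling_like(cosmetic_only_box))  # False: fails the final criterion
print(gambling_like(pay_to_win_box))     # True: meets all five criteria
```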

All of the loot boxes operated on a variable ratio reinforcement schedule – a technical term for a reward given to a person on average every so many times they engage in a particular behaviour. This type of reward schedule results in people quickly learning new behaviours (for example buying loot boxes) and repeating them often in the hope of receiving a reward. The strategy is effective because the next time a box is opened it might be the “big win”.
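
To make that mechanism concrete, here is a toy simulation of a variable ratio schedule. The payout probability, opening counts and function name are invented for illustration; the point is simply that rewards arrive on average every so many openings, but never predictably.

```python
# Toy model of a variable ratio reinforcement schedule: each loot-box opening
# pays out a rare reward with fixed probability, so the "big win" arrives on
# average every `ratio` openings, but its exact timing is unpredictable.
import random

def simulate_openings(n_openings: int, ratio: int = 20, seed: int = 1) -> list[int]:
    """Return the opening numbers on which the rare reward was granted."""
    rng = random.Random(seed)
    return [i for i in range(1, n_openings + 1) if rng.random() < 1 / ratio]

wins = simulate_openings(200)
gaps = [b - a for a, b in zip([0] + wins[:-1], wins)]
print("rewards granted on openings:", wins)
print("average gap between rewards:", sum(gaps) / len(gaps))
```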

Perhaps most concerning was the fact that at least five of the games had mechanisms available to on-sell virtual items, allowing players to cash out their winnings (though four of these five had terms and conditions explicitly prohibiting this).

The ability to cash out winnings is something that many consider a legal requirement for an activity to be considered gambling. Although the legality of loot boxes is a question for individual regulators and governments, exposure to mechanisms which closely mimic gambling in a psychological sense is concerning to us, especially since all of the games we examined were rated as appropriate for those under the age of consent for gambling.

The short and long-term consequences of engaging with these mechanisms are unknown. Plausibly, short-term consequences may include overspending on loot boxes. The potential for long-term consequences also concerns us because males (a particularly large group within gamers) exposed to gambling when young are particularly at risk of developing problematic gaming behaviours.

What to do about it

There is cause for hope. Electronic Arts (one of the largest game studios in the world) has recently announced the removal of loot boxes from upcoming titles. This suggests the games industry is taking consumer and expert feedback seriously, and may take steps to self-regulate.

In our view, this is the optimal solution, given the diverse policy landscapes across the countries in which video games are sold.

Where industry is not willing to self-regulate, and loot boxes are most similar to gambling, regulators may need to consider additional steps, although this should be undertaken selectively. Belgium and the Netherlands have declared at least some loot boxes to be illegal, while the US and UK have decided that they are not a form of gambling. As noted above, on 28 June the Australian Senate unanimously passed a motion to refer an inquiry into the legality of loot boxes in video games to the Environment and Communications References Committee.

Most importantly, we recommend that loot box mechanics should be added to content warnings to give users and parents the information they need to properly assess whether particular games are appropriate for themselves or their children. Ensuring that users can make well informed decisions about the appropriateness of content remains one of the strongest consumer defences.

We hope that this work will form the basis for a well-reasoned, evidence-based policy discussion about ethical and sustainable practices in video games. Our intent is not to stigmatise games or gamers, but to spark a discussion about what mechanisms are and are not appropriate for particular audiences, games and the industry more broadly.

This article was originally published on The Conversation and is republished here with permission. Read the original article.

Original Link

Football violence driven by tribal loyalties

The group stage of the 2018 FIFA World Cup is drawing to a close and thus far, reports suggest the atmosphere in and around match venues has been largely peaceful. However, with many of the relative minnows of world football cast aside, high-stakes grudge matches beckon in the forthcoming knock-out rounds, all to be played out on a politically volatile stage.

As a result, concerns remain over the looming threat of hooliganism, a topic which has been widely discussed since Russia’s contentious bid for World Cup hosting duties was made. These fears may be compounded by a recent study which connects the tribal origins of football violence with religious and political extremism.

Football violence has a long, dark history, predating even the inaugural FIFA World Cup in 1930. The phenomenon is thought to have its origins in thirteenth century England, when civil wars between neighbouring settlements were disguised as leisure activities involving an object in the shape of a ball. As early as 1314, the Mayor of London proclaimed thus:

“And whereas there is a great uproar in the City through certain tumults arising from the striking of great footballs in the field of the public – from which many evils perchance may arise – which may God forbid – we do command and do forbid, on the King’s behalf, upon pain of imprisonment, that such games shall not be practised henceforth within this city.”

Unfortunately, this and other similar appeals that followed over the centuries couldn’t loosen the grip of violence on a chaotic pastime. While regulation of the sport in the nineteenth century ultimately brought about a semblance of order on the pitch – first, by helpfully defining the pitch itself – disorder off it has continued to make headlines, reaching a peak in the 1970s and 1980s, and re-emerging in recent years.

Now, research from the University of Oxford may shed new light on what motivates supporter scraps. Earlier studies have considered hooliganism “an expression of social maladjustment”, stemming from exposure to domestic violence or dysfunctional behaviour in childhood.

However, the new study, led by anthropologist Martha Newson, suggests off-field flare-ups are more socially driven, citing the desire to reinforce bonds with, and defend or protect, allied fans, adding that these factors may also drive other forms of extremist behaviour.

The research, published in the journal Evolution & Human Behaviour, canvassed 465 fans, among them known hooligans, from the football hotbed of Brazil. The findings show that members of super-fan groups – the most extreme of which are often known and feared as ‘ultras’ – tend to exhibit their capacity for violence only within the supporter community, and do not carry this trait into their day-to-day lives.

“Our study shows that hooliganism is not a random behaviour,” explains Newson.

“Members of hooligan groups are not necessarily dysfunctional people outside of the football community; violent behaviour is almost entirely focused on those regarded as a threat – usually rival fans or sometimes the police.”

Newson says passion for football within a group of fans “instantly ups the ante”, because violence can result from the commitment fans have to their respective cliques. This is identified in the study as an example of a psychological construct known as “identity fusion”, which in extreme cases has been found to inspire acts of self-sacrifice for the sake of a group.

Another factor is the range of environments supporter factions find themselves in, such as situations where they can be subject to abuse from rival fans. As a result, superfan groups are considered by Newson to be “even more likely to be ‘on guard’ and battle-ready”.

While the study surveyed only Brazilian football fans, the authors believe the findings can be applied not only to supporters elsewhere and to other strains of sports-related violence, but also to religious and political extremists.

“The psychology underlying the fighting groups [is] essential for groups to succeed against each other for resources like food, territory and mates, and we see a legacy of this tribal psychology in modern fandom,” Newson adds.

The research does not suggest that cracking down on extreme supporter factions will necessarily be effective in curbing violence. It also indicates that hard-line policing, such as the use of tear gas or military force, is likely to be counterproductive, potentially sparking further violence by driving the most committed fans to intervene physically in defence of their fellow supporters.

The findings reinforce the research team’s previous work, which aimed to understand the role of identity fusion in extreme behaviour through football violence. The authors believe that there is potential for clubs to take the intense social cohesion that drives off-field violence and channel it in more positive directions.

“As with all identity fusion-driven behaviours, the violence comes from a positive desire to ‘protect’ the group,” says co-author Harvey Whitehouse.

“Understanding this might help us to tap in to this social bonding and use it for good. For example, we already see groups of fans setting up food banks or crowd-funding pages for chronically ill fans they don’t even know.

“We hope this study spurs an interest in reducing inter-group conflict through a deeper understanding of both the psychological and situational factors that drive it.”

Original Link

How to write a hit song

If you want to write a hit song then make sure it is happy, hits a party vibe, is not likely to relax listeners, and preferably is sung by a woman.

There. Simple. That’s the take-home message emanating from a study published in the journal Royal Society Open Science, in which a group of mathematicians analyse the success or otherwise of half a million songs released in the UK between 1985 and 2015.

Using a machine-learning algorithm known as random forests, and controlling for potentially complicating factors such as the fame of an artist, Myra Interiano and colleagues from the University of California Irvine in the US plumbed the depths of a couple of huge community-generated music databases and looked for trends developing over decades.

They also looked for successful songs and analysed their structures. Success, in this case, was, the researchers note, a “crude measure” that required only that the piece of music made it onto the UK official Top 100 Singles Chart in any year.

All of the songs in the database were classified and rated according to 12 variables, which included timbre, tonality, danceability, the gender of the singer, and whether the music could be described as conveying a mood that was relaxed, aggressive, sad, happy or likely to go down well at a party.
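
As an illustration of how such an analysis can be set up, the sketch below trains a random forest on synthetic song data. The feature names, the synthetic “charted” rule and the scikit-learn workflow are our assumptions for demonstration purposes; they are not the study’s code or data.

```python
# Minimal sketch (not the study's actual pipeline): predicting chart success
# from song-level features with a random forest, using synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_songs = 5000

# Hypothetical per-song ratings on a 0-1 scale, loosely mirroring the kinds of
# variables described above (mood, danceability, singer gender, and so on).
features = np.column_stack([
    rng.random(n_songs),           # happiness
    rng.random(n_songs),           # sadness
    rng.random(n_songs),           # danceability
    rng.random(n_songs),           # relaxedness
    rng.random(n_songs),           # party-like mood
    rng.integers(0, 2, n_songs),   # female lead vocal (1 = yes)
])

# Synthetic "charted" label: happier, more danceable, less relaxed songs with
# female vocals are made slightly more likely to chart, echoing the findings.
logit = 2 * features[:, 0] + 1.5 * features[:, 2] - features[:, 3] + 0.5 * features[:, 5] - 2.5
charted = (rng.random(n_songs) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, charted, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)
```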

The overall results indicate that during the past three decades songwriters have become a rather morose and introspective bunch.

The researchers found that “‘happiness’ and ‘brightness’ experienced a downward trend, while ‘sadness’ increased in the last 30 or so years”.

This was consistent with an earlier study that found the use of “positive emotion” in lyrics had taken a dive, with recent songs more likely to focus on the self than on the concept of partnership or friends. The use of aggressive words, such as “hate” and “kill”, was also on the up.

Interiano and her colleagues also determined that the qualities of “relaxedness” and “danceability” had increased during the study period, while the percentage of male voices had dropped.

This last was particularly so when it came to analysing successful songs, which were “characterised by a larger percentage of female artists compared to all songs”.

Interiano and colleagues also determined – perhaps a little predictably – that charting songs are quite rare. Only 4% of the numbers listed on the databases enjoyed the achievement.

The researchers noted that the numbers that did crack the dizzying heights of the Top 100 singles chart were often “systematically” different from their less successful competitors. They summarised these differences in half a dozen easy-to-remember points:

1. Successful songs are happier than average songs.

2. Successful songs have a brighter timbre than average songs.

3. Successful songs are less sad than average songs.

4. Successful songs are more party-like than average songs.

5. Successful songs are less relaxed than average songs.

6. Successful songs are more danceable than average songs.

In the first five categories, the study finds, the characteristics of charting songs run counter to the tendency towards sadness and negativity found in the remaining 96% of the tracks analysed. The researchers tentatively suggest that this could be partly because of the success of hit-packed compilation albums, or because older music consumers are introducing a level of “inertia” in the success of popular music.

The analysis also found that successful songs were likely to be less aggressive than run-of-the-mill numbers.

The fact that more successful songs were sung by women, the researchers note, was interesting in light of current debates surrounding the status of women in the music industry. These range from discussions of the sexualisation of female singers to the fact that female-led acts are under-represented at major festivals.

Original Link

Sorry Mr Spock: science and emotion are not only compatible, they’re inseparable

Here’s a view of science you might recognise as common, or at least see promoted:

Science is a purely objective pursuit. Words like “fact”, “proof”, “evidence” and “natural law” are the marks of the scientific method. This approach has no place for emotion, or any subjective aspects. Save it for the arts!

Let’s call this the “Mr Spock view” — since it is framed around the idea of a scientist as a dispassionate, hyper-rational observer of nature. (Mr Spock is a Star Trek character who famously prioritised logic above emotion.)

Almost every part of that view is wrong.

Star Trek’s Mr Spock was devoted to logic in place of emotion. Credit: drbeachvacation/flickr

Straight off, the Mr Spock view is wildly out of step with the past 30 or so years of research in cognitive neuroscience, which has uncovered a very tight connection between reason and emotion.

Far from being hyper-rational, people who do not experience emotional reactions – known as anhedonics – struggle to make rational decisions, to prioritise, and to follow through on tasks.

Neuroscientist Marcel Kinsbourne describes the integration of reason and emotion in terms of an “activity cycle” – a coordinated effort to motivate and sustain an appropriate course of action and a readiness to change course as circumstances demand. This is exactly the kind of motivated rationality you want in a scientist, and real anhedonics are very bad at it.

This body of evidence tells us that rationality depends on emotion; but what reason is, and why it needs emotion to function effectively, remains a work in progress. And some of that work is philosophical.

Science begins in wonder

In his dialogue the Theaetetus, ancient Greek philosopher Plato has his teacher Socrates espouse the intellectual virtues of wonder.

I see, my dear Theaetetus, that Theodorus had a true insight into your nature when he said that you were a philosopher; for wonder is the feeling of a philosopher, and philosophy begins in wonder.

Since we wonder about things that are extraordinary, or new to our experience, wonder is crucial for both motivating inquiry and engendering the right degree of respect for nature. These two roles of wonder are still reflected in ordinary language.

Compare the two phrases “I wonder whether…” and “It’s a wonder that…”. Wonder in the first sense implies ignorance and is extinguished by discovery. But wonder at the wonders of nature is not diminished by scientific understanding. Both senses of wonder are indispensable to the practice of science.

The scientific revolution of the 17th and 18th centuries did little to diminish the centrality of wonder as the principal motivator of scientific inquiry.

The French mathematician and philosopher René Descartes attached to wonder the cognitive role of fixing attention. Wonder is what keeps us from becoming goldfish, drawn to every passing – ooh! – shiny thing. Even for the arch “rationalist” Descartes, the suspicion was that reason needs a leg up from passion.

The 18th-century Scottish philosopher David Hume went even further, proclaiming that reason “is and ought to be the slave of the passions”.

Part of what Hume was on about is that it’s not clear how reason – if reason is just a tool for making valid inferences – could ever tell us what we should care about.

As Hume put it, reason gives us only the means, not the ends of inquiry or action.

It’s hardly controversial nowadays to say that a sense of wonder underlies science. The Cosmos television series – created by the scientist Carl Sagan – plays on this idea frequently, as do many television documentaries about science (notably those of David Attenborough, including Life on Earth and The Blue Planet).

But what about more negative emotions? Do they have a place in science?

Wonder can ignite inquiry, but so too can emotions associated with doubt and the disequilibrium of not knowing something important. Breaking through epistemological, or knowledge, barriers takes persistence and resilience. We might even call these emotional characteristics intellectual virtues, and those who possess them, virtuous.

Science produces wonder

The second point to make is that science is itself wonderful.

Each of us can trace our ancestry back billions of years through an unbroken line, merging as we move backwards with every other living thing towards a possible common origin. Atoms in our bodies, before being harvested from the Earth, were forged in the core of stars now extinct and strewn across space in supernova events.

Surely these are two of the greatest – and most wonderful – stories ever told.

The beginning of our story is written in the stars. Credit: bunsky/flickr

Science is subjective

Our last point – that the process of science is subjective – might sound odd, particularly if the terms “subjective” and “objective” are regarded as opposites. But they aren’t. “Subjective” pertains to the subject, and there is nothing necessarily un-objective about subjectivity.

Let’s look briefly at three instances where subjectivity plays a role in good science.

First, Thomas Kuhn, author of The Structure of Scientific Revolutions (one of the most cited works of the 20th century), notes that scientists preferentially apply values such as precision, plausibility, reproducibility and simplicity according to their individual judgement.

One scientist may value simplicity above accuracy (think scientific models); another, reproducibility above precision. Two scientists might, therefore, arrive at different conclusions given the same data, each in a rationally justifiable manner.

Second, the hypothetico-deductive method, in which scientists generate a hypothesis, make predictions and then test those predictions experimentally – thus allowing the hypothesis to be falsified – is one of the logical stalwarts of science.

But, as philosopher of science Karl Popper notes, we do not ask that the process of generating a hypothesis be itself rationally verifiable. The generation of a hypothesis might be an affective or imaginative process.

Third, inference to the best explanation is not just a passionless move in some deduction game. Einstein was not led to his theories of special and general relativity through attention to data alone, as if following some experimental breadcrumbs. His theories were the offspring of his imagination as much as anything else.

A human endeavour

Science is an endeavour that draws on all of our uniquely human abilities. It needs feelings and imagination as much as observation, analysis and logic. It needs appeals to common values to guide it.

So let’s get emotional about science. Not just to celebrate it, but because that’s how to do it properly.

This article was originally published on The Conversation and is republished here with permission. Read the original article.

Original Link

‘Common sense’ is not always based on scientific evidence

The quest for scientific evidence can trace its roots back to the classic masters of rhetoric. Credit: AboutLife/Shutterstock

The term “evidence” has a fascinating linguistic and social history – and it’s a good reminder that even today the truth of scientific evidence depends on it being presented in a convincing way.

As recent climate change scepticism shows, the fortunes of scientific evidence can be swayed by something as fleeting as a tweet.

But what does it even mean to speak of “scientific evidence”?

The art of persuasion

History reveals that scientific forms of evidence have rarely, if ever, been detached from rhetoric. In fact, the very idea of evidence has its origins within the context of classical rhetoric, the art of persuasion.

Our modern term originates from the ancient Greek ἐνάργεια (enargeia), a rhetorical device whereby words were used to enhance the truth of a speech through constructing a vivid and evocative image of the things related.

Far from independent and objective, enargeia depended entirely on the abilities of the orator.

In the hands of an exceptional orator – such as the ancient Greek poet Homer – it could be deployed so effectively that listeners came to believe themselves eyewitnesses to what was being described.

Before the court

Aware of its utility to the law, the Roman statesman Marcus Tullius Cicero brought enargeia into forensic rhetoric during the 1st century BCE, translating it into Latin as evidentia.

For Roman orators such as Cicero and, in the 1st century AD, Marcus Fabius Quintilian, evidentia was particularly well suited to the courtroom.

Here it could be used to paint the scene of a grisly murder: The blood, the groans, the last breath of the dying victim. Recounting the scene of a murder in vivid language brought it immediately before the mind’s eye, affording it the quality of evidentia (“evidentness”) in the process.

Such detail was of paramount importance. The more detail the orator could furnish, the more likely it was that his account would convince the jury of its truth.

From its inception, then, enargeia/evidentia was a device that was used by one person to convince another about a particular reality that might not otherwise be evident. There was an art to it.

Scientific evidence

We can be forgiven for forgetting that the idea of scientific evidence originates in the art of rhetoric, for early modern scientists went to considerable lengths to disassociate the idea from its classical past.

Through their efforts, the meaning of evidence was shifted from a rhetorical device to denote something sufficiently self-evident that inferences could be drawn from it.

Adopting the English translation of evidentia from the common law in the 1660s, Robert Boyle (1627-1691), Robert Hooke (1635-1703) and other practitioners of the new science situated “evidence” as the end result of unbiased observation and experimentation.

Unlike classical evidentia, scientific “evidence” was objective because it spoke for itself. As the motto of the newly-minted Royal Society of London – nullius in verba – stressed, its members were to “take no one’s word for it”.

Just like forensic evidentia, the truth of scientific evidence was based on its immediacy.

Hooke’s microscope, to give an example, permitted the viewer to witness first-hand the compound eye of the dronefly in such marvellous detail as to leave him or her without any doubt of its reality – a “see-for-yourself” mindset that was crucial to the success of science.

An illustration by Christopher Wren of the compound eye of a drone fly, contained in Robert Hooke’s book Micrographia: or Some Phyſiological Deſcriptions of Minute Bodies Made by Magnifying Glasses. With Observations and Inquiries Thereupon. British Library

Yet in practice, because most people were unable to peer through the eyepiece of a microscope, the evidence Hooke collected remained largely reliant on testimony.

Whether one accepted Hooke’s evidence for a previously unknown, microscopic world depended more on the painstakingly detailed illustrations and descriptions he gave in his 1665 Micrographia than the observations themselves.

Contrary to the Royal Society’s motto, it was not the things themselves but the way in which they were presented – and their presentation by a morally upstanding expert – that ultimately did most of the convincing.

The same holds true today. The invisible structures, processes and interactions that scientists train for years to observe remain unobservable to most people.

The temperature changes, sea level rises and acidification of the ocean that comprise some of the vast and complex evidence for climate change require, in many cases, expensive equipment, years of monitoring and specialists trained to interpret the data before climate change becomes evident.

Even when evident to scientists, this does not make climate change evidence evident to the average person.

Climate change sceptics

US president Donald Trump’s scepticism about climate change is a potent example of just how intertwined scientific evidence and rhetoric remain.

So far the Twitter Trump Archive has recorded 99 mentions of “global warming” and 32 mentions of “climate change” (both appear in some tweets) by @realDonaldTrump.

Situating his tweets as evidence against climate change, Trump poses rhetorical questions to his 50 million followers.

In marked contrast to the complex evidence for climate change, Trump positions his tweets as common sense evidence against it. In this, immediacy is on his side. Freezing weather is readily apparent to everyone, not just to scientists.

Trump’s followers are made direct witnesses to the truth of climate change by appeal to that which is most evident to them and thus, by implication, that which is the best evidence.

Even if a record cold and snow spell is not, in reality, evidence against climate change, its capacity to convince is greater because, unlike genuine evidence for climate change, it is both simple and immediate.

Evidence for climate change, on the other hand, requires trust in the scientific community, a trust that is meant to offset its lack of immediacy and which asks us to suspend our senses.

Trump’s tweets aim to delegitimise this trust, empowering his followers by telling them to trust the evidence of their own senses, their own expertise.

As scientific evidence has become increasingly complex, so too has the idea of “clear scientific evidence” become an oxymoron. If anything, Trump’s assault on climate change should serve as a reminder that making scientific evidence evident enough to convince the public is an art that needs to be embraced.

Scientific evidence can’t always be expected to speak for itself.

James A. T. Lancaster, UQ Research Fellow, The University of Queensland

This article was originally published on The Conversation and is republished here with permission. Read the original article.

Original Link

This week in science history: Unacknowledged, the first forensic fingerprinter dies

Henry Faulds, arguably the inventor of forensic fingerprinting. Credit: Wikimedia Commons

From Sherlock Holmes to Kay Scarpetta, fictional sleuths who use forensic science to catch criminals have thrilled audiences for more than a century. There have even been reports of a so-called “CSI effect”, in which a popular series of television programs has supposedly influenced people’s real-life perceptions of scientific crime detection.

One of the most enduring tenets of crime-solving is the belief in the infallibility of fingerprints. But research published in 2017 by the American Association for the Advancement of Science has called into question the validity of this most venerable tool.

Even though law enforcement agencies around the world have used fingerprints as evidence to catch and convict criminals for more than 100 years, there has been some controversy over who discovered their use as a form of identification.

Among the most persuasive claimants for this honour is Henry Faulds, a doctor and missionary, born on June 1, 1843, in North Ayrshire, Scotland. After studying medicine, in 1873 he travelled to Japan, where he founded a hospital in Tokyo, taught at a local university and founded the Tokyo Institute for the Blind.

In the late 1870s he became interested in archaeology and made note of fingerprints impressed into pieces of ancient Japanese pottery. He expanded his studies into living examples of fingerprints, experimenting on his own fingers. He even tried to obliterate or change his whorls and loops using acid, and concluded that a person’s fingerprints were permanent and unique.

The story became complicated after Faulds shared his findings in a letter to the renowned scientist Charles Darwin, who shared it with his cousin, Francis Galton.

As recounted in a 2014 Gizmodo article, Faulds published a paper on fingerprints in 1880 in the journal Nature, suggesting that they could be used to catch criminals, along with ideas on how this could be done. Shortly afterwards, Sir William Herschel, a British magistrate working in India, published a letter in the same journal, in which he explained how he had been using fingerprints as a method of signature for years and was indeed the true inventor of the practice.

Faulds returned to Britain in 1886 and offered his fingerprinting system to Scotland Yard, which turned him down.

Then in 1892 Galton published a book, titled “Finger Prints”, asserting the uniqueness of fingerprints and suggesting a classification system for them. He also sided with Herschel, and thus the pair became known as the main innovators in fingerprint collection.

This triggered a series of claims and counter-claims between Faulds and Herschel that lasted until 1917, when Herschel conceded that Faulds had been the first to suggest a forensic use for fingerprints.

Faulds continued to work in London and later as a police surgeon in Staffordshire. He died March 24, 1930, reportedly still bitter at the lack of recognition he had received for his work.

Original Link

Languages get simpler when more people speak them

In trying to understand the cultural evolution of language, scientists have observed that small, isolated linguistic communities often develop languages that have complex structures, elaborate and opaque morphology, rich patterns of agreement, and many irregularities.

It has also been suggested that such features of languages require long interactions in small, close-knit societies to develop.

By contrast, languages with large communities of speakers, such as Mandarin or English, appear to be structurally simpler.

But an apparently opposite pattern appears in relation to non-structural properties, such as word usage: languages with many speakers tend to have larger vocabularies. English, for example, has grown rapidly and is estimated to have many hundreds of thousands of words, including those with highly specialised and technical meanings.

Despite their frequent structural complexity, languages spoken by small numbers of speakers are typically assumed to have smaller vocabularies.

An analysis of Polynesian languages indicates, moreover, that larger linguistic communities both create more new words and lose fewer existing words over time.

These contrasting patterns pose a challenge for theories based on the cultural evolution of language.

Why does the size of a population of speakers have opposite effects on vocabulary and grammar?

Through computer simulations, a team of cognitive scientists at Cornell University in the US has shown that ease of learning may explain the paradox. The research suggests that language, as well as other aspects of culture, may become simpler as the world becomes more interconnected.

The study was published in the journal Proceedings of the Royal Society B.

“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” says co-author Morten Christiansen.

The researchers hypothesised that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” Christiansen explains.

But learning a new grammatical innovation requires a lengthier process, and that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of that community.

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population,” he says.

Conversely, in a large community, like a big city, one person will talk only to a small proportion of the population. This means that only a few people might be exposed to a new complex grammar rule, making it harder for it to survive.
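
The sketch below is a toy version of this idea, not the Cornell team’s model: an innovation that needs only one exposure (word-like) spreads in a community of any size, while one that needs many repeat exposures (grammar-like) tends to take hold only when adopters make up a sizeable share of each person’s conversational partners. All parameter values are arbitrary.

```python
# Toy model of the mechanism described above: how community size interacts
# with how many exposures an innovation needs before it is adopted.
import random

def spread(community_size, exposures_needed, seeds=5, contacts_per_round=3,
           rounds=50, seed=42):
    rng = random.Random(seed)
    exposures = [0] * community_size
    adopted = [i < seeds for i in range(community_size)]  # a few initial innovators
    for _ in range(rounds):
        for person in range(community_size):
            if adopted[person]:
                continue
            # Talk to a few randomly chosen members of the community.
            partners = rng.sample(range(community_size), contacts_per_round)
            exposures[person] += sum(adopted[p] for p in partners)
            if exposures[person] >= exposures_needed:
                adopted[person] = True
    return sum(adopted) / community_size

for size in (50, 5000):
    easy = spread(size, exposures_needed=1)   # word-like innovation
    hard = spread(size, exposures_needed=8)   # grammar-like innovation
    print(f"community of {size}: easy item adopted by {easy:.0%}, hard item by {hard:.0%}")
```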

This mechanism can explain why all sorts of complex cultural conventions emerge in small communities: for example, the slang that developed in the intimate jazz world of 1940s New York City.

The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected. Christiansen says: “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”

Not all hope is lost for those who want to maintain complex cultural traditions, however. “People can self-organise into smaller communities to counteract that drive toward simplification,” he says.

Original Link

Babies learn in bursts (just like Mum said)

Developmental psychologists and parents alike have long thought that the cognitive abilities of babies develop in bursts rather than at a predictably linear rate, but the idea has been based largely on conjecture and anecdote. Now, a new study presents hard data supporting the idea.

A team of researchers led by psychologist Koraly Perez-Edgar from Pennsylvania State University has published a paper in the journal Child Development demonstrating that babies learn at a non-linear rate.

Their area of examination was the “A-not-B error” – or what is commonly known as object permanence: the point where infants realise that something still exists even if they can’t immediately perceive it. This was first described in 1954 by the pioneering Swiss child developmental psychologist Jean Piaget, who also invented the basic test used in this experiment.

Despite its seeming simplicity, avoiding the A-not-B error is correlated with increased brain activity across multiple sites, activity that can be tracked because improved performance on memory-based tasks correlates with measurable increases in electroencephalograph (EEG) readings.

So, the experiment was simple: take a bunch of six-month-olds (28, in this case) and bring them into the lab monthly to perform the Piaget test while having their brain activity mapped using an EEG.

The task the babies performed involved a researcher hiding a toy in one of two wells in a cardboard box set in front of the infant. If the toy was then successfully retrieved by the infant, showing that they remembered that the toy existed and where it was despite not being able to directly see it, the test was considered successful.

“How babies perform in this task tells us a lot about their development because it’s a coordination of multiple skills,” explains co-author Leigha MacNeill.

“They have to remember where the ball was moved, which is working memory. They have to know an object exists even though it’s out of sight, and they need to track objects moving in space from one place to another. All of this also required them to pay attention. So there’s a lot going on.”

The EEG took baseline readings while the babies watched spinning balls in a bingo wheel before measuring brain activity as the babies performed the A-not-B test.

The results mapped neatly onto a sigmoid curve: flat at six months, with barely any of the children passing the test, then rising steeply before flattening again at 12 months, with most of the children reaching the milestone. The experimenters noted that there was also a lot of variation in development, both between the different babies and in individual children over different testing sessions.
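
As a rough illustration of what “mapping onto a sigmoid” means in practice, the snippet below fits a logistic curve to invented monthly pass rates using SciPy. The numbers are made up; only the shape of the curve echoes the study’s description.

```python
# Illustrative only: fitting a logistic (sigmoid) curve to made-up monthly
# pass rates, the same shape the Penn State team report for the A-not-B task.
import numpy as np
from scipy.optimize import curve_fit

ages = np.array([6, 7, 8, 9, 10, 11, 12])                      # age in months
pass_rate = np.array([0.05, 0.1, 0.2, 0.5, 0.75, 0.9, 0.95])   # invented values

def sigmoid(x, midpoint, steepness):
    return 1 / (1 + np.exp(-steepness * (x - midpoint)))

(midpoint, steepness), _ = curve_fit(sigmoid, ages, pass_rate, p0=[9, 1])
print(f"estimated midpoint: {midpoint:.1f} months, steepness: {steepness:.2f}")
```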

In other words, science backs up your intuition. These results indicate that babies learn in punctuated bursts, not at a steady linear rate over time, and the timing of those bursts is as idiosyncratic as the children themselves.

Original Link

Three-minute thesis: Prosecuting psychological harms

Researcher

Paul McGorrery, Deakin University

PhD title

Criminalising psychological harms

Summary

“Although courts in Australia have increasingly recognised psychological harm as a compensable type of harm, no one in Australia has ever been prosecuted in criminal law for causing psychological harm to another person. This research aims to understand whether, and to what extent, the criminal law does and should criminalise the infliction of psychological harm by one individual to another. In order to answer that question, this research reviews criminal laws around Australia, the role and purposes of the criminal law in society, and definitions of psychological harm. Suggestions will then be made about the proper boundaries that might be placed around such an offence.”


The finals of the 2017 Asia-Pacific Three-Minute Thesis (3MT) competition, which challenges PhD students to communicate their research in a snappy three-minute presentation, were held on 29 September at the University of Queensland, St Lucia Campus. Competitors came from 55 universities across Australia, New Zealand and North and South-East Asia.

The presentations were judged by distinguished figures in Australian science including Cosmos editor-in-chief Elizabeth Finkel.

Original Link

An anthropologist explains why we love holiday rituals and traditions

The mere thought of holiday traditions brings smiles to most people’s faces and elicits feelings of sweet anticipation and nostalgia. We can almost smell those candles, taste those special meals, hear those familiar songs in our minds.

Ritual marks some of the most important moments in our lives, from personal milestones like birthdays and weddings to seasonal celebrations like Thanksgiving and religious holidays like Christmas or Hanukkah. And the more important the moment, the fancier the ritual.

Holiday rituals are bursting with sensory pageantry. These (often quite literal) bells and whistles signal to all of our senses that this is no common occasion – it is one full of significance and meaning. Such sensory exuberance helps create lasting recollections of those occasions and marks them in our memory as special events worth cherishing.

Indeed, there are plenty of reasons to value family rituals. Research shows that they can provide various psychological benefits, helping us enjoy ourselves, connect with loved ones and take a respite from the daily grind.

An anxiety buffer

Everyday life is stressful and full of uncertainty. Having a special time of the year when we know exactly what to do, the way we’ve always done it, provides a comfortable sense of structure, control and stability.

A holiday toast can have special weight. Credit: Diane Cordell

From reciting blessings to raising a glass to make a toast, holiday traditions are replete with rituals. Laboratory experiments and field studies show that the structured and repetitive actions involved in such rituals can act as a buffer against anxiety by making our world a more predictable place.

Many of those rituals may of course also be performed at other times throughout the year. But during the holiday season, they become more meaningful. They’re held in a special place (the family home) and with a special group of people (our closest relatives and friends). For this reason, more people travel during the year-end holidays than any other time of the year. Gathering together from far-flung locations helps people leave their worries behind, and at the same time lets them reconnect with time-honoured family traditions.

Happy meals

No holiday tradition would be complete without a festive meal. Since the first humans gathered around the fire to roast their hunt, cooking has been one of the defining characteristics of our species.

The long hours spent in the kitchen and the dining room during the preparation and consumption of holiday meals serve some of the same social functions as the hearths of our early ancestors. Sharing a ceremonial meal symbolises community, brings the entire family together around the table and smooths the way for conversation and connection.

All cultures have rituals that revolve around food and meal preparation. Jewish tradition dictates that all food must be chosen and prepared according to specific rules (Kosher). In parts of the Middle East and India, only the right hand must be used for eating. And in many European countries, it is important to lock eyes while making a toast in order to avoid seven years of bad sex.

Hosts pull out all the stops for over-the-top holiday feasts. Credit: +Simple on Unsplash

Of course, special occasions require special meals. So most cultures reserve their best and most elaborate dishes for the most important holidays. For example, in Mauritius, Tamil Hindus serve the colourful “seven curries” at the conclusion of the Thaipusam kavadi festival, and in Greece families get together to spit-roast an entire lamb on Easter Day. And these recipes often include some secret ingredients – not just culinary, but also psychological.

Research shows that performing a ritual before a meal improves the eating experience and makes the food (even just plain carrots!) seem tastier. Other studies found that when children participate in food preparation they enjoy the food more, and that the longer we spend preparing a meal, the more we come to appreciate it. In this way, the labour and fanfare associated with holiday meals virtually guarantees an enhanced gastronomical experience.

Sharing is caring

It is common to exchange presents during the holiday period. From a rational perspective, this might seem pointless, at best recycling resources or, at worst, wasting them. But don’t underestimate the importance of these exchanges. Anthropologists have noted that among many societies ritualised gift-giving plays a crucial role in maintaining social ties by creating networks of reciprocal relationships.

Gifts under the tree can be a key component of Christmas celebrations. Credit: Andrew Neel on Unsplash

Today, many families give each other lists of desired presents for the holidays. The brilliance of this system lies precisely in the fact that most people end up getting what they would buy anyway – the money gets recycled but everyone still enjoys the satisfaction of giving and receiving gifts.

And as this is a special time of the year, we can even allow ourselves some guilt-free indulgence. Last year, my wife and I saw a fancy coffee machine that we really liked, but we decided it was too expensive. But in December, we went back and bought it as a mutual present, agreeing that it was OK to splurge a bit for the holidays.

The stuff family is made of

The most important function of holiday rituals is their role in maintaining and strengthening family ties. In fact, for relatives who live far apart, holiday rituals may be the glue that holds the family together.

Rituals and traditions can help make our memories of holidays good ones. Credit: Darren Coleshill on Unsplash

Ritual is a powerful marker of identity and group membership. Some of my own field studies have found that taking part in collective rituals creates feelings of belonging and increased generosity toward other members of the group. It’s no surprise, then, that spending the holidays with the in-laws for the first time is often regarded as a rite of passage – a sign of true family membership.

Holiday traditions are particularly important for children. Research shows that children who participate in group rituals become more strongly affiliated with their peers. In addition, having more positive memories of family rituals seems to be associated with more positive interactions with one’s own children.

Holiday rituals are the perfect recipe for family harmony. Sure, you might need to take three flights to get there, and they will almost certainly be delayed. And your uncle is bound to get drunk and start a political argument with his son-in-law again. But according to Nobel Laureate Daniel Kahneman, this is unlikely to spoil the overall experience.

Kahneman’s research shows that when we evaluate past experiences, we tend to remember the most intense moments and the last moments, paying little attention to everything else. This is known as the “peak-end rule.”

In other words, our memory of the family holiday will mostly consist of all the rituals (both joyful and silly), the good food, the presents and then hugging everyone goodbye at the end of the night (after your uncle made up with his son-in-law). And by the time you get back home, you’ll have something to look forward to for next year.

This article was originally published on The Conversation and is republished here with permission. Read the original article.

Original Link

Brain lesions contribute to criminal behaviour, study finds

Ed Gein was repeatedly beaten around the head by his alcoholic father. Gary Heidnik fell from a tree at age six and hit his head hard enough to deform his skull. Jerry Brudos, electrocuted as a 21-year-old, complained of recurring migraine headaches and blackouts. Edmund Kemper suffered a head injury after crashing his motorcycle.

These four individuals, among the half-dozen infamous serial killers whose crimes inspired the fictional character Buffalo Bill in the movie The Silence of the Lambs, demonstrate how often suspected brain injuries appear as potential contributors to the making of the very worst criminals.

Now research led by neurologist Ryan Darby, of Vanderbilt University Medical Center in Nashville, Tennessee, US, has shed light on how brain ‘lesions’ – identifiable injuries caused by trauma, stroke or tumours – contribute to criminal behaviour, not by damaging one specific area but by disrupting any of a number of regions involved in the complex neural networks that underpin higher human brain function.

Based on mapping the brains of 17 individuals convicted of crimes ranging from fraud and theft to assault, rape and murder, the research, published in Proceedings of the National Academy of Sciences, suggests criminality is linked to impairment of neural networks involved in morality, value-based decision-making and ‘theory of mind’, the cluster of abilities for understanding the mental processes and perspectives of others.

No impairment was shown to neural regions involved in cognitive control or empathy, the researchers report. That suggests some distinction between what makes a criminal and what makes a psychopath or sociopath, for whom lack of empathy is considered a key feature. This might explain why some psychopaths become murderous monsters – as in the case of the aforementioned serial killers – while others become successful corporate managers.

It is an “extremely interesting” result, says Winnifred Louis, a psychologist at the University of Queensland in Australia, whose research is focused on the influence of identity and norms on social decision making.

While theory of mind – the capacity to understand other people’s points of view – is a cognitive antecedent of empathy, Louis notes that the reported absence of any link to lowered empathy highlights these criminals’ lack of focus on their victims.

“The authors suggest that what is happening in this data is that the lesions are biasing people towards more utilitarian decisions,” she says.

Their brain impairments prevent the processing of personal moral harm to others while leaving intact utilitarian reward calculations for the self. “This is a fascinating suggestion that invites future research,” she notes.

Of the 17 individuals whose brains were mapped, 15 had no criminal record prior to their brain injuries, while two showed no criminal behaviour after their brain injuries were treated. The results were supported by data from mapping the brains of 23 other criminals, where there was far less definite evidence that neurological damage contributed directly to criminal behaviour. These subjects showed differently located injuries, identifiable through different sequences of MRI or CT scans, but all within a “unique connected brain network distinct from injuries not associated with criminal behaviour”.

While the findings shed light on the specific neurobiology that contributes to criminal behaviour, what makes any one person a criminal remains confoundingly complex. Factors including genetics, upbringing and social environment are ingredients in a murky soup.

Edmund Kemper, for example, had already murdered his grandparents years before his motorcycle accident. The brain of Ted Bundy, another source of inspiration for Buffalo Bill, showed no sign of trauma when it was removed and examined after his execution. Previous research shows only a minority of patients suffering brain injuries associated with the network identified by this research commit crimes, while only a fifth of 239 mass-murderers in a 2014 study had a definite or suspected head injury.

“Our work can suggest these lesions increase the likelihood of criminal behaviour,” Darby agrees, “but we do not know the extent to which that happens, as many patients with lesions in these areas do not commit crimes.”

The results certainly help to explain why attempts to identify a specific “criminal behaviour” part of the brain have proven unsuccessful, says clinical neuropsychologist Fiona Kumfor, of Australia’s University of Sydney, whose own research focuses on frontotemporal dementia (which studies have linked to some form of criminal behaviour in a quarter to half of all cases). “There are likely other factors we still don’t understand that cause people to behave in this way,” she says.

One weakness of the study, Kumfor says, is that it is based on patients, all previously published in the scientific literature, who had undergone different cognitive assessments and neuroimaging. “Thus, the evidence for these abilities being involved comes from the brain network neuroimaging analysis only,” she notes. “It will be important for future studies to examine these potential functional brain networks and their associated abilities prospectively.”

Original Link