What is the process of dividing a market into distinct subsets that behave in the same way or have similar needs?

Market Segmentation, Targeting, and Positioning

Zhixian Yi, in Marketing Services and Resources in Information Organizations, 2018

4.3 Segmentation Methods

Segmentation can be carried out based on geographic, demographic, geodemographic, behavioral, lifestyle, and psychological characteristics (De Saez, 2002, pp. 118–123). Characteristics such as gender, age, race, occupation, religion, and education are used in demographic segmentation (De Saez, 2002). Geographic segmentation is a simpler approach based on region and locality. Behavioral segmentation takes usage statistics into account, for example, the number, type, and branch location of items borrowed, to differentiate the market (Millsap, 2011). There are many segmentation methods, and more than one may be required to achieve distinctive segments. Common methods include:

Geographic segmentation refers to dividing a market into different geographical units;

Demographic segmentation means dividing the market into segments based on variables such as age, life-cycle stage, gender, income, occupation, education, religion, ethnicity, and generation;

Psychographic segmentation is dividing a market into different segments based on social class, lifestyle, or personality characteristics; and

Behavioral segmentation refers to dividing a market into segments based on consumer knowledge, attitude, uses, or responses to a product (Kotler & Armstrong, 2014, pp. 193–198; Kotler and Keller, 2009, pp. 213–226).
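The four methods above amount to choosing different keys for partitioning the same customer base. A minimal Python sketch, using hypothetical library-patron records with invented field names and values, shows how each method is simply a different grouping function applied to the same data:

```python
from collections import defaultdict

# Hypothetical patron records; the fields and values are invented for illustration.
customers = [
    {"id": 1, "age": 23, "region": "north", "monthly_borrows": 12},
    {"id": 2, "age": 47, "region": "south", "monthly_borrows": 2},
    {"id": 3, "age": 31, "region": "north", "monthly_borrows": 1},
    {"id": 4, "age": 52, "region": "south", "monthly_borrows": 9},
]

def segment(records, key_fn):
    """Group records into segments according to an arbitrary segmentation key."""
    segments = defaultdict(list)
    for r in records:
        segments[key_fn(r)].append(r["id"])
    return dict(segments)

# Demographic segmentation: age bands.
by_age = segment(customers, lambda r: "under_40" if r["age"] < 40 else "40_plus")
# Geographic segmentation: region.
by_region = segment(customers, lambda r: r["region"])
# Behavioral segmentation: usage intensity (borrowing frequency).
by_usage = segment(customers, lambda r: "heavy" if r["monthly_borrows"] >= 5 else "light")
```

Distinctive segments often require combining keys, e.g. `lambda r: (r["region"], r["age"] < 40)`, which mirrors the point that more than one method may be needed.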


URL: https://www.sciencedirect.com/science/article/pii/B9780081007983000040

The Psychology of Learning and Motivation: Advances in Research and Theory

Jeffrey M. Zacks, Jesse Q. Sargent, in Psychology of Learning and Motivation, 2010

2 Event Segmentation Theory

Event Segmentation Theory (EST) describes how and why our nervous systems segment ongoing experience into discrete episodes (Zacks, Speer, Swallow, Braver, & Reynolds, 2007; see also Kurby & Zacks, 2008; Swallow & Zacks, 2008). For example, consider what might happen during a typical visit to a coffee shop: you wait in line, you give your order, you pay, you put cream in your coffee, you leave. Different people will generate somewhat different lists of activities, but all are able to describe experience across time as organized into distinct units and overall there will be considerable agreement across individuals regarding what those units are. EST proposes this happens because, as part of normal perceptual processing, humans automatically segment episodes into units. In fact, EST suggests that the ongoing segmentation of experience is at the center of cognitive control, working memory (WM) updating, and storage and retrieval from episodic memory.

The core components of EST, corresponding hypothesized neurophysiological structures, and the basic flow of information are illustrated in Figure 1. Reference to Figure 1 may be helpful as we describe the components of EST and review some of the relevant empirical evidence below. For a more detailed presentation of the neurocognitive account, see Zacks et al. (2007). For a more detailed computational presentation and computer simulation results, see Reynolds, Zacks, and Braver (2007).


Figure 1. Schematic depiction of the model, with hypotheses about the neurophysiological structures corresponding to the different components of the model. Thin gray arrows indicate the flow of information between processing areas, which are proposed to be due to long-range excitatory projections. Dashed lines indicate projections that lead to the resetting of event models. PFC, prefrontal cortex; IT, inferotemporal cortex; MT+, human MT complex; pSTS, posterior superior temporal sulcus; ACC, anterior cingulate cortex; SN, substantia nigra; VTA, ventral tegmental area; LC, locus coeruleus; A1, primary auditory cortex; S1, primary somatosensory cortex; V1, primary visual cortex. (Adapted with permission from Zacks et al., 2007.)

EST starts from the supposition that some of the most important products of perception and comprehension are predictions about what will happen in the near future. Prediction is front and center in many contemporary accounts of perceptual processing (Enns & Lleras, 2008), learning (Schultz & Dickinson, 2000), and language (Elman, 2009). Good predictions are adaptive because they allow one to plan actions more successfully (e.g., avoiding hazards or intercepting desired objects). Also, good predictions can facilitate efficient perceptual processing. For example, if a pitcher winds up and completes a throwing motion, the perceptual system anticipates that the ball will fly out of the pitcher's hand toward home plate. In the absence of such anticipation, perceiving the ball whizzing through the air would be much more difficult—in fact, one might miss it altogether!

According to EST, prediction is abetted by WM representations called event models. Event models may be thought of as representations of what-is-happening-now. EST suggests that all perceptual input is processed in the context of a currently activated conception of what-is-happening-now. Our conceptualization of event models borrows heavily from work on situation models in discourse comprehension (e.g., Zwaan & Radvansky, 1998). Event models represent those aspects of a situation that are consistent within an event, while ignoring those aspects that vary haphazardly from moment to moment. Such representations are helpful not only for prediction but also because they allow the disambiguation of ambiguous sensory information and the filling-in of missing information. For example, at a baseball game an event model would represent the location of the baseball while it is hidden in the pitcher's glove. We have proposed that event models are maintained in lateral prefrontal cortex (PFC). Event models combine current perceptual information with information acquired very recently in the present context, and with patterns of information learned over a lifetime of experience. For example, if you have never seen a baseball game, the first time the pitcher sets up to throw, you may have very little idea where the ball will go. As the pitch count goes up, your expectation that each upcoming pitch will go to home plate increases. However, if you are an experienced baseball fan, each pitch in an at-bat is perceived in the context of an event model informed by relatively stable long-term semantic memory about what happens at ball games. In EST, these long-term, weight-based representations are referred to as event schemata. In contrast, event models are activation-based WM representations.
So, the content of an event model may overlap at any given time with a particular event schema, but when an event model ceases to have predictive value, it can be rapidly and completely updated to reflect the changing situation. We propose that event schemata as well as event models are implemented by the lateral PFC. A number of studies suggest that representations of events are maintained in the anterior, lateral PFC (e.g., Grafman, 1995; Schwartz et al., 1995; Wood & Grafman, 2003). We review some of this evidence in more detail in Section 6. The exact nature of the interaction between event models and event schemata is currently a topic of active research.

So, while event models may be informed by current perceptual information, they can also influence how the perceptual system processes that incoming information (see Figure 1). For example, as described above, information provided by event models allows the visual system to anticipate the flight of a baseball before it is released by the pitcher. However, event models may facilitate processing of all types of sensory information across numerous, distributed brain regions. Perceptual analysis is accomplished by hierarchically organized neural systems specialized for vision, hearing, touch, and the other sensory modalities. For example, in the visual system (Felleman & Van Essen, 1991), information is initially represented in terms of simple local visual features in the early visual areas (V1 and V2, in the posterior occipital cortex). Successive processing stages form representations that are increasingly extended in space and time. Two broad streams process information important for object identification and for motor control relatively separately (Goodale, 1993). Features relevant to object identity and category are differentially represented in inferior temporal cortex (IT), whereas features related to motion and grasping are differentially represented in dorsal regions including the human MT complex (MT+) and the posterior superior temporal sulcus (pSTS). Although there is communication between the streams and massive feedback throughout the system, these systems can be described as hierarchically organized, following a rough posterior-to-anterior spatial organization. Many of the classical studies characterizing these perceptual systems were conducted in nonhuman primates and relied on radically simplified stimuli. However, recent neuroimaging studies have shown similar responses in these areas across individuals during movie viewing (Bartels & Zeki, 2004; Hasson, Nir, Levy, Fuhrmann, & Malach, 2004; Hasson, Yang, Vallines, Heeger, & Rubin, 2008). 
EST proposes that event models bias processing in these streams. As we shall see shortly, EST also proposes that the updating of event models regulates processing over time in these streams.

A critical feature of event models is that they need to be protected from moment-to-moment changes in sensory and perceptual information. Updating one's event model to delete the baseball when it disappeared from sight would clearly be counterproductive. However, event models have to be updated eventually in order to be useful—the baseball game model will not be helpful at a gas station! The question is, when and how can event models be updated adaptively? EST's answer is that event models are updated in response to transient increases in prediction error, mediated by systems in the anterior cingulate cortex (ACC) and midbrain neuromodulatory systems. The ACC maintains predictions and constantly compares them to actual inputs, producing an online error signal. Studies have shown this region to be sensitive to the commission of overt errors and to covertly measured cognitive conflict (e.g., Botvinick, Braver, Barch, Carter, & Cohen, 2001) and to the learning of sequential behaviors (Koechlin, Danek, Burnod, & Grafman, 2002; Procyk, Tanaka, & Joseph, 2000). When prediction error increases suddenly, this is detected by monitoring systems in the midbrain, which broadcast a global reset signal to the cortex. This system may include dopamine-based signaling subserved by the substantia nigra (SN) and ventral tegmental area (VTA) and norepinephrine-based signaling subserved by the locus coeruleus (LC). Neurons in the SN and VTA are sensitive to errors in reward prediction (e.g., Schultz, 1998). Dopamine cells in the SN and VTA project broadly to the frontal cortex, both directly and through the striatum, providing a mechanism for a reset signal such as is posited by EST. The LC has been implicated in regulating the sensitivity of an organism to external stimuli (e.g., Usher, Cohen, Servan-Schreiber, Rajkowski, & Aston-Jones, 1999). It also has broad connections to the cortex, these based on norepinephrine rather than dopamine. 
The reset signal transiently opens an input gate on the event models, exposing them to the early stages of sensory and perceptual processing (see Figure 1). This produces a short burst of increased activity in the perceptual processing stream and the event models settle into new states. As the event models are updated, predictions become more adaptive and errors decrease. The system returns to a stable configuration. A schematic representation of the temporal dynamics of the error-based updating process is shown in Figure 2.
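The error-gated updating loop just described can be caricatured in a few lines of Python. This is a toy illustration, not the computational model of Reynolds, Zacks, and Braver (2007): here the "event model" is a single number predicting the next observation, and an error spike above an arbitrary threshold plays the role of the reset signal:

```python
def segment_stream(observations, threshold=1.0):
    """Toy EST-style updating: the model predicts the next observation
    will match its current state; when prediction error spikes, the
    input gate opens, the model resets, and a boundary is recorded."""
    model = observations[0]
    boundaries = []
    for t, obs in enumerate(observations[1:], start=1):
        error = abs(obs - model)
        if error > threshold:     # transient increase in prediction error
            model = obs           # gate opens; model settles into a new state
            boundaries.append(t)  # perceived event boundary
        # below threshold, the model is shielded from moment-to-moment noise
    return boundaries

# Small fluctuations punctuated by two abrupt changes in the situation.
stream = [0.0, 0.1, -0.2, 0.1, 5.0, 5.1, 4.9, 9.0, 9.2]
boundaries = segment_stream(stream)  # boundaries at t=4 and t=7
```

Between boundaries the model ignores the small fluctuations, which is exactly the shielding of event models from moment-to-moment noise described above.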


Figure 2. Temporal dynamics of event segmentation. Most of the time prediction error is relatively low and event models are stable. As a model becomes less adaptive, prediction error increases. In response, information from sensory and perceptual processing is gated into the model, updating its contents. After updating, error declines and the model settles into a new state.

According to this account, event segmentation is an ongoing concomitant of everyday experience, which happens without intent and not necessarily with awareness. The processing that occurs at event boundaries can be viewed both as focal attention and as memory updating. An appropriate (stable) event model is a WM buffer whose outputs bias processing in the perceptual stream. The opening up of event models' inputs is a form of focal attention, and the settling into a new state is a form of memory updating. Event segmentation in and of itself is not the goal of the system; instead, it is a by-product of mechanisms evolved in support of a more efficient, predictive perceptual system.

Importantly for thinking about how EST applies to daily experience, the theory suggests that event segmentation occurs simultaneously at multiple time scales. Consider the coffee shop example given above. If one's event model for going to a coffee shop generates predictions consistent with all the distinct units of activity typically involved (e.g., waiting, ordering, paying), then no error signal would be generated, the model would be stable throughout the episode, and no event boundaries would occur. So, how does EST explain the segmentation of going to a coffee shop into distinct units of activity? We may consider events as hierarchical representations. The event “going to a coffee shop” is at a higher level in the hierarchy than the events “waiting in line” and “ordering”. Lower level aspects of an event representation are sensitive to prediction error signals integrated over shorter time scales. So, when it comes time to place an order, the “waiting in line” level of the hierarchical event representation generates some degree of prediction error. That hierarchical level becomes unstable until the “ordering” model is instantiated, at which point the error signal decreases. Meanwhile, at a higher level of the event representation, “going to a coffee shop” is insensitive to such short-lived error signals. Higher levels are sensitive to error signals integrated over longer time scales. When one leaves the coffee shop, it is likely that there will be a more prolonged increase in error. The resulting prolonged error signal causes instability at a higher level of the hierarchical representation, and “going to a coffee shop” is abandoned for a more adaptive model. In accordance with this explanation, we would expect models at higher hierarchical levels to make less specific predictions. Also, we would expect boundaries between events at higher hierarchical levels to align with boundaries between events at lower levels.
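The multiple-time-scale idea can be sketched with hypothetical per-step prediction-error values: a lower level of the hierarchy reacts to instantaneous error spikes, while a higher level reacts only to error integrated over a longer window. The thresholds and window size below are arbitrary illustration choices, not parameters from the theory:

```python
def hierarchical_boundaries(errors, fine_threshold=1.0,
                            coarse_threshold=1.5, window=3):
    """Fine boundaries: instantaneous error exceeds a threshold.
    Coarse boundaries: error averaged over a longer window exceeds one."""
    fine = [t for t, e in enumerate(errors) if e > fine_threshold]
    coarse = []
    for t in range(len(errors)):
        lo = max(0, t - window + 1)
        integrated = sum(errors[lo:t + 1]) / (t + 1 - lo)
        if integrated > coarse_threshold:
            coarse.append(t)
    return fine, coarse

# A brief spike at t=2 registers only at the fine level; sustained error
# from t=6 onward eventually crosses the coarse (integrated) threshold too.
errs = [0.2, 0.1, 3.0, 0.2, 0.1, 0.2, 2.0, 2.5, 2.2]
fine, coarse = hierarchical_boundaries(errs)
```

Consistent with the text, the coarse boundaries here coincide with a subset of the moments of fine-level instability, while the brief spike perturbs only the lower level.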

2.1 Prior Evidence

EST makes a number of claims about behavior and brain function, some of which are consistent with previous research and some of which have been tested directly. First, EST predicts that event segmentation is an ongoing part of normal perceptual processing. Evidence for this proposal comes from behavioral and functional magnetic resonance imaging (fMRI) studies. In a typical event segmentation paradigm, participants watch movies of actors engaged in everyday activities (e.g., doing laundry) and are instructed to press a button whenever they believe one meaningful unit of activity has ended and another has begun (Newtson, 1973). When instructions direct attention to larger (coarse grain) or smaller (fine grain) units of activity, the behavioral data are thought to reflect ongoing event segmentation at higher or lower levels of hierarchical event representation. Studies have demonstrated that segmentation of videos using this method shows both stable intersubject agreement, and stable individual differences over a period of more than a year (Newtson, 1976; Speer, Swallow, & Zacks, 2003; Zacks et al., 2007). Furthermore, observers spontaneously group fine-grained event boundaries into hierarchically organized coarse-grained events (Newtson, 1976; Zacks, Tversky, & Iyer, 2001). That is, coarse grain boundaries tend to correspond to a subset of fine grain boundaries, which supports the view that event segmentation occurs simultaneously at multiple time scales. The reliability and structure of the data from the segmentation task support the suggestion that this paradigm is capturing an ongoing feature of normal perception. Ultimately, however, these results prove only that individuals can segment ongoing experience into units. Evidence that individuals do segment experience in the course of normal day-to-day perception comes from neurophysiological studies. Using fMRI, Zacks et al. 
(2001) first monitored participants' brain activity during passive viewing of simple movies. Afterward, participants segmented the movies by indicating whenever, in their view, one meaningful unit of activity had ended and another had begun. During passive viewing, a collection of regions transiently increased in activity at those moments that viewers later identified as event boundaries. These regions included areas in lateral posterior cortex (including the inferior and superior temporal sulci and ventral temporal cortex), medial posterior cortex (including the cuneus and precuneus), and lateral frontal cortex. Similar results have been generated using several variations of this general paradigm (Speer, Zacks, & Reynolds, 2007; Speer et al., 2003; Zacks, Swallow, Vettel, & McAvoy, 2006).

Second, EST predicts that perceptual processing increases at event boundaries. The fact that brain activity transiently increases at event boundaries is consistent with this prediction—particularly suggestive are the increases in posterior regions associated with perceptual processing. It has been shown that memory for perceptual details at or around event boundaries is better than that for details associated with event middles (Newtson & Engquist, 1976; Schwan, Garsoffky, & Hesse, 2000). Also, EST suggests that if the surface structure of events is consistent with the underlying event structure, then event segmentation mechanisms should operate more efficiently, and again, memory for the episode should improve. This too has been borne out in the laboratory (e.g., Schwan & Garsoffky, 2004). For example, Boltz (1992) showed participants a feature film with no commercial breaks, with breaks that corresponded to event boundaries, or with breaks placed at nonboundaries. Recall of activity and memory for the temporal order of events in the movie were improved by the breaks at event boundaries and reduced by the breaks at nonboundaries. Further support for the suggestion that segmenting events in a manner that corresponds to their intrinsic structure improves memory for those events comes from a study of individual differences. Zacks, Speer, Vettel, and Jacoby (2006) found that group-typical segmentation of movies, which may be assumed to reflect intrinsic structure, predicted better performance on subsequent memory tests after controlling for overall cognitive level.

Another prediction of EST is that information associated with the current event model, and thus active in WM, should be more accessible than information associated with a previously active model. When using text material, event boundaries can be induced by imposing a change such as a temporal break (e.g., “…a day later…”) or a shift of spatial location (e.g., “the detective burst into the room”). Such shifts result in the perception of an event boundary for films as well (Zacks, Speer, & Reynolds, 2009). Numerous studies using text comprehension have shown results consistent with this prediction (e.g., Bower & Rinck, 2001; Zwaan & Radvansky, 1998). For example, Speer and Zacks (2005) required participants to read narratives and showed that memory for items in the narrative was lower when a temporal break intervened between the mention of the item and the test. Similar results have recently been obtained with movies (Swallow, Zacks, & Abrams, 2009).

In sum, EST proposes that predictions about the near future are guided by WM representations of the current event, which are updated in response to transient increases in prediction error. This updating includes upregulation of the perceptual processing pathways feeding into event models. The experience of an error spike and consequent updating is perceived as a boundary between meaningful events. Thus, event segmentation is an ongoing perceptual mechanism standing at the center of attention, cognitive control, and memory. It is subserved by a distributed set of brain mechanisms described above (see Figure 1). If one or more of these is selectively affected by a disorder or age-related process, it may have substantial consequences for cognition.

In the following sections, we apply EST to the analysis of six conditions in clinical neuroscience. We have selected six conditions based on the overlap between the neurocognitive mechanisms implicated in each and the mechanisms of event segmentation as proposed by EST. The six are: schizophrenia, obsessive-compulsive disorder (OCD), Parkinson's disease (PD), lesions of the PFC, aging, and AD. Our selections are necessarily heuristic and surely incomplete. However, we think the analysis shows the potential for EST to provide new insights regarding major cognitive deficits associated with these disorders.


URL: https://www.sciencedirect.com/science/article/pii/S007974211053007X

Handbook of Health Economics

Sherry Glied, in Handbook of Health Economics, 2000

6.3. Risk adjustment

Risk segmentation complicates the evaluation of the effectiveness of managed care and has potentially undesirable normative consequences (as discussed above). Furthermore, risk segmentation makes it difficult to design managed care policy. Consider a payer, such as the Medicare program, that operates its own indemnity plan and contracts with managed care plans. If the payer sets managed care payment rates based on the indemnity population, while the managed care plans enroll healthier-than-average enrollees, total costs under the program may increase. If risk segmentation is important, payers must ensure that the rates they pay to managed care plans accurately reflect the risk profile of the population these plans enroll.
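The Medicare-style example can be made concrete with invented numbers. If the payer sets one capitated rate from the population-wide average cost while the healthier enrollees select managed care, total payments exceed the actual cost of care; a rate reflecting the managed care enrollees' own risk profile closes the gap. All figures below are hypothetical:

```python
# (expected_annual_cost, enrolls_in_managed_care) — hypothetical enrollees.
population = [
    (12000, False), (9000, False), (8000, False),  # sicker; stay in indemnity
    (3000, True), (2500, True), (2000, True),      # healthier; choose managed care
]

indemnity_costs = [c for c, mc in population if not mc]
managed_costs = [c for c, mc in population if mc]

# Capitated rate naively set from the whole population's average cost.
naive_rate = sum(c for c, _ in population) / len(population)

# Payer's outlay: indemnity claims paid directly, plus the rate per MC enrollee.
total_paid = sum(indemnity_costs) + naive_rate * len(managed_costs)
total_cost = sum(indemnity_costs) + sum(managed_costs)
overpayment = total_paid - total_cost  # positive: total program costs increased

# Risk-adjusted rate based on the managed care pool's own expected costs.
adjusted_rate = sum(managed_costs) / len(managed_costs)
adjusted_overpayment = (sum(indemnity_costs)
                        + adjusted_rate * len(managed_costs) - total_cost)
```

With the naive rate the payer overpays by the gap between the population average and the managed care pool's average, multiplied by enrollment; the risk-adjusted rate eliminates this by construction.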

For all of these reasons, the increased diversity of insurance plans that has characterized the growth of managed care has encouraged the development of methods that capture differences in the characteristics of enrollees in different plans. These techniques, or risk adjustment methodologies, are summarized in the Handbook chapter on risk adjustment [Van de Ven and Ellis (2000)].


URL: https://www.sciencedirect.com/science/article/pii/S1574006400801729

Auditory Scene Analysis: Computational Models

G.J. Brown, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5 Segmentation

The segmentation stage of CASA aims to represent the auditory scene in a manner that is amenable to grouping. Frame-based systems omit this stage of processing: they operate directly on the acoustic features described above.

In many CASA systems, the segmentation stage makes temporal continuity explicit. Typical is the approach of Cooke (1993), which tracks changes in instantaneous frequency and instantaneous amplitude in each channel of an auditory filterbank to create ‘synchrony strands’ (Fig. 2). Each strand traces the evolution of an acoustic component (such as a harmonic or formant) in the time-frequency plane. This approach offers advantages over frame-based processing: because frame-based schemes make grouping decisions locally in time, they must resolve ambiguities that would have an obvious solution if temporal continuity were taken into account.
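The continuity idea behind synchrony strands can be illustrated with a simplified tracker. The sketch below is not Cooke's algorithm: it takes precomputed per-frame spectral peak frequencies (invented values) and links a peak to an active strand whenever the frequency change between successive frames is small:

```python
def track_strands(frames, max_jump=20.0):
    """Link spectral peaks across successive frames into 'strands'
    when instantaneous frequency changes smoothly (continuity-based
    tracking in the spirit of synchrony strands)."""
    strands = []   # finished strands: each a list of (frame_index, frequency)
    active = []    # strands still being extended
    for t, peaks in enumerate(frames):
        next_active = []
        unmatched = list(peaks)
        for strand in active:
            last_f = strand[-1][1]
            # extend with the closest peak within the continuity limit
            candidates = [f for f in unmatched if abs(f - last_f) <= max_jump]
            if candidates:
                f = min(candidates, key=lambda x: abs(x - last_f))
                unmatched.remove(f)
                strand.append((t, f))
                next_active.append(strand)
            else:
                strands.append(strand)   # no continuation: strand ends
        for f in unmatched:              # new components start new strands
            next_active.append([(t, f)])
        active = next_active
    strands.extend(active)
    return strands

# Two harmonics gliding slowly, plus a brief interfering tone at 1 kHz.
frames = [[100.0, 200.0], [105.0, 210.0], [110.0, 220.0, 1000.0], [115.0, 230.0]]
strands = track_strands(frames)
```

In this toy input, the two slowly gliding harmonics each yield a four-frame strand, while the brief 1 kHz intrusion is isolated as a one-frame strand that a later grouping stage could assign to a different source.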


Figure 2. Segregation of speech from an interfering telephone ring in the ‘symbolic’ CASA system described by Cooke (1993). The utterance is ‘why were you all weary?’ spoken by a male speaker. A group of harmonically related synchrony strands belonging to the speech source are highlighted in gray. The remaining strands (shown in black) predominantly belong to the telephone sound.

Neural oscillator models of CASA also exploit temporal continuity in the time-frequency plane. In this approach, groups of features that belong to the same acoustic source are represented by a population of neural oscillators whose firing is synchronized. Other groups of features are also represented by synchronized populations but oscillators coding different sound sources are desynchronized. The model of Wang and Brown (1999) employs an architecture consisting of a ‘segmentation layer’ and a ‘grouping layer.’ Each layer is a two-dimensional network of oscillators with respect to time and frequency. In the segmentation layer, lateral connections are formed between oscillators on the basis of local similarities in energy and periodicity. Synchronized populations of oscillators emerge that represent contiguous regions in the time-frequency plane (Fig. 3).
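The oscillator dynamics themselves are beyond a short sketch, but the segmentation layer's emergent result, synchronized populations covering contiguous active regions of the time-frequency plane, can be approximated by connected-component labeling of a thresholded energy map. The grid values below are invented for illustration:

```python
def label_segments(grid, threshold=0.5):
    """Label contiguous (4-connected) above-threshold regions of a
    time-frequency energy map. This approximates the *outcome* of the
    oscillator segmentation layer, not its dynamics."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] > threshold and labels[i][j] == 0:
                current += 1
                stack = [(i, j)]          # flood-fill one contiguous region
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and grid[r][c] > threshold and labels[r][c] == 0):
                        labels[r][c] = current
                        stack.extend([(r + 1, c), (r - 1, c),
                                      (r, c + 1), (r, c - 1)])
    return labels, current

# Frequency-by-time energy map with two separate active regions.
energy = [
    [0.9, 0.8, 0.0, 0.0],
    [0.7, 0.9, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.7],
]
labels, n_segments = label_segments(energy)
```

Each label here plays the role of one synchronized oscillator population; a grouping layer would then decide which populations belong to the same sound source.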


Figure 3. Behavior of the Wang and Brown (1999) system for the sound mixture shown in Fig. 2. Two groups of synchronized oscillators are shown, corresponding to the speech (white) and telephone (gray).


URL: https://www.sciencedirect.com/science/article/pii/B0080430767006641

New Media, News Production and Consumption

Eugenia Mitchelstein, Pablo J. Boczkowski, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

The Consequences of Online News Consumption for Political Knowledge

Segmentation in online news audiences according to their level and type of attention to public affairs content raises the issue of whether access to online news fosters or hinders political knowledge. Scholars disagree on how best to characterize the relationship between Internet information consumption and information acquisition. Some researchers draw on Downs' economic theory of democracy (1957) to propose that the increased accessibility of online news makes it easier for citizens to access political information (Johnson and Kaye, 2003; Tewksbury and Rittenberg, 2009: 197).

But other researchers contend that increased availability of news content online might only increase access to public affairs content for those already engaged in political issues (Margolis and Resnick, 2000). Some scholars have drawn upon the knowledge gap theory to propose that the infusions of information have an uneven effect on citizen knowledge, as the population with higher levels of education tends to acquire this information at a faster rate than those with fewer years of schooling (Jerit et al., 2006). For instance, Yang and Grabe compare knowledge acquisition among South Korean citizens from different educational backgrounds, and conclude that “less educated people comprehended significantly more public affairs news from reading a newspaper than using an online source while more educated people delivered similar comprehension performances across the two media” (2011: 1223).

A different stream of research proposes that use of the Internet might increase the likelihood of unplanned news consumption, as consumers may increase their awareness about public affairs subjects when they are exposed to online information as a consequence of general Web use (Lupia and Philpot, 2005). Rainie and colleagues analyze survey data from the 2004 presidential campaign in the United States and find that “fully half of internet users (50%) said they encountered political news by happenstance browsing” (Rainie et al., 2005: 9).

Scholars have also analyzed whether exposure to online news is more or less conducive to learning political knowledge than use of traditional news media. Some studies show that online information consumption reduces knowledge and recall of current events (Althaus and Tewksbury, 2002; Dalrymple and Scheufele, 2007). Schoenbach, de Waal, and Lauf conduct a survey of the Dutch population and find that “reading print newspapers contributes to awareness of more public events and issues than using online newspapers does” (2005: 253). However, other research indicates that access to online news does not lead to a decrease in information acquisition when compared to traditional news media use (Drew and Weaver, 2006). Kwak and colleagues conducted a survey in Ann Arbor, Michigan, and found that “the Internet, as compared to television, was more useful to younger respondents in their understanding of international affairs” (Kwak et al., 2006: 203).


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868950839

Financial Integration in Europe

T. Jappelli, M. Pagano, in The Evidence and Impact of Financial Globalization, 2013

Clearing and Settlement

The segmentation of the clearing and settlement system entails improperly high costs for cross-border trades. Segmentation depends partly on the persistent fragmentation of stock trading platforms. Some exchanges, such as Deutsche Börse, are in fact vertically integrated, with both a platform to provide trading services and a proprietary clearing and settlement system for the corresponding posttrading services (‘silo structure’). This limits the competition from other trading platforms since new entrants’ customers would still have to use the incumbent's posttrade clearing and settlement system.

Entry foreclosure generates rents for incumbent exchanges, and overcoming this problem is likely to require regulatory action at the EU level. This is recognized both by the EU Commission and by the ECB, which announced in July 2006 that it was considering the desirability of going into the settlement business itself, with a system called ‘Target 2 Securities’ (T2S). The ECB would not be the first public institution to provide central clearing and settlement services. In the United States, the Federal Reserve Board runs a bond settlement business, and both clearing and settlement are the product of the Depository Trust and Clearing Corporation, a user-owned service company created as a direct result of government pressure.


URL: https://www.sciencedirect.com/science/article/pii/B9780123978745000117

Geodemographics

J. Goss, in International Encyclopedia of the Social & Behavioral Sciences, 2001

4 Consumer Profiling

Contemporary segmentation schemes go well beyond the abstract demographic variables of the census to offer ‘Character: not just characteristics.’ One such system, Focus, for example, offers ‘multidimensional portraits of real people,’ revealing ‘lifestyle and mindset’ and promising that marketers will ‘see how customers view themselves’ and ‘meet their customers on a first name basis’ (National Demographics and Lifestyles 1993, p. 2). Segmentation schemes draw upon information on brand preferences, media habits, credit rating, lifestyle, and values to divide neighborhoods in the US, typically into 30–40 categories such as ‘Pools and Patios,’ ‘Shotguns and Pickups,’ ‘Fur and Station Wagons,’ and ‘Hard Scrabble’ (PRIZM) or ‘Cautious Young Couples,’ ‘Sustaining Ethnic Families,’ and ‘Young Accumulators’ (MicroVision). Marketers ‘get acquainted’ with these potential customers through visual scenarios representing consumers engaged in typical activities in front of their homes and/or detailed imaginary vignettes describing their typical names and everyday lives—particularly their consumption practices. Although analysts argue their schemes imply no social judgments, the rankings, weightings, and premiums charged for names of affluent consumers, and the descriptions of segments, give them away: to the marketing industry, social identity and value are lifestyle, and ‘you are what you buy’ (Piirto 1991, p. 23).
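Operationally, assigning a neighborhood to a cluster of this kind reduces to nearest-centroid classification over census and lifestyle variables. The sketch below reuses three of the PRIZM-style labels quoted above, but the two variables and all numeric centroids are invented for illustration:

```python
import math

# Hypothetical profiles: (median_income_thousands, share_families_with_children).
centroids = {
    "Pools and Patios": (120.0, 0.55),
    "Shotguns and Pickups": (45.0, 0.40),
    "Hard Scrabble": (22.0, 0.30),
}

def assign_segment(profile):
    """Assign a neighborhood profile to the nearest segment centroid."""
    return min(centroids, key=lambda name: math.dist(profile, centroids[name]))
```

An affluent suburb such as `(110.0, 0.5)` lands in ‘Pools and Patios’; real schemes use dozens of variables and 30–40 clusters rather than two variables and three clusters.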

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767025298

Social responsibility, social marketing role, and societal attitudes

Rasa Smaliukiene, Salvatore Monni, in Energy Transformation Towards Sustainability, 2020

Segmentation

Energy users' segmentation divides a large population into groups according to their shared values, wants, and needs. According to segmentation theory, people in the same group are likely to respond to behavioral interventions similarly. Typically, any population is segmented according to demographic characteristics (such as age, gender, ethnicity, etc.); however, as technologies of the Internet era shape everyday behavior, energy users' segmentation is based more on attitudes and lifestyles than on wants and needs. As a result, segmentation of energy users identifies one or more segments in the target audience (Thøgersen, 2017), as there is an in-depth understanding that it is impossible to be effective across the whole population. The population therefore has to be segmented into groups, and only a few segments can be targeted with the social marketing mix.

As already mentioned, social marketing adopts the methods of commercial marketing, yet its purpose is very different. In business, the same segments are targeted with a variety of accompanying products they might prefer to use. In contrast, social marketing targets behavior with only one goal, and this goal is usually associated with a decrease in consumption. A few segmentation approaches have been developed to understand how a population can be segmented according to its attitude toward the environment. As an example, Table 14.1 presents segmentations of the UK and US markets. According to these segmentation examples, energy users can be divided into three large groups based on their attitude toward the environment—environmentalists, the environmentally concerned, and the disinterested. How large these groups are and how many segments compose each group depends on the values of the society at large. As we can see from the UK and US segmentation results, UK society has more segments that are environmentalist and environmentally concerned. Meanwhile, US society's segmentation identifies more unique segments that are indicated as disinterested in the environmental impact of their consumption.

Table 14.1. Segmentation of UK and US populations according to attitude toward environment and climate change.

Segments of the UK population | Segments of the US population | Segment description
Positive greens; Waste watchers | Liberal greens | Environmentalists: are very worried about environmental issues, feel interconnected with nature, and try to conserve whenever they can
Concerned consumers; Sideline supporters | Outdoor greens; Religious greens | Environmentalists: are very worried about environmental issues; environment-friendly behavior makes them feel better
Cautious participants; Long-term restricted | Middle-of-the-roaders | Environmentally concerned: are generally concerned about the environment, but behave environment-friendly only because of constraints
Stalled starters; Honestly disengaged | Homebodies; Disengaged; Outdoor browns; Religious browns; Conservative browns | Disinterested: tend toward apathy when it comes to environmental issues; environmental issues do not resonate with them

Based on the data from Public Opinion and the Environment: The Nine Types of Americans, Yale School of Forestry & Environmental Studies, 2015, and A Framework for Pro-environmental Behaviours, Defra, January 2008.
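The three broad attitude groups in Table 14.1 can be sketched as a simple rule-based classifier. This is a hypothetical illustration only: the attitude score (0–10) and its cut-offs are invented, not taken from the Defra or Yale studies.

```python
# Hypothetical sketch: assign survey respondents to the three broad
# attitude groups of Table 14.1 from a single attitude score (0-10).
# Score bands are illustrative assumptions, not from the chapter.

def classify_attitude(score: float) -> str:
    """Map an environmental-attitude score to a broad segment."""
    if score >= 7:
        return "environmentalist"
    if score >= 4:
        return "environmentally concerned"
    return "disinterested"

def segment_population(scores):
    """Count respondents per broad segment."""
    segments = {}
    for s in scores:
        label = classify_attitude(s)
        segments[label] = segments.get(label, 0) + 1
    return segments

print(segment_population([9.1, 2.0, 5.5, 7.0, 3.9, 6.2]))
```

In practice the UK and US schemes derive their segments from multivariate survey data rather than a single score, but the targeting logic—assign each person to exactly one segment, then address only a few segments—is the same.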

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128176887000148

Brains for All the Ages

John E. Richards, Wanze Xie, in Advances in Child Development and Behavior, 2015

3.3 Nonmyelinated Axon Tissue Segmentation in Infants

Tissue segmentation of brain images from infants poses special challenges. The GM and WM contrast-to-noise ratio (CNR) for infant MRI is significantly lower than the CNR for adult brain MRI (Mewes et al., 2006). This results in poor resolution across the spatial aspects of the MRI volume and consequent difficulty in segmenting partial volume regions. During the first 2 years of life, the WM/GM contrast is reversed (as compared to adult contrast) on T1- and T2-weighted images and gradually changes toward the MRI contrast of adult brains (Leppert et al., 2009; Paus et al., 1999; Xue et al., 2007). At around 9 months of age, GM and WM demonstrate roughly the same intensities and cannot be segmented by the sole use of intensity differentiation (Barkovich, 2005; Paus et al., 1999). Additionally, the brain in infants consists of a large amount of nonmyelinated axons (NMA). The T1 relaxation times for NMA and GM are approximately equivalent, so that “neuronal cell bodies” and “nonmyelinated axons” appear the same on T1W scans (e.g., Figures 2 and 3, youngest ages). Through the identification of myelinated and NMA, regional changes of WM and important maturational processes can be distinguished and quantified (Aubert-Broche, Fonov, Leppert, Pike, & Collins, 2008; Barkovich, 2005; Weisenfeld & Warfield, 2009). By about 2 years of age, the contrast found in the developing brain more closely resembles that of an adult brain due to the progression of increasing myelination and decreasing water content (Leppert et al., 2009; Rutherford, 2002).

Nonmyelinated and myelinated axons and cortical and subcortical GM have been analyzed separately in the neonatal brain (Anbeek et al., 2008; Hüppi et al., 1998; Prastawa et al., 2005; Weisenfeld & Warfield, 2009). The different tissue types in the infant brain exhibit significant levels of intensity inhomogeneity and variability, in addition to overlapping intensity distributions (Prastawa et al., 2005; Shi et al., 2010). Some researchers have developed methods to distinguish myelinated and NMA in MRIs. Prastawa et al. (2005) treated myelination as a fractional property, such that the MRI intensities reflected the degree of myelination in partial volume estimates. This procedure was somewhat successful in differentiating myelinated and NMA in the newborn brain. However, the dividing boundaries between the two tissue types were generally ambiguous (Prastawa et al., 2005; Rutherford, 2002), and the results showed mislabeled partial volume voxels (Xue et al., 2007). Others have expanded on the segmentation methodology of Prastawa et al. (2005) through the use of priors or iterative algorithms (Gilmore et al., 2007; Weisenfeld & Warfield, 2009). Hüppi et al. (1998) differentiated between myelinated and NMA in newborn brains and found a fivefold increase in the myelinated WM volume between 35 and 41 weeks postconception. Studies have demonstrated significant reductions (~ 35%) of myelination in preterm infants when compared to term infants (Inder, Warfield, Wang, Hüppi, & Volpe, 2005; Mewes et al., 2006). Neonatal studies showing early rapid developmental changes highlight the importance of delineating the complete progression of the myelination process.

We are working on procedures to create segmented priors for the reference data with GM, WM, CSF, OM, and NMA in the MRI volumes across infant age groups. Our segmentation technique uses both T1W and T2W classification (Shi et al., 2010) to aid in tissue discrimination. Myelinated axons appear as "white matter" in T1W volumes and dark in T2W scans (adults, older children). NMA appear in the T2W volumes as slightly brighter intensity voxels than GM (young infants). Figure 11 shows our procedure applied to two infants and the average MRI template for infants. The top row shows the identification of WM from the two-class model for an MRI from a 2-week-old participant. The two-class model categorizes WM successfully but classifies NMA with GM (top row, second column, blue (black in the print version) color). We then use the T2-weighted scan (third column) to identify the NMA (fourth column, green (gray in the print version) color) and create a three-class model (GM, WM, and NMA; far right column). Figure 11 (second row) shows the results of this classification for a 6-month-old. Note the higher proportion of WM in the infant brain at this age. The third row shows the results of this analysis for the average MRI templates for infants ranging from 3 to 7.5 months of age.
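The two-stage logic described above—separate WM from GM-like tissue on T1W, then use T2W to split the GM-like class into GM and NMA—can be sketched as a nearest-mean voxel classifier. This is a minimal illustration, not the authors' pipeline: the class-mean intensities are invented, and a real implementation would estimate them from the data (e.g., with a mixture model) and handle bias fields and partial volumes.

```python
import numpy as np

# Minimal sketch of two-stage T1W/T2W tissue labelling. Stage 1: nearest-mean
# split of voxels into WM vs GM-like on T1W intensity. Stage 2: split the
# GM-like voxels into GM vs NMA on T2W intensity, exploiting that NMA appear
# slightly brighter than GM on T2W in young infants. Means are hypothetical.

T1_MEANS = {"gm_like": 60.0, "wm": 110.0}   # assumed T1W class means
T2_MEANS = {"gm": 70.0, "nma": 95.0}        # assumed T2W class means

def three_class(t1w, t2w):
    """Label each voxel 'GM', 'WM', or 'NMA' from paired T1W/T2W intensities."""
    t1w = np.asarray(t1w, dtype=float)
    t2w = np.asarray(t2w, dtype=float)
    # Stage 1 (two-class model): WM vs GM-like; NMA are still lumped with GM.
    wm = np.abs(t1w - T1_MEANS["wm"]) < np.abs(t1w - T1_MEANS["gm_like"])
    # Stage 2: within GM-like voxels, brighter-on-T2W voxels become NMA.
    nma = (~wm) & (np.abs(t2w - T2_MEANS["nma"]) < np.abs(t2w - T2_MEANS["gm"]))
    return np.where(wm, "WM", np.where(nma, "NMA", "GM"))

# Three voxels: bright T1W -> WM; dark T1W + dark T2W -> GM;
# dark T1W + bright T2W -> NMA.
print(three_class([115, 55, 58], [60, 65, 98]))
```

The point of the sketch is structural: the T1W channel alone cannot separate NMA from GM (their T1 relaxation times are approximately equivalent), so the second channel is what turns the two-class model into a three-class one.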


Figure 11. Axial slices demonstrating the NMA segmentation. The top and middle rows show the segmentation for a 2-week-old and 6-month-old, respectively. The columns from left to right are the T1W brain, "GM/WM" segmentation, the T2W and the NMA classified in the T2W, and the three-class segmentation (GM, WM, and NMA). The last row shows the change in the three-class model from 3 to 7.5 months for average MRI volumes and average probability values. The crosshairs on the coronal slices are centered on the anterior commissure. The brightness of the colors for the GM/WM and GM/WM/NMA represents the probability that the voxel belongs to the category (GM, blue (black in the print version); WM, yellow (white in the print version); NMA, green (gray in the print version)).

We examined the changes in the NMA volume across the first year. The identification of GM with two-class models is compromised, since NMA and GM fall into the same category in the two-class (GM and WM) segmentation. The changes in GM over age in the infancy period (e.g., Figure 8) therefore overestimate the "gray matter" (neuronal cell bodies, nuclei). Figure 12 shows a similar analysis of the tissue volumes for infants from 3 to 12 months, separating the two-class GM into GM (NMA) and NMA. The changes in WM are the same as before, since myelinated axons are correctly identified with the two-class model. There is a change in volume of the NMA through the first 6 months, likely due to overall changes in axonal growth and synaptogenesis. However, the NMA volume begins to drop by 7.5–9 months and should decrease further in the second year.
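The confound described above amounts to a simple volume correction: the two-class "GM" volume includes NMA, so subtracting the separately segmented NMA volume leaves the GM attributable to neuronal cell bodies alone. The volumes below are invented for illustration.

```python
# Worked example of the GM/NMA confound: subtract the separately segmented
# NMA volume from the two-class GM volume. Volumes (mL) are illustrative.

def corrected_gm(two_class_gm_ml: float, nma_ml: float) -> float:
    """Remove the NMA contribution from a two-class GM volume."""
    if nma_ml > two_class_gm_ml:
        raise ValueError("NMA volume cannot exceed the two-class GM volume")
    return two_class_gm_ml - nma_ml

print(corrected_gm(450.0, 120.0))  # -> 330.0
```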


Figure 12. Gray matter, white matter, and nonmyelinated axon segmented tissue volume in infants as a function of age. The "GM" is from the GM/WM (OM) two-class segmentation, and the "GM (NMA)" is from the GM/NMA segmentation. The error bars represent the standard error of the mean (SE).

The changes we report in WM volume are consistent with other reports, both from MRI analysis (Deoni et al., 2011) and other methods. The rapidly changing myelination likely affects integrated neurological or behavioral functions due to communication across different brain areas (Casey et al., 2000; Deoni et al., 2011). The results of the NMA volume analysis are new. Changes in GM volume have been interpreted as being primarily due to synaptogenesis. This should have a direct influence on behavioral plasticity during this age range as the emergence and pruning of synaptic connections results in learning, language development, memory, and developmental canalization. However, our analysis shows a more gradual increase in GM volume. The measurement of GM development in the first few months is confounded with volumetric increases in nonmyelinated axonal growth; later, as axonal myelination shifts tissue into WM, apparent changes in GM partly reflect decreases in NMA. We cannot specifically detail what GM–NMA–behavioral relations would emerge from the distinction between GM and NMA, but our methods should result in a refined model of brain–behavior changes over this time period.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/S0065240714000299

Stochastic Approximation Algorithms for Estimation of Spatial Mixed Models

Hongtu Zhu, ... Bradley S. Peterson, in Handbook of Latent Variable and Related Models, 2007

Example 2 (Image segmentation)

Image segmentation is used to classify an image into a set of nonoverlapping regions {R1, …, RK}. We consider a special case of SMMs as follows. The observation at a particular pixel s can be written as

(4)  y(s) = ∑_{k=1}^{K} Φ(s, β_k) f_k(s) + ε(s),

where ε(s) ~ N(0, φ⁻¹), Φ(·,·) is a parametric model, and β_k is the parameter vector for R_k. In addition, f(s) = (f_1(s), …, f_K(s)), f_k(s) ∈ {0, 1}, ∑_{k=1}^{K} f_k(s) = 1, and f_k(s) = 1 if and only if s ∈ R_k. Thus, μ(s) = E[y(s)|f] = ∑_{k=1}^{K} Φ(s, β_k) f_k(s). We further assume that the joint distribution of the label field f = {f(s): s = 1, …, n} is given by

p(f|τ) = exp{τ ∑_{s_i ∼ s_j} δ(f(s_i), f(s_j)) − log C(τ)},

where the summation is taken over all nearest-neighbor pairs (s_i ∼ s_j), δ(x, z) is the Kronecker function equaling 1 when x = z and 0 otherwise, and τ is the parameter controlling the granularity of the field. In addition, C(τ) is obtained by summing over all possible configurations f (i.e., K^n terms).
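The unnormalized part of this Potts-type label prior is easy to compute on a label grid: τ times the number of agreeing nearest-neighbor pairs. The sketch below assumes a 2-D image with 4-connected neighbors and omits log C(τ), which would require summing over all K^n label configurations.

```python
import numpy as np

# Sketch of the Potts-type label prior: p(f | tau) is proportional to
# exp(tau * sum of delta(f(s_i), f(s_j)) over nearest-neighbor pairs).
# We compute the unnormalized log-probability on a 2-D label grid with
# 4-connectivity; the intractable log C(tau) term is omitted.

def potts_log_prob_unnorm(labels: np.ndarray, tau: float) -> float:
    """tau times the count of agreeing nearest-neighbor label pairs."""
    agree = 0
    agree += np.sum(labels[:-1, :] == labels[1:, :])   # vertical neighbors
    agree += np.sum(labels[:, :-1] == labels[:, 1:])   # horizontal neighbors
    return tau * float(agree)

f = np.array([[0, 0],
              [0, 1]])
# Two of the four neighbor pairs agree, so the value is tau * 2.
print(potts_log_prob_unnorm(f, 0.5))  # -> 1.0
```

Larger τ rewards configurations with large homogeneous regions, which is exactly the granularity-control role attributed to τ in the text.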

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444520449500215

Is the process of dividing a market into subsets that behave in the same way or have similar needs?

Market segmentation is the process of dividing a broad consumer or business market, normally consisting of existing and potential customers, into sub-groups of consumers (known as segments) based on some type of shared characteristics.

Is the process of dividing a market into distinct groups of buyers who have different needs characteristics or behavior?

Market segmentation is the process of dividing a market of potential customers into groups, or segments, based on different characteristics.

Is the process of dividing a market into smaller groups of buyers with distinct needs characteristics or behaviors who might require separate products or marketing mixes?

Market segmentation is the process of identifying distinct groups and sub-groups of customers in the market who have distinct needs, characteristics, preferences, and/or behaviors, and who require separate product and service offerings and a corresponding marketing mix.

What is the process of dividing the total market into several groups with similar characteristics?

Market Segmentation Approach. Market segmentation is the process of dividing a total market into market groups consisting of people who have relatively similar product needs; that is, there are clusters of needs.
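The definitions above share one mechanical core: partition customers into groups by shared characteristics. A toy sketch, with invented customers and an invented segmentation key (age band plus region):

```python
# Toy illustration of market segmentation: partition customers into segments
# by shared characteristics. Customers, age bands, and regions are invented.
from collections import defaultdict

customers = [
    {"name": "Ana",  "age": 23, "region": "north"},
    {"name": "Ben",  "age": 27, "region": "north"},
    {"name": "Caro", "age": 45, "region": "south"},
    {"name": "Dee",  "age": 41, "region": "south"},
]

def segment_key(customer):
    """Shared characteristics defining a segment: age band + region."""
    band = "18-34" if customer["age"] < 35 else "35-54"
    return (band, customer["region"])

segments = defaultdict(list)
for c in customers:
    segments[segment_key(c)].append(c["name"])

# Each segment is a cluster of customers with similar characteristics,
# which could then be targeted with a separate offering and marketing mix.
print(dict(segments))
```

Real segmentation schemes replace this hand-written key with demographic, psychographic, or behavioral variables, but the output is the same kind of object: a partition of the market into internally similar groups.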