Applied Mathematics
p-ISSN: 2163-1409 e-ISSN: 2163-1425
2019; 9(3): 67-81
doi:10.5923/j.am.20190903.01

Edward Obeng Amoako
Department of Mathematics, KNUST, Kumasi, Ghana
Correspondence to: Edward Obeng Amoako, Department of Mathematics, KNUST, Kumasi, Ghana.
Copyright © 2019 The Author(s). Published by Scientific & Academic Publishing.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Small and medium-scale enterprises (SMEs) occupy a central place in the economies of most developing countries. SMEs are noted as the bedrock of the emerging private sector in developing countries, and government assistance is paramount to sustaining and growing the sector's contribution to the national economy (World Bank, 2000). Despite efforts to enhance the performance of this sector, little or no attention is given to business recovery preparedness for when uncertain events occur in the course of business. Losses caused by events such as occupational hazards, theft and burglary, traffic and motor accidents, fire outbreaks, accidental damage to property, harm to lives, and other unforeseen events have slowed the sector's activities and, in some instances, have forced businesses to close. The need to acquire insurance cover is therefore cardinal to the public and private sectors and to stakeholders, and most importantly to the success and longevity of SMEs in the country. The basic motive of this research is to establish the predictors that influence SMEs in the Kumasi metropolis to patronise non-life insurance policies as a risk management tool.
Keywords: Life insurance, Risk Management, Life Assurance, Non-Life Insurance, Developing Country
Cite this paper: Edward Obeng Amoako, Patronage of Non-Life Insurance Policies: The Case of SMEs in Kumasi Metropolis, Applied Mathematics, Vol. 9 No. 3, 2019, pp. 67-81. doi: 10.5923/j.am.20190903.01.
SPSS calculates the coefficients, which are interpreted similarly to linear regression coefficients.

Advantages of logistic regression

Logistic regression is highly effective at estimating the probability that an event will occur. For this reason, it has been applied in medical research, where it is used to estimate the likelihood of individuals recovering from surgery. Logistic regression differs from other analytic techniques in a number of ways. As the above examples indicate, logistic regression creates a model for the likelihood that an event occurs given a set of conditions, and this conditional relationship is something logistic regression can test directly.

Logistic regression offers the same advantages as linear regression, including the ability to construct multivariate models and include control variables. It can analyse the same two types of independent variables as linear regression: numeric and dummy variables. In addition, logistic regression offers a new way of interpreting relationships, by examining the relationship between a set of conditions and the probability of an event occurring.

Assumptions of Logistic Regression

i. Logistic regression does not assume a linear relationship between the dependent and independent variables.
ii. The dependent variable must be a dichotomy (2 categories).
iii. The independent variables need not be interval, nor normally distributed, nor linearly related, nor of equal variance within each group.
iv. The categories (groups) must be mutually exclusive and exhaustive; a case can only be in one group and every case must be a member of one of the groups.
v. Larger samples are needed than for linear regression because maximum likelihood coefficients are large-sample estimates. A minimum of 50 cases per predictor is recommended. Likewise, a highly skewed numeric variable is not well suited to linear regression analysis, because linear regression requires a normal distribution.

Relationships through Probabilities

Logistic regression predicts likelihoods, measured as probabilities, odds, or log-odds. People often speak of "probabilities" and "odds" as being the same thing, but there is an important distinction. A probability is the ratio of the number of occurrences to the total number of possibilities, whereas the odds is the ratio of the number of occurrences to the number of non-occurrences. It is easy to convert back and forth between probability and odds, as they carry the same information. Probabilities range from 0 to 1, whereas odds range from 0 to infinity. Odds of one indicate equal probability of occurrence and non-occurrence (0.50). Odds greater than 1 indicate that occurrence is more likely than non-occurrence; odds less than 1 indicate that occurrence is less likely than non-occurrence. Distinguishing probabilities from odds is very important, not only for accuracy in reporting findings, but also for the interpretation of the logistic regression coefficients and graphs that we will be creating. Note that even when findings are reported as odds, they can be converted to probabilities using the following formula:

$$P = \frac{\text{odds}}{1 + \text{odds}}$$

The logistic regression model itself expresses this probability as

$$P(Y) = \frac{e^{b_0 + b_1X_1 + \cdots + b_nX_n}}{1 + e^{b_0 + b_1X_1 + \cdots + b_nX_n}}$$

or, equivalently,

$$P(Y) = \frac{1}{1 + e^{-(b_0 + b_1X_1 + \cdots + b_nX_n)}}$$

where
P: probability of Y occurring
e: base of the natural logarithm
b_0: intercept on the y-axis
b_1: gradient (slope) for X_1
b_n: regression coefficient of X_n
X_1, ..., X_n: predictor variables that predict the probability of Y

Log of the Odds and the Odds Ratio

The logits (log-odds) are the b coefficients (the slope values) of the regression equation. The slope can be interpreted as the change in the average value of Y for one unit of change in X. Logistic regression calculates changes in the log-odds of the dependent variable, not changes in the dependent variable itself as OLS regression does. For a dichotomous variable, the odds of membership in the target group equal the probability of membership in the target group divided by the probability of membership in the other group. Odds values range from 0 to infinity and tell you how much more likely it is that an observation is a member of the target group rather than of the other group. Another important concept is the odds ratio (OR), which estimates the change in the odds of membership in the target group for a one-unit increase in the predictor. It is calculated as $e^{b}$, using the regression coefficient of the predictor as the exponent.

Omnibus Tests of Model Coefficients

The overall significance is tested using the Omnibus Tests of Model Coefficients, which is derived from the likelihood of observing the actual data under the assumption that the fitted model is accurate. There are two hypotheses to test in relation to the overall fit of the model: $H_0$: all regression coefficients are zero ($b_1 = b_2 = \cdots = b_n = 0$), versus $H_1$: at least one coefficient is nonzero.
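To make these conversions concrete, the short Python sketch below (illustrative only; the coefficient and predictor values are hypothetical, not estimates from this study) converts between probability and odds, evaluates the logistic model for a given set of coefficients, and obtains an odds ratio as $e^{b}$.

```python
import math

def odds_from_probability(p):
    """Odds = occurrences / non-occurrences = p / (1 - p)."""
    return p / (1.0 - p)

def probability_from_odds(odds):
    """Probability = odds / (1 + odds)."""
    return odds / (1.0 + odds)

def logistic_probability(b0, coefficients, predictors):
    """P(Y) = 1 / (1 + e^-(b0 + b1*X1 + ... + bn*Xn))."""
    linear = b0 + sum(b * x for b, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + math.exp(-linear))

# Hypothetical coefficients, for illustration only.
b0, b = -1.2, [0.8, 0.35]
p = logistic_probability(b0, b, [1.0, 2.0])    # predictors X1 = 1, X2 = 2
print(round(p, 3))                             # predicted probability of Y (~0.574)
print(round(odds_from_probability(p), 3))      # the same result expressed as odds

# Odds ratio for a one-unit increase in X1: OR = e^(b1).
print(round(math.exp(b[0]), 3))                # e^0.8 ~ 2.226
```

As described above, the probability returned by the model always lies between 0 and 1, while the corresponding odds run from 0 to infinity.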
Test of Significance

Hosmer and Lemeshow test

An alternative to the model chi-square is the Hosmer and Lemeshow test, which divides subjects into 10 ordered groups and then compares the number actually in each group (observed) to the number predicted by the logistic regression model (expected). The 10 ordered groups are created based on estimated probability: those with estimated probability below 0.1 form one group, and so on, up to those with probability 0.9 to 1.0. Each of these categories is further divided into two groups based on the actual observed outcome (success, failure). The expected frequencies for each of the cells are obtained from the model. A probability (p) value is computed from the chi-square distribution with 8 degrees of freedom to test the fit of the logistic model.

If the p-value of the Hosmer and Lemeshow goodness-of-fit statistic is greater than 0.05, as we want for well-fitting models, we fail to reject the null hypothesis that there is no difference between observed and model-predicted values, implying that the model's estimates fit the data at an acceptable level. That is, well-fitting models show non-significance on the Hosmer and Lemeshow goodness-of-fit test; this desirable outcome indicates that the model's predictions do not differ significantly from the observed values.

Test for goodness of fit under the logistic regression model (Hosmer-Lemeshow test):
H0: The model fits the data well.
H1: The model does not fit the data well.

Figure 1. The Sub-Metropolitan Map of the Kumasi Metropolitan Area (Source: Town and Country Planning Department, 2017)
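A minimal sketch of the Hosmer-Lemeshow procedure described above is given below, assuming NumPy and SciPy are available. For simplicity it uses the common equal-size decile grouping rather than the fixed cutpoints (below 0.1, 0.1-0.2, ...) described in the text; both groupings are used in practice, and the statistic is referred to a chi-square distribution with 8 degrees of freedom for 10 groups.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit test (equal-size decile variant).

    y_true: 0/1 observed outcomes; y_prob: model-predicted probabilities.
    Subjects are sorted into n_groups ordered groups by estimated probability;
    observed and expected counts of successes and failures are compared with a
    chi-square statistic on n_groups - 2 degrees of freedom.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    order = np.argsort(y_prob)
    y_true, y_prob = y_true[order], y_prob[order]
    groups = np.array_split(np.arange(len(y_prob)), n_groups)

    stat = 0.0
    for idx in groups:
        n = len(idx)
        obs_success = y_true[idx].sum()
        exp_success = y_prob[idx].sum()       # assumes no group has expected count 0
        stat += (obs_success - exp_success) ** 2 / exp_success
        stat += ((n - obs_success) - (n - exp_success)) ** 2 / (n - exp_success)

    df = n_groups - 2                         # 8 df for 10 groups
    return stat, chi2.sf(stat, df)

# Example with simulated data; p-value > 0.05 -> fail to reject H0 (good fit).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * x)))
y = rng.binomial(1, p_true)
print(hosmer_lemeshow(y, p_true))
```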
[Figure omitted. Source: Researcher's illustration from field survey, 2017]
[Figure omitted. Source: Researcher's illustration from field survey, 2017]
[Figure omitted. Source: Researcher's illustration from field survey, 2017]
[Table omitted. Source: Researcher's computation from field survey, 2017]
$H_0$: $b_1 = b_2 = \cdots = b_7 = 0$ versus $H_1$: $b_j \neq 0$ for at least one $j$
The chi-squared value of 22.187 with 7 degrees of freedom and a significance value of 0.002 leads to the rejection of the null hypothesis, and we conclude that the model coefficients are significantly different from zero.

Hosmer and Lemeshow test

This is a reliable goodness-of-fit test of the model in the SPSS output. The model is a good fit of the data when the significance value is greater than 0.05.

Hypothesis testing:
H0: The model fits the data well.
H1: The model does not fit the data well.
Therefore, the null hypothesis is not rejected, and we conclude that the observed counts are not significantly different from those predicted by the model; hence the overall model is a good fit of the data.
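As a quick check of the significance values reported above, the omnibus chi-square of 22.187 on 7 degrees of freedom can be converted to a p-value with SciPy; the Hosmer-Lemeshow statistic itself is not reproduced here, since only its decision is reported.

```python
from scipy.stats import chi2

# Omnibus test of model coefficients: chi-square = 22.187, df = 7.
p_omnibus = chi2.sf(22.187, 7)
print(round(p_omnibus, 3))   # ~0.002 < 0.05 -> reject H0: coefficients differ from zero
```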