Society for Alliance, Fidelity and Advancement

SAFA Calculator v1.1



In academic publishing there is no standard decision-making process. Commonly the process is set by an editorial board led by an editor. As a result, decisions vary, and the same manuscript submitted to two different journals can simultaneously receive an ‘acceptance’ recommendation from one and a ‘rejection’ from the other. This is called the ‘misery of recommendation’.


Of all forms of academic publication, journal publication warrants more care and attention than the others. A journal processes most potential submissions to assess their suitability for possible publication, and an evaluation process is the most common technique for facilitating this decision. Often an end-decision is made by an editor based on two or three double-blind evaluation reports; an open evaluation method is seldom used. The evaluation reports frequently differ from one another in their recommendations, and an editor’s final decision based on these reports may be affected by human-factor bias.


At the evaluator’s stage, a recommendation on a manuscript reflects an individual opinion. There is therefore little opportunity to challenge the comments beyond appealing to the experience and expertise of the evaluator. Because of all these issues, making a decision is, to a great extent, difficult for an editor.


An editor’s or editorial board’s decision is in fact a synthesized outcome of the evaluation reports. Since the basis of the decision is the evaluation process, one must be aware of that process’s shortfalls. A decision based on an instrument-based evaluation is not very accurate because of human-factor bias: whatever comments the individual evaluators give, the final decision depends on the editor’s or editorial board’s view. A manuscript may have content worth publishing yet be rejected because of a single evaluator’s view. In other words, the traditional evaluation process does not produce one-hundred-percent accurate decisions.


The traditional approach to decision making depends solely on instruments or on evaluators’ notes on certain aspects of a manuscript. To date this is the best method available to publication authorities. Here we introduce a new quantitative approach that addresses the limitations of the traditional decision-making process by minimizing bias. This new approach combines the traditional Instrument Based Assessment (IBA) approach with mathematical tools. It is called the “Standardized Acceptance Factor Average (SAFA™)” and provides many conveniences in making an end-decision. The details of this new approach are discussed in the following sections.


A manuscript is a written account of scientific work with several aspects, such as logic and consistency. Does an evaluator always have enough knowledge to assess all of them? Even if a reviewer is sufficiently qualified, how his or her opinion on the different aspects of a manuscript is to be measured and summarized remains unanswered. However it is done, the human factor becomes one of the most worrying issues in the evaluation process. Incorporating an evaluator’s (reviewer’s) efficiency into the decision may produce a better outcome than relying entirely on their ‘Yes’ or ‘No’.


The traditional review process uses a standard review format known as the 'Instrument Based Assessment' approach, or 'IBA'. The IBA is at present the most popular approach to decision making in publication. The IBA tools, that is, the instruments used, differ from one journal to another, which raises a further issue of variability in quality. In the IBA there is always a list of recommendation options following the principal instrument of several items; the number of items typically ranges from 6 to 12. The purpose of the recommendation list is to converge the scoring of each item into a common recommendation such as 'accept' or 'reject' (often there are more options). Making a recommendation from a few options on the basis of the principal instrument introduces another bias: by examining many reports I found that there are always inconsistencies between an evaluator's recommendation and the item scoring. This bias is mostly irremovable in the traditional end-decision-making approach and is named the 'inconsistent recommendation bias'.
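To make the IBA structure concrete, here is a minimal sketch. The function name, the 1–5 item rating scale and the 0–1 normalization are illustrative assumptions, not part of any particular journal's instrument; the point is only that the item scoring and the chosen recommendation are produced independently, which is where the inconsistency can arise.

```python
# Hypothetical sketch of an IBA instrument: each of 6-12 items is rated,
# the ratings are normalized to a 0-1 review score, and the evaluator
# separately picks a recommendation from a fixed list. The score and the
# recommendation can disagree ("inconsistent recommendation bias").

def review_score(item_ratings, max_rating=5):
    """Normalize a list of item ratings (e.g. 1-5 each) to a 0-1 review score."""
    if not item_ratings:
        raise ValueError("instrument must contain at least one item")
    return sum(item_ratings) / (max_rating * len(item_ratings))

ratings = [4, 3, 5, 2, 4, 3, 4, 4]                 # eight items, each rated 1-5
score = review_score(ratings)                      # 29 / 40 = 0.725
recommendation = "Accept with minor correction"    # chosen independently by the evaluator
```

Nothing in the instrument forces the free-text recommendation to match the computed score, which is exactly the gap the text describes.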


An end-decision is substantially directed by the recommendations in the evaluation reports. This again causes bias: if, for instance, there are three review reports whose recommendations are not identical, the editor must make a decision that may not be completely accurate. This issue is termed 'end-decision bias'.


In conclusion, the IBA is associated with three major biases: 'reviewers' attribute bias', 'inconsistent recommendation bias', and 'end-decision bias'. These three biases reduce the efficiency of decision making in academic publication. The SAFA™ system has been proposed in order to minimize the total bias caused by these three issues.


Users who have purchased the software must download the registration form and return it to the following email address to activate their copy.

SAFA Calculator

Email the registration form to

SAFA Calculator v1.1 has 2 Editions - Personal Edition and Corporate Edition

Email to to purchase a copy


The Standardized Acceptance Factor Average (SAFA™) is a mathematical framework that facilitates the decision to accept or reject a submission for possible publication. Such decisions are generally made on the basis of evaluators’ opinions, but since not all evaluators stand at the same level, the decision may need adjustment. To estimate the SAFA, a standard double-blind peer review process is used with the evaluator’s experience and expertise incorporated. The SAFA™ can be an option for eliminating the ‘misery of recommendation’.


The estimation of the SAFA™ depends entirely on a structured evaluation form, which a publication authority can adapt as required. In this section we discuss the development of the SAFA briefly and without technical jargon.


A review of 20 evaluation reports submitted to the International Journal of Management and Entrepreneurship (ISSN 1823-3538) and the International Journal of Business and Management Research (ISSN 1985-3599) shows that review scores are often inconsistent with the recommendation (‘accept’ or ‘reject’). For instance, one evaluator recommended ‘Accept with minor correction and re-review’ for a review score of 0.73, while another made the same recommendation for a score of 0.43. This inconsistency makes the process less efficient; it can be corrected by adjusting each review score according to the reviewer’s efficiency (experience and expertise) and applying an averaging technique to minimize the bias.
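The adjustment described above can be sketched as an efficiency-weighted average. The exact SAFA formula is not published here, so the function below is only an illustration of the weighting-and-averaging idea; the 0–1 efficiency factors and the example figures are assumptions.

```python
# Illustrative sketch (not the proprietary SAFA formula): each review score
# is weighted by a 0-1 reviewer-efficiency factor reflecting experience and
# expertise, and the weighted scores are averaged.

def weighted_average_score(reviews):
    """reviews: list of (review_score, efficiency) pairs, both on a 0-1 scale."""
    total_weight = sum(eff for _, eff in reviews)
    if total_weight == 0:
        raise ValueError("at least one reviewer must have nonzero efficiency")
    return sum(score * eff for score, eff in reviews) / total_weight

# Three double-blind reports with differing scores and reviewer efficiencies:
reviews = [(0.73, 0.9), (0.43, 0.6), (0.58, 0.8)]
adjusted = weighted_average_score(reviews)   # more weight to the more efficient reviewers
```

Under this weighting, the 0.43 score from the less experienced reviewer pulls the average down less than it would in a plain mean, which is the sense in which the efficiency adjustment "corrects" the inconsistency.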


The SAFA™ Calculator v1.1 is a tool for better decision making in academic publication. It is mainly composed of a data entry panel and a decision display panel. The data entry panel consists of four parts: a manuscript scroll panel, a determinant panel, a correction panel and a decision display panel. The display panel produces the SAFA. The figure above shows a screenshot of the SAFA™ Calculator v1.1. A complete introduction to the panels and parameters of the SAFA Calculator v1.1 can be found in the user’s guide provided with this book and software. The SAFA™ can also be used to rank academic articles, and its online use can save substantial time for evaluators and editors alike.


A general decision rule for the SAFA is that if a manuscript has a SAFA value of 0.5 or more, the paper can be accepted for publication from the evaluation point of view; further justification for acceptance or rejection rests with the editor. If the SAFA falls between 0.40 and 0.49, the article can be considered for further revision and possible publication. The SAFA can also be used to categorize articles by value: for example, an article whose SAFA falls between 0.30 and 0.39 may be selected for proceedings. It should be remembered that the standard maintained for accepting an article is entirely an independent decision of the respective organization. The SAFA™ (corporate version) allows the cut-off points to be changed and adjusted to the requirements of a discipline.
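The decision rule can be written out directly from the cut-off points above. The text gives closed ranges (0.40–0.49, 0.30–0.39); treating values in the gaps (e.g. 0.495) as belonging to the lower band is an assumption in this sketch, and the category labels are mine.

```python
# Default decision rule from the text; the corporate version allows these
# cut-off points to be changed per discipline.

def safa_decision(safa):
    """Map a SAFA value to the decision category described in the text."""
    if safa >= 0.5:
        return "accept"        # subject to the editor's further justification
    if safa >= 0.40:
        return "revise"        # further revision and possible publication
    if safa >= 0.30:
        return "proceedings"   # may be selected for proceedings
    return "reject"

safa_decision(0.52)   # -> "accept"
```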


The following table shows the SAFA score of each article included in the inaugural issue of the International Journal of Business and Management Research (ISSN 1985-3599).


Table: SAFA scores of the articles, in descending order


*The research note has not been included in the ranking

**Standardized Acceptance Factor Average (SAFA)

Copyright © 2008 Safa. Last modified: 12/19/09. Designed, developed and managed by Society for Alliance, Fidelity and Advancement
