Part about "Formalizing, Organizing and Evaluating Cooperation Related
Tasks, Instruments, Rules, Best Practices and Evaluation Measures"
that is not directly related to Knowledge Sharing (KS)
(this part extends the one about Knowledge Sharing)
_______________________________________________________________________________________________________
Table of Contents
0. Input (Thing Worked On): Preference, Commitment, ...
0.1. (Dis)Satisfaction, Utility Units/Functions/Profiles and Preferences
0.2. Right, Duty, Contract
0.3. KB of Agent
1. Attribute for Cooperativeness and Ethics/Fairness
1.1. Ethicality/Fairness
1.2. Attribute Used As Parameter For Calculating An Ethicality
2. Situation for Cooperation Or Fair/Ethical Acts
2.0. Introduction
2.0.1. Cooperative Situation/System/Agent/Tool
2.0.2. Positive/Ethical/Fair Process
2.1. Ethical/Fair Decision Making Process
2.1.1. Social welfare function using weights to avoid tactical-voting and majority-dictatorship
2.1.2. Decision making based on logical argumentation and non-restrictive preferences
2.2. Pre-process for Decision Making: Checks (Workflow, Specs)
2.2.1. Gathering_of_Like-or-dislike-or-preference_of_agents
2.2.2. Checking_of_the_addition-or-update_of_Like-or-dislike-or-preference_of_agents
2.2.3. For Information Analysis/Synthesis
2.2.4. For Tasks/Answering To Be Performed Timely, Fully And According to Specifications
2.3. Liberty/Equity/Optimality Incompatible/Compatible/Justified State Or Process
2.3.1. State-or-process_wrt_no_more-than-minimal_liberty-restriction
2.3.2. State-or-process_wrt_Equity
2.3.3. State-or-process_wrt_optimality_of_satisfaction_repartition
2.4. Specializations, e.g. Repartitions (Work Allocation, ...)
2.4.1. For Work Allocation
3. Axioms (Default Beliefs/Preferences, ...) for Cooperation Or Fair/Ethical Acts and Their Justifications
3.1.
3.2. Ethical Aggregation Rules
3.2.1. Ethical Aggregation Rules
3.2.2. For Fair Evaluations
3.2.3. General Informal Rules
0.1. (Dis)Satisfaction, Utility Units/Functions/Profiles and Preferences
/* Declared in
d_upperOntology.html: (ordinal/cardinal)Utility, Valuating_satisfaction,
Marginal_utility,
Like-or-dislike-or-preference_description, Preference_description, preferred_thing, preference_by */
Like-or-dislike-or-preference_description //already declared, with the following signatures:
.[0..* Attribute_type ?attrRT] //if used, this parameter should be specified first
.[0..* Agent ?a ~% ?d] ?d
\. e{
Utility_function //below
(
Convex-preferences_description
/^
Preference_description,
:= ^"individual's ordering of various outcomes, typically with regard to the
amounts of various goods consumed, with the property that, roughly speaking,
'averages are better than the extremes'; this concept roughly corresponds to
the concept of diminishing marginal utility without requiring utility
functions",
annotation: "unlike votes, preferences are independent of the
voting/aggregation method" )
}
(Like-or-dislike-or-preference_description_compatible_with
.[0..* Attribute_type ?attrRT] .[0..* Agent ?a ~% ?d] ?d
= ^(Like-or-dislike-or-preference_description !descr of: (a Situation attr_type: ! ?attrRT)),
\. Like-or-dislike-or-preference_description _(?attrRT)
)
Wish_description .
Utility_function
part:=> a
Cardinal_utility,
\. v_p{ Individual_utility_function
(Aggregate_utility_function
}
ne{ (
Additive_utility_function
:= ^"Utility function with the
Sigma_additivity",
/^ ^"Utility of a set of items such that it is the sum of the utilities
of each item separately (the whole is equal to the sum of its parts)",
\.
Utility_function_on_indivisible_goods
)
} ._(type _[.->.]: Type-set_wrt _(attribute)). //wrt relType
Attribute_of_a_utility_function attribute of:= a Utility_function,
\. ne{ (Attribute_of_an_ordinal_utility_function
\. (Monotonicity_for_an_utility_function /^
Monotonicity,
annotation: "means that an agent always (weakly) prefers to have extra items" )
Weak_additivity
)
(Attribute_of_a_cardinal_utility_function
attribute of:= a Cardinal_utility_function,
\. (
Sigma_additivity = Countable_additivity,
/^ ^w#"property abstracting the intuitive properties of
size (length, area, volume) of a set",
annotation: "Sigma_additivity = Submodularity + Supermodularity" ) )
}
ne{ (
Additive_utility
\. e{ Sigma_additivity Weak_additivity } )
}
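/* The notion of additive (sigma-additive) utility above can be illustrated with a small
   executable sketch. This is only an aside, not part of the ontology; the Python function
   and the item values below are made-up assumptions.

     # An additive utility function on indivisible goods: the utility of a
     # bundle is the sum of the utilities of its items ("the whole is equal
     # to the sum of its parts").
     item_utility = {"apple": 3.0, "book": 5.0, "pen": 1.0}   # hypothetical values

     def additive_utility(bundle):
         """Utility of a set of items = sum of the items' utilities."""
         return sum(item_utility[item] for item in bundle)

     assert additive_utility({"apple", "book"}) == \
            additive_utility({"apple"}) + additive_utility({"book"})
*/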
Reasonably-mandatory_attribute_of_cardinal_social_welfare_function_based_process .
0.2. Right, Duty, Contract
(Agreed_situation .[?agents, {1..* ?other_situation}, ?preferred_situation] ?as
= Situation_that_is_object_of_a_preference_by_some_agents, //always?
//no: /^ Preference_situation,
:= [a Preference_situation, agent: {2..* agent} ?agents,
input: {1..* Situation ?other_situation}
input: a Situation ?preferred_situation, output: ?preferred_situation ],
\. Commitment
(Situation_implicitly-or-explicitly_agreed_by_many_persons
= situation_that_is_a_right-or-duty_according-to_many_persons )
)
Dissatisfaction_of_an_agent_for_doing .[Agent ?ag, Process ?p] ?a /^ Attribute_expressing_a_dissatisfaction,
  := [ [?p agent: ?ag] => [ [?p attr: ?a] believer: ?ag] ].
one_of_the_most_relevant_agent_for_this_process _(Process ?p, Agent ?ag)
  := [ the Dissatisfaction_of_this_agent_for_doing _(?ag,?p) value: (a value =<
       (the Dissatisfaction_of_this_agent_for_doing _((each Agent != ?ag),?p)) ) ].
//doing ?p would cost less (or same) to ?ag than to any other
//note: below, this relation is always from a ?p being an Ethicality_compatible_state-or-process_wrt _(...)
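/* Illustrative aside (not part of the ontology): the relation
   one_of_the_most_relevant_agent_for_this_process can be computed as an argmin over the
   agents' dissatisfactions; 'dissatisfaction' below is a hypothetical function.

     def most_relevant_agents(process, agents, dissatisfaction):
         """Agents whose dissatisfaction for doing 'process' is minimal, i.e.
         doing it would cost them no more than it would cost any other agent."""
         minimum = min(dissatisfaction(a, process) for a in agents)
         return [a for a in agents if dissatisfaction(a, process) == minimum]
*/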
/* -----------
binding legitimate ethical fair
http://www.cnrseditions.fr/philosophie/7816-ethique-et-polemiques.html Jérôme RAVAT
Right_or_duty /^ Situation,
\. e{ (Duty
\. (Commitment_based_duty .[Agent ?ag, Process ?p, 0..1 Time ?t]
:= [?ag agent of: (a Commitment object: [?ag agent of: ?p, time: ?t])] )
(Duty_coming_from_being_one_of_the_most_relevant_agent_to_do_something .[Agent ?ag, Process ?p]
:= [?p one_of_the_most_relevant_agent_for_this_process: ?ag] )
)
Natural-or-legal_right
}
c{ (Legal_right-or-obligation := "law/custom of a particular culture or of the legal system of
a particular government, hence created by people" )
(Natural_right-or-obligation
= Universal_right-or-obligation Inalienable_right-or-obligation,
annotation: " 'Natural laws become cultural laws' or conversely ?",
\. (Human_natural_right-or-obligation
\. Right_of_the_1948-United-Nations-Universal-Declaration-of-Human-Rights )
Animal_natural-right-or-obligation )
}.
needs_based_right
contribution_based_right
meritocracy-or-elitocracy_based_right //'All are equal before the law'
// (arbitrary) inequity aversion
https://en.wikipedia.org/wiki/Equity_theory judgments of equity and inequity are derived from
comparisons between one's self and others based on inputs and outcomes.
Classical liberalism calls for equality before the law, not for equality of outcome.[18]
Classical liberalism opposes pursuing group rights at the expense of individual rights.[19]
"Property right" = "Just acquisition" (esp. by working on unowned things)
+? "Just transfer" since voluntary (non-coerced) transactions always have a property called
--------- */
0.3. KB of Agent
/* Declared in
d_upperOntology.html: KB_about_an_agent, KB_of_an_agent, kb_of_this_agent
Personal KB of an agent ?a: Agent-s_kb
Formal/informal potential proof (PP) of a proposition ?p by an agent ?a:
  set of formal/informal inferences which, from hypotheses that ?a
  commits to believe (and that hence are added to his Agent-s_kb),
  lead to proving ?p, at least according to ?a. If the inferences
  are formal, they must use inference rules in the Agent-s_kb of ?a;
  the proof is not valid (i.e., it does not count as a PP) if an
  inference engine detects a problem in the application of the
  inference rules.
No "argument", just "<=". Counter-"<=" are on premisses
or on false "<=" (e.g., "X has advantage A" => "X should be used";
correction: "X is better for A on any criteria than any other tool"
=> "X should be used")
PP-based-addition by an agent ?a to a knowledge base ?kb: addition of
statements to ?kb which, when they contradict other statements in
?kb, must be supported by a valid PP and, if so, replace them.
Used (w.r.t. ?Agent-s_kbs):
- Agent-s_kbs-majoritarily-invalidation
/approval
- Agent-s_kbs-majoritarily-invalidation-or-improvement
*/
Agent-s_kbs-maximal-logically-coherent-union of Agent-s_kbs ?Agent-s_kbs of agents ?as:
The "maximal logically coherent union" is a/the maximal part of the
union that is logically coherent. If there are many possible maximal
parts and if there are offendee(s) ?os in ?as, the one(s) maximizing
the number of rules from ?os should be used. If there are still
more than 1 possible maximal part, random selection may be used.
f-p_Agents_ownedKBs_oneOfTheirMaximalLogicallyCoherentUnion_oneThatMaximizesTheNumberOfStatementsFromOffendees
.(.{1..* Agent} ?as, .{0..* Agent} ?offendees) //OR:
= Maximal-logically-coherent-union-of-Agent-s-kbs_one-that-maximizes-the-number-of-statements-from-offendees
.[.{1..* Agent} ?as, .{0..* Agent} ?offendees],
//BOTH CALLABLE WITH _(), and the 2nd with ^._()
= oneOf _(f-p_KBs_onlyThoseThatMaximizeTheNumberOfStatementsFrom_(
            KBs_onlyThoseNotIncludedByOthers_(KBs_consistentSubKBs_(Agents_ownedKBs_union_(?as))),
            ?offendees )).
f-p_KBs_onlyThoseThatMaximizeTheNumberOfStatementsFrom
.( .{1..* KB ?kb} ?kbs, .{0..* Agent} ?offendees) -% KB .{1..* KB ?maxSubKB} ?maxSubKBs
= KBs_that_maximize_the_number_of_statements_from_offendees
  .[ .{1..* KB ?kb} ?kbs, .{0..* Agent} ?offendees] .{1..* KB ?maxSubKB} ?maxSubKBs,
:= [ [KB_numberOfStatementsFrom_(?kb,?offendees) >= KBs_maxNumberOfStatementFrom_(?kbs,?offendees)]
=> [?kb member of: ?maxSubKBs] ].
f-p_KB_numberOfStatementsFrom_(.AND{1..* Logically-atomic_statement ?s} ?kb,
.{0..* Agent} ?offendees) -% Number ?n
:= [?n = cardinality_(^{?s believer: (an Agent member of: ?offendees)})].
f-p_KBs_maxNumberOfStatementFrom_(.{1..* KB ?kb} ?kbs, .{0..* Agent} ?offendees)
-% Number ?max = max_(^{KB_numberOfStatementsFrom_(?kb,?offendees)}).
f-p_KBs_onlyThoseNotIncludedByOthers .( .{1..* KB} ?kbs) -% KB .{1..* KB ?maxSubKB} ?maxSubKBs
:= [?maxSubKB member of:?kbs, part: no ^(KB member of: ?kbs, != ?maxSubKB)].
Consistent_KB .AND{1..* Logically-atomic_statement ?s} ?kb := [ ![?s =>! ?kb] ]. //:= [kif#consis(?kb)]
consistent_subKB .(KB ?kb, Consistent_KB ?consisSubKB) := [?kb part: ?consisSubKB].
f-p_KBs_consistentSubKBs .( .{1..* KB} ?kbs) -% KB .{1..* KB ?consisSubKB} ?consisSubKBs
:= [?kbs consistent_subKB: ?consisSubKB].
f-p_Agents_ownedKBs_union .( .{1..* Agent ?a} ?as) -% KB ?agentsUnionKB
 := [ [?kbOfAgent owner: ?a] => [?kbOfAgent member of: ?agentsUnionKB] ].
f-p_KBs_union .( .{1..* KB ?kb} ?kbs) -% KB ?unionKB
:= [ [?st member of: ?kb] => [?st member of: ?unionKB] ].
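/* Illustrative aside (not part of the ontology): the pipeline formalized above can be
   sketched in Python, assuming a KB is a frozenset of statements, 'consistent' is a given
   consistency oracle, and 'believers' maps a statement to the set of agents stating it.
   This brute-force version is only workable for tiny KBs.

     from itertools import combinations
     import random

     def maximal_consistent_subkbs(kb, consistent):
         """Consistent sub-KBs of 'kb' not included in another consistent sub-KB."""
         subs = [frozenset(c) for n in range(len(kb), -1, -1)
                 for c in combinations(kb, n) if consistent(frozenset(c))]
         return [s for s in subs if not any(s < t for t in subs)]

     def nb_statements_from(kb, offendees, believers):
         return sum(1 for s in kb if believers(s) & offendees)

     def maximal_coherent_union(kbs, offendees, consistent, believers):
         union = frozenset().union(*kbs)                      # Agents_ownedKBs_union
         maxs = maximal_consistent_subkbs(union, consistent)  # consistentSubKBs + onlyThoseNotIncludedByOthers
         best = max(nb_statements_from(s, offendees, believers) for s in maxs)
         candidates = [s for s in maxs
                       if nb_statements_from(s, offendees, believers) == best]
         return random.choice(candidates)                     # random pick if still several
*/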
Agent-s_kbs-improvement of a proposition ?p1 by a proposition ?p2
w.r.t. Agent-s_kbs ?Agent-s_kbs (or agents ?as owning these KBs):
for each ?Agent-s_kb in ?Agent-s_kbs, according to this ?Agent-s_kb (hence
preferences and conclusions from it via the rules of the Agent-s_kb),
?p2 is better than ?p1 (each Agent-s_kb may define different rules for
calculating what "better" means). Note: there may often not be enough
information in an Agent-s_kb to conclude that ?p2 is better than ?p1 for
this Agent-s_kb; in that case, the agent with this Agent-s_kb may be asked
to complete it.
f-rt-improvedProposition__TypesOfAttributeCriteria__Agents
.(.{1..* Attribute_that_is_a_criteria ?typeOfAttributeCriteria}, .{1..* Agent ?agent})
-% Relation_type ?rt .(Proposition ?p1, Proposition ?p2)
//PB?: boolean fct AND rel.gener.fct
//seems ok with the '=' (hence a way to state how a boolean function can be called as a relation)
= [.AND{?p1 f-rt-improvedProposition__TypeOfAttributeCriteria__Agent_(?typeOfAttributeCriteria,?agent): ?p2}
]. //for each ?agent, for each ?typeOfAttributeCriteria
f-rt-improvedProposition__TypeOfAttributeCriteria__Agent
.(Attribute_that_is_a_criteria ?typeOfAttributeCriteria, Agent ?agent)
-% Relation_type ?rt .(Proposition ?p1, Proposition ?p2)
:= [ [ ?p1 => [a Thing ?x ?typeOfAttributeCriteria: 1 Measure ?v1] ]
[ ?p2 => [ ?x ?typeOfAttributeCriteria: 1 Measure ?v2] ]
[ ?v1 ?agent##"better-than"_[believer: ?agent]: ?v2]
].
f-rt_TypeOfAttributeCriteria_betterThanRelationTypeForThisCriteriaAndTheseAgents
.(Attribute_that_is_a_criteria ?typeOfAttributeCriteria, .{0..* Agent ?agent})
-% Relation_type ?rt .(?x, ?y)
:= [ [?x ?typeOfAttributeCriteria: (1 Measure "better-than"_[believer: ?agent]:
(the Measure ?typeOfAttributeCriteria of: ?y) )
] believer: ?agent ]. //implicit "each"
f-rt_TypeOfAttributeCriteria_betterThanRelationTypeForThisCriteriaAndThisAgent
.(Attribute_that_is_a_criteria ?typeOfAttributeCriteria, Agent ?agent)
-% Relation_type ?rt .(?x, ?y)
:= [ [?x ?typeOfAttributeCriteria: (1 Measure "better-than"_[believer: ?agent]:
(the Measure ?typeOfAttributeCriteria of: ?y) )
] believer: ?agent ].
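/* Illustrative aside (not part of the ontology): the 'improved proposition' relation above
   can be sketched as a universally quantified betterThan test; 'measure' and 'better_than'
   below are hypothetical functions standing for the criteria measures and each agent's
   betterThan relation.

     def improves(p1, p2, criteria, agents, measure, better_than):
         """True iff p2 is better than p1 on every criterion according to every agent."""
         return all(better_than(agent, criterion,
                                measure(p2, criterion), measure(p1, criterion))
                    for agent in agents for criterion in criteria)
*/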
1. Attribute for Cooperativeness and Ethicality/Fairness
Cooperation-related_attribute /^ Attribute_wrt_role,
\. (Cooperation_related_positive-attribute
/^ Quality_increasing_the_usability_of_the_qualified_thing,
\. (Cooperativeness .(Agent, Cooperativeness)
\. e{ Process_cooperativeness
State_cooperativeness
Agent_cooperativeness //never set directly; always computed based on the
}  // cooperativeness of the processes performed by the agent
) ).
Ethics-related_attribute /^ Attribute_wrt_role,
\. (Ethics_related_positive-attribute /^ Positive_attribute,
\. n{ Ethicality Attribute_used_as_a_parameter_for_calculating_an_ethicality
} //'n' since common subtypes for Liberty-respecting_..., Equity and Optimality...
p{ Normative_attribute Non-normative_attribute }
e{ (Ethics-related_situation-attribute attr of:=> a Situation)
(Ethics-related_Agent-attribute attr of:=> an Agent)
} ).
1.1. Ethicality/Fairness
Ethicality = Normative_ethicality Morality,
 \. (Fairness annotation: "Not all ethics measures are about fairness, but fairness is one
        particular kind of ethicality, even though
        a. fairness is often (wrongly, solely) associated with egalitarianism
           (this type of fairness is a subtype of an egalitarianism based ethicality), thus
        b. some actions/agents may be viewed as unethical (from a non-egalitarianist
           viewpoint) but fair.
        However, the expression 'fair trade business' is correctly seen as 'ethical business'.
        The fairness dilemmas of procedural/distributive justice are ethical dilemmas." )
(Ethicality_to_be_used //use example: [[?e type: Ethicality_to_be_used] believer: ?agent]
\. e{ (Ethicality_to_be_used_directly ?a
:= [ [^thing attr: ?a] => [ !_! ^thing] ] ) // "!_!" introduces a constraint/directive
(Ethicality_to_be_used_wrt_attribute
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_situation) ?aType] ?a
:= [ [^thing attr: ?aType] => [ !_! ^thing] ] )
} )
(Ethicality_considering-previous-ethical-or-unethical-actions-of-recipients
/^ Ethicality_to_be_used_directly __[believer: pm default] //default pref.
)
e{ (
Agent_ethicality .[Agent ?ag -% ?attr] ?attr /^ Ethics-related_Agent-attribute,
= Agent_ethicality_wrt_situation-and-attr-and-paramObject-and-evalFct-and-aggregFct
.[Situation ?sit, ?attr, ?parameterObject, ?evalFct, ?aggregationFct
] ?attr, //!= Agent_ethicality_wrt_attr //see below
attr of: an Agent ?ag,
\. eP{ //"eP": "Exclusive if Pure", i.e. mixable (→ not exclusive) but the names mainly
// refer to the non-mixed types
// (note: evaluating the ethicality of a particular agent seems easy if all the ethical values
// of this agent for these next exclusive-or-not types are similar, e.g. all "very ethical")
(
Virtue-ethics_based_agent_ethicality
annotation: "(Socrates/)Aristotle identified several virtues (good habits or ways to be), e.g.
generosity, prudence, justice, temperance and courage/fortitude. In 2016,
Shannon Vallor proposed this list of 'technomoral' virtues:
Honesty (Respecting Truth), Self-control (Becoming the Author of Our Desires),
Humility (Knowing What We Do Not Know), Justice (Upholding Rightness),
Courage (Intelligent Fear and Hope), Empathy (Compassionate Concern for Others),
Care (Loving Service to Others), Civility (Making Common Cause),
Flexibility (Skillful Adaptation to Change), Perspective (Holding on to the Moral Whole)
and Magnanimity (Moral Leadership and Nobility of Spirit)" ),
Role-ethics_based_agent_ethicality
}
(Ethicality_of_an_agent_based_on_the_ethicality_of_this_action
= ?aggregationFct _( .{(1..*
Action_ethicality_wrt_situation-and-paramObject-and-evalFct
_(?sit,?parameterObject,?evalFct) ) }) )
(Ethicality_of_an_agent_based_on_the_ethicality_of_all_actions_of_this_agent
= ?aggregationFct _( .{(1..*
Action_ethicality_wrt_situation-and-paramObject-and-evalFct
_(each ^(Situation agent: ?ag),?parameterObject, ?evalFct) ) } ) )
(Ethical_agent_for-this-action_wrt_act-and-attr .[?act,
1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?aType]
\. (
Ethical_agent_wrt_act-and-attr .[?act, {Equity_for_a_fair-or-ethical_situation,
Optimality_of_satisfaction_repartition }]
agent of:=> //this "necessarily (by definition) do ... to be ethical wrt ..." is here
// a good enough representation of "should do ... to be ethical wrt ...":
(?act /^ ^(Ethicality_compatible_state-or-process_wrt_attr _(?aType)
object of: (a Committing agent: ?ag) ) )
//_that_this_agent_has_committed_to_do
(?act /^ ^(Ethicality_compatible_state-or-process_wrt_attr _(?aType)
one_of_the_most_relevant_agent_for_this_process: ?ag))
//_that_would cost less to this agent than to others
//if an act is logical/ethical (→ you lose less than you or others would
// gain, as indicated by Ethicality_compatible_state-or-process_wrt_attr),
// you "must" do it (as indicated by "agent of:=>"),
// especially if it is your job (as indicated by "most_relevant" or "committed")
// (e.g. since "your job is not to make others waste time" and
// "something that brings more than it costs should be adopted").
// BUT, if it is not yet your job (→ voluntary act), the time frame to
// decide if ?act is the best use of your current/future resources is until
// your death or, more precisely, until you are able to decide?
// This "BUT" point is not formally represented above because
// - it does not need to be (since it is about an absence of obligation), and
// - representing it would be quite complex.
annotation: "See also Beneficence_as_ethical_attribute" ) )
)
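/* Illustrative aside (not part of the ontology): the two aggregation-based agent
   ethicalities above boil down to applying ?aggregationFct to action ethicalities; the
   mean used as a default below is an assumption, not a prescribed choice.

     def agent_ethicality(actions, action_ethicality, aggregate=None):
         """Aggregate the ethicality values of the given actions of an agent."""
         values = [action_ethicality(act) for act in actions]
         if aggregate is None:                        # assumed default: the mean
             aggregate = lambda vs: sum(vs) / len(vs)
         return aggregate(values)
*/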
(
State_ethicality .[State ?sit -% ?attr] ?attr
\. e{ (State_ethicality_based_on_the_process_that_created_it
.[1..* ?attr, 0..1 ?parameterObject, 0..1 ?evalFct]
\.
Action_ethicality_wrt_act-and-attr _((?act start_situation: ?sit),?attr)
Action_ethicality_wrt_act-and-attr-and-paramObject-and-evalFct
_((?act start_situation: ?sit), //or "consequence of:"
?attr,?parameterObject,?evalFct)
)
(State_ethicality_not_based_on_the_process_that_created_it
= (State_ethicality_wrt_notStartSituation-and-attr-and-evalFct .[?sit, ?attr, ?evalFct
] ?attr = ?evalFct _(?sit)),
result of:=> (an Evaluating parameter: 1..* //see next sub-section for the next type
State_attribute_used_as_a_parameter_for_calculating_a_state_ethicality ) )
} )
(
Action_ethicality .[Process ?act -% ?attr] ?attr /^ Ethics-related_situation-attribute,
attribute of: a Process ?act,
\. (
Action_ethicality_wrt_act-and-attr .[?act, 1..* ?attr]
:= [?act type: Ethicality-or-fairness_related_state-or-process_wrt_attr _(?attr)]
) //!= Action_ethicality_wrt_attr //see below
(
Action_ethicality_wrt_act-and-attr-and-paramObject-and-evalFct
.[?act, 1..* ?attr, 0..1 ?parameterObject, 0..1 ?evalFct]
= ?evalFct _(?parameterObject)
)//!= Action_ethicality_wrt_act-and-attr-and-evalFct //see below
Action_ethicality_based_on_an_attribute_used_as_a_parameter_for_calculating_an_ethicality //= Action_ethicality_wrt_attr //
see below
v_p{ (Pure-approach_action_ethicality
/^ ^"Action_ethicality solely based on an approach not initially meant to be composed with
other approaches" )
(Mixed_approach_action_ethicality
\. n{ (Mixed-approach-action-ethicality_exploiting_probabilities_for_elements_of_each_approach
/^ ^"Action_ethicality mixing approaches based on probabilities, e.g. considering that
i. Action_deontological-ethicality and Strong_rule-utilitarianism_ethicality
are based on rules which, when followed, seem to have a rather good
probability of leading to an act that increases/maximizes the satisfaction of
the recipients of this act, and
ii. directly predicting the consequences of a particular act (as in
Act utilitarianism) is often difficult and hence leads (or would lead) to
predicted utilities with low probabilities associated to them,
it seems 'better' to use such rules - instead of predicted utilities -
at least when their utility probabilities are higher than the probabilities of
utilities based on the predicted (or would-be predicted) consequences of the act;
some rules may be seen and used as strong constraints, while others may
be seen as heuristics;
as hinted by the above use of 'better', this mixed
approach may be implemented via an ethicality of type
Mixed-approach-action-ethicality_exploiting_betterThan_relations" )
(
Mixed-approach-action-ethicality_exploiting_betterThan_relations
/^ ^"Mixed_approach_action_ethicality where the mix is based on betterThan relations
that exist between elements (e.g. rules) of different approaches;
each of these betterThan relations should indicate the criteria that apply;
the aggregation functions used for selecting elements and computing ethicalities
exploit criteria and preferences selected by the recipients of the action" )
} )
}
._(type _[.&->.]: Main_partition Type-set_wrt _(part)
) //this partition (not its items: "_[.&->.]") is first for display
eP{ //"eP": "Exclusive if Pure", i.e. mixable (→ not exclusive) but the names mainly refer to
// the non-mixed types
// (note: evaluating the ethicality of a particular action seems easy if all the ethical values
// of this action for these next exclusive-or-not types are similar, e.g. all "very ethical")
(
Action_deontological-ethicality //approximation+safeguard to Action_teleological-ethicality
/^ "Ethicality close to Strong_rule-utilitarianism_ethicality since it too is only based on
rules which, when followed, seem to have a rather good probability of leading to an act
that increases/maximizes the satisfaction of the recipients of this act;
this probability can be exploited for mixing this approach with other ones: see
Mixed-approach-action-ethicality_exploiting_probabilities_for_elements_of_each_approach",
parameter_object:=> ?act,
= (Action_deontological-ethicality_wrt_act-and-attr-and-evalFct .[?act,?attr,?evalFct]
= Action_ethicality_wrt_act-and-attr-and-paramObject-and-evalFct _(?act,?attr,?act,?evalFct)
),
\. eP{ (Kantian-like_deontological-ethicality
\. Discourse-ethics_based_deontological-ethicality )
Divine-command_based_deontological-ethicality
(
Ethicality_advocated_by_the_Belmont-report-for-biomedical-safety-and-patient-rights
attr_type:=> Autonomy_as_ethical_attribute
Beneficence_as_ethical_attribute //hence Non-maleficence_as_ethical_attribute
Justice_as_ethical_attribute ) //see below
} )
(
Action_teleological-ethicality
parameter_object:=> (the Situation ?endSituation end_situation of: ?act),
= (Action_ethicality_wrt_act-and-attr-and-evalFct .[?act, ?attr, ?evalFct]
= Action_ethicality_wrt_act-and-attr-and-paramObject-and-evalFct
    _(?act,?attr,?endSituation,?evalFct) ),
\. (
Action_consequentialism-ethicality
\. eP{ (
Utilitarianism_ethicality
\. eP{ (
Act-utilitarianism_ethicality
/^ ^"Ethicality maximizing the
Expected utility (satisfaction/good) that
the evaluated action would accomplish according to the person who
evaluate the would-be consequences of the action, given what the
evaluator knows/believes, its reasoning capabilities, ...
(the
risk aversion of the evaluator or of the people that would be
affected by the action may also be taken into account;
a
social welfare function is often used for evaluating
utility profiles, i.e. possible allocations, based on
people's preferences or
utility functions;
generalizations of social welfare functions may also be used)",
annotation:
  "the felicific|hedonistic|hedonic|utility calculus is a way to
   measure (in hedons and dolors) the utility of an action; it is based
   on how intense, long, certain, speedy, fruitful and pure the
   (dis)satisfactions caused by the action are
   (a small numeric sketch is given in a comment below)"
"
Problems:
i. depends on the evaluator's knowledge and capabilities,
ii. depends on
what/whom is more taken into account
+ ethic of ultimate ends or ethic of responsibility,
iii. ignores justice, motives, obligations, duties, ...,
iv. hard to implement (
aggregation problem, unpredictability of
consequences, ...),
v. prediction duration,
demandingness objection, ...",
"Incorrect descriptions: act utilitarianism is often portrayed as
i. based on evaluations of the actual consequences of an act
(→ a posteriori) instead of predicted consequences (→ a priori),
ii. only taking into account the sum of the utilities of the
people affected by the act instead of using an ethical
aggregation rule that for example values fairness",
\. e{ Pure_act-utilitarianism_ethicality
Two-level_utilitarianism_ethicality }
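/* Illustrative aside (not part of the ontology): the sketch announced in the annotation
   above. A crude felicific calculus scores each expected (dis)satisfaction along
   Bentham's dimensions; treating the dimensions as multiplicative factors is a made-up
   assumption, not a standard formula.

     def felicific_score(effects):
         """effects: dicts with the six dimensions in [0,1] plus a sign
         (+1 for a satisfaction/hedon, -1 for a dissatisfaction/dolor)."""
         return sum(e["sign"] * e["intensity"] * e["duration"] * e["certainty"]
                    * e["propinquity"] * e["fecundity"] * e["purity"]
                    for e in effects)
*/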
(
Moral_act-utilitarianism_ethicality //pm: @@@@
/^ ^"Act-utilitarianism_ethicality that only takes into account
i. satisfactions that are are not necessarily dependent on
the disatisfaction of another, and
ii. disatisfactions that are are not necessaralily dependent on
the satisfaction of another"
\. (
Strong_moral_act-utilitarianism_ethicality
/^ ^"Moral_act-utilitarianism_ethicality where, in the definition,
the 'necessarily dependent' is replaced by 'dependent' " ) )
(
Rule-utilitarianism_ethicality
/^ "Ethicality which may or may not solely be based on rules to evaluate
the utility profiles of an action;
for a way to reconcile or mix this approach with other ones, see
Mixed-approach-action-ethicality_exploiting_probabilities_for_elements_of_each_approach",
\. e{ (Strong_rule-utilitarianism_ethicality
/^ "Ethicality close to Action_deontological-ethicality since
it too is solely based on rules which, when followed, have
a rather good probability of leading to an act that
increases/maximizes the satisfaction of the recipients of
this act" )
(Weak_rule_utilitarianism_ethicality
\. Two-level_utilitarianism_ethicality)
} )
(
Motive-utilitarianism_ethicality
/^ ^"close to Virtue-ethics_based_agent_ethicality but
it is an Action_ethicality, not an Agent_ethicality" )
} )
(State-consequentialism_ethicality
/^ "values social order, material wealth and population growth" )
Anarchist-ethics_ethicality
}
eP{ (
Hedonistic-utilitarianism_ethicality /^ ^"implies a
Felicific calculus"
^"based on predicted/actual situations" )
(
Preference-utilitarianism_ethicality
:=% "values personal interests, rather than pleasure" //pragmatic approximation
/^ (^"Fair from a
negative average preference utilitarianism viewpoint process"'
/^ ^"maybe the only method that technically avoids the
mere addition paradox
and the
Nonidentity problem of
population ethics;
see also
this Theory of Justice" ) )
}
e{
Negative-consequentialism_ethicality Positive-consequentialism_ethicality }
e{
Prioritarianism-consequentialism_ethicality
Egalitarianism-consequentialism_ethicality }
) //end of Action_consequentialism-ethicality
) //end of Action_teleological-ethicality
} //end of the exclusive Action_ethicality types; the next two may be a mix
Pragmatic-ethics_ethicality
Ethics-of-care_ethicality
) //end of Action_ethicality
} //end of the distinction "Agent_ethicality / State_ethicality / Action_ethicality"
._(type _[.*->.]: Main_partition Type-set_wrt _(object_natural-type)
) //this partition (not its items: "_[.*->.]") is first for exploitation
(Ethicality_wrt_moral-realism |^ Type_wrt _(comparison_relation_to _(Moral-realism)),
\. e{ (Moral-realism_ethicality
/^ ^"ethicality with a meta-ethical view composed of the next 3 thesis
(reminder: in epistemology, 'knowledge' is 'justified'(iii) 'true'(ii) 'belief'(i)):
i. semantic thesis (cognitivism; belief): moral statements (i.e. judgments) have meaning
(→ can be beliefs), i.e., ethical sentences express propositions, hence
ethical sentences can be true or false,
ii. alethic thesis (truth status): some such propositions are true, and
iii. metaphysical|epistemological thesis (objectivity; justifiability): the truth or falsity
of such propositions (i.e., the metaphysical status of moral facts) refers to
objective features of the world (hence, features independent of subjective opinion),
some of which may be true to the extent that they report those features accurately
(thus, the metaphysical status of moral facts is robust and ordinary, not
importantly different from other facts about the world);
moral-realism is not necessarily exclusive with moral relativism and hence is not
necessarily based on universal values" )
(Moral-antirealism_ethicality
/^ ^"ethicality with a meta-ethical view that denies one of the 3 thesis of Moral-realism,
and hence that asserts that there are no objective moral values or normative facts;
moral anti-realism is silent on the 'iii. epistemological clause': whether we are justified
in making moral judgments, e.g. an antirealist might be both
a moral realist in asserting that moral judgments are sometimes objectively true
and a moral skeptic in asserting that moral judgments always lack justification",
\. e{ (Moral-antirealism-and-skepticism_ethicality
\. e{ (Noncognitivism_ethicality
/^ ^"ethicality with a meta-ethical view that denies (→ we can never know)
the 'i. semantic thesis (cognitivism; belief)' of moral realism,
hence that denies that moral judgments can be beliefs (→ no moral
'justified true belief' → no moral 'knowledge' → moral skepticism);
in this view, moral claims are neither true nor false (they are not
truth-apt), they are imperatives (e.g. 'Don't steal babies!'),
expressions of emotion (e.g. 'stealing babies: Boo!'), or
expressions of 'pro-attitudes' (e.g. 'I do not believe that babies
should be stolen')" )
(Moral-error-theory_ethicality
/^ ^"ethicality with the meta-ethical view that
II. the 'ii. alethic thesis (truth status)' of moral realism is false
    (nihilism): (for some, 'all moral claims are false', for others
    'neither true nor false'),
III. we have reason to believe that all moral claims are false, and
IIIbis. since we are not justified in believing any claim we have reason to
deny, we are not justified in believing any moral claims
(note: 1. all moral nihilists - alias, amoralists - agree that no moral
claim is true but some say that all moral claims are false, others say
that they are neither true nor false;
error theorists typically claim that it is only distinctively moral
claims which are false;
practical nihilists claim that there are no reasons for action of any
kind; some nihilists extend this claim to include reasons for belief;
2. moral nihilism is distinct from moral relativism, which allows for
actions to be wrong relative to a particular culture or individual;
3. moral nihilism is distinct from expressivism, according to which when
we make moral claims, 'we are not making an effort to describe the way
the world is ...' but 'we are venting our emotions, commanding others
to act in certain ways, or revealing a plan of action')" )
} )
(Ethical-subjectivism_ethicality
= Non-objectivism_ethicality Moral-antirealism-but-not-skepticism_ethicality,
/^ ^"ethicality with a meta-ethical view that denies the
'iii. metaphysical thesis (objectivity)' of moral realism: in this view,
the truth or falsity of propositions that express ethical sentences is ineliminably
dependent on the (actual or hypothetical) attitudes of people
(note: this is not moral relativism which claims that statements are true or false
based on who is saying them)" )
} )
(Moral-skepticism_ethicality
/^ ^"ethicality based on the view that no one has any moral knowledge and, for many
moral skeptics, that moral knowledge is impossible; more precisely:
II. we never know that any moral claim is true,
III. we are never justified in believing that moral claims are true (claims of the form
'state of affairs x is (morally) good', 'action y is morally obligatory', ...)",
\. e{ Moral-antirealism-and-skepticism_ethicality
(Epistemological-skepticism_ethicality
/^ ^"ethicality based on II. agnosticism on whether 'ii. all moral claims are false' and
III. we are unjustified in believing any moral claim",
^"ethicality for the cases where the moral non-objectivist accepts the existence of
moral knowledge, i.e., the non-objectivity of some fact does not pose a particular
problem regarding the possibility of one's knowing it, e.g., one might know that a
certain diamond is worth $1000",
\. e{ (Pyrrhonian-moral-skepticism_ethicality
/^ ^"ethicality with the view that that the reason we are unjustified in
believing any moral claim is that it is irrational for us to believe
either that any moral claim is true or that any moral claim is false; thus,
II. agnosticism on whether 'i. all moral claims are false', and
III. 'ii. we have reason to believe that all moral claims are false'
is false" )
(Dogmatic-moral-skepticism_ethicality
/^ ^"ethicality with the view that that II. all moral claims are false, and
III. 'we have reason to believe that all moral claims are false' is the
reason we are unjustified in believing any moral claim" )
} )
} )
}
(Moral_relativism_ethicality
/^ ^"Normative moral relativism based ethicality, hence one holding that because nobody is either
right or wrong, everyone ought to tolerate the behavior of others even when large disagreements
about morality exist; moral-relativism is not necessarily exclusive with moral realism"
)
).
Action_ethicality_based_on_an_attribute_used_as_a_parameter_for_calculating_an_ethicality
= Action_ethicality_wrt_attr .[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?aType]
// /^ Attribute_of-and-used-for_a_fair-or-ethical_process,
\. (Ethical_for_an_action_wrt_attr .[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?aT]
:= [each ?aT value: 1] )
(Action_ethicality_considering-previous-ethical-or-unethical-actions-of-recipients_wrt_attr
attr_type:=> Ethicality_considering-previous-ethical-or-unethical-actions-of-recipients
)
(
Action_ethicality_wrt_liberty
attr_type:=> Liberty_related_attribute,
annotation: "In ethics, '
liberty' is a more relevant term than '
freedom'",
\. p{ (
Action_ethicality_wrt_no_more-than-minimal_liberty-restriction
=
Action_ethicality_with_no_more-than-minimal_restriction_of_physical-or-not_choices,
:=% [ [^a attr: ?attr] => [^a type: Action_with_no_more-than-minimal_restriction] ] ),
\. (
Action_ethicality_with_no_hard-or-moral_paternalism ?attr
/^ ^"Action_ethicality not based on
hard or moral paternalistic ethics, i.e. one that
can support an action which
i. limits the liberty or autonomy of some of the action recipients
against their own informed non-paternalistic wishes (hence after these recipients are
warned about potential problems associated to these wishes and provided that these
wishes are not detrimental to other persons), and
ii. yet is
intended to promote their own good (e.g. 'Protecting their feelings');
i.e., where the action decider may deliberately go against the informed non-paternalistic
wishes of the recipients; the expression
'more-than-minimal restriction of choices
(physical ones or not)' is here a way to include all these points and add more
since the restriction may be physical or not; this expression is indirectly defined via the
definition of Action_with_no_more-than-minimal_restriction (see it for more details)" )
\. (
Action_ethicality_with_no_more-than-minimal_restriction-or-hurt-or-envy
/^ Action_ethicality_wrt_equity
   Action_ethicality_wrt_optimality_of_satisfaction_repartition
^"Action_ethicality not based on an ethics that can support an action which
causes 'unjustified-or-more-than-minimal_restriction-or-hurt-or-envy'; this
expression is indirectly defined via the definition of
Action_with_no_unjustified-or-more-than-minimal_restriction-or-hurt-or-envy",
/^ Ethicality_to_be_used_directly __[believer: pm default] //default pref.
)
(Action_ethicality_in_an_ethics_that_is_morally-minimal_according-to_Ruwen_Ogien
/^ ^"Ethicality based on an ethics that is 'morally minimal' in the sense given to this
expression by Ruwen Ogien (see this FAQ too), hence on three principles:
i. 'equal consideration' (→ same value to everyone's voice),
ii. 'neutrality towards conceptions of right and personal property'
iii. 'limited intervention in cases of egregious wrongs done'; these principles imply that
- 'we have no moral duty towards ourselves, only towards others',
- 'moral duties towards others can be either positive (help doing good) or negative (do no harm)'
- 'to avoid paternalism, it is better to stick to one principle: not to harm others' " )
(
Action_ethicality_based_on_a_non-morally-minimal_ethics
\. ^(Action_deontological-ethicality type: Pure-approach_action_ethicality
) //"pure" Action_deontological-ethicality
^(Strong_rule-utilitarianism_ethicality type: Pure-approach_action_ethicality) )
} )
(
Action_ethicality_wrt_equity
attr_type:=> Equity_for_a_fair-or-ethical_process )
(
Action_ethicality_wrt_optimality_of_satisfaction_repartition
attr_type:=> Optimality_of_satisfaction_repartition ) .
1.2. Attribute Used As Parameter For Calculating An Ethicality
Attribute_used_as_a_parameter_for_calculating_an_ethicality
= Ethicality-computation_parameter-attribute, /^ Ethics-related_situation-attribute,
parameter of:=> (an Evaluating result: an Ethicality),
\. e{ (Attribute_of-and-used-for_a_fair-or-ethical_situation
\. v_p{ (
Attribute_of-and-used-for_a_fair-or-ethical_process
attribute of:=> a Process )
Attribute_of-and-used-for_a_fair-or-ethical_state
} )
Attribute_of-and-used-for_a_fair-or-ethical_agent
(Attribute_of-and-used-for_a_fair-or-ethical_like-or-dislike-or-preference_description ?a
attr of:=> (a Like-or-dislike-or-preference_description descr of: (a Situation attr: ?a)) )
}
n{ (
Liberty_related_attribute
\. p{ (
Positive-liberty_related_attribute
/^ ^"Attribute about the possession of the intellectual/emotional/material resources to act" )
(
Negative-liberty_related_attribute
/^ ^"Attribute related to the protection from the arbitrary exercise of authority",
\.
No_more-than-minimal_liberty-restriction_attribute )
} )
(
Completeness-or-consistency_attribute
\. (
No_more-than-minimal_liberty-restriction_attribute
annotation: "No_more-than-minimal_liberty-restriction is implied by the union of some senses of
Equity and Optimality_of_satisfaction_repartition",
\. (Ethical_wrt_no_more-than-minimal_liberty-restriction ?e value:=> 1)
(Autonomy_as_ethical_attribute //liberty + optimality?
:=> "each individual should be apprised before any treatment or experiment"
"it is necessary to protect any person whose autonomy is diminished"
"it is necessary to respect the views of people who are capable of making
decisions and taking actions based on their personal views and values" )
) _[ .<= //this kind of Liberty (attribute) is a kind of consistency (attribute) and
         // a kind of scalability (hence completeness) related ethical attribute because:
   "wanting (at least minimal) liberty for oneself but not for others is ethically inconsistent"
   "(at least minimal) liberty is statistically needed for ethical optimality/scalability"
]
Equity_related_attribute //detailed below, like OSR
_[ .<= "Equity can be seen as a subtype of consistency in its most general sense" ]
(
Optimality_or_efficiency_or_slight-generalization-of-it
/*below: \. (Optimality_or_efficiency_or_slight-generalization-of-it_for_a_fair-or-ethical_process
\. Optimality_of_satisfaction_repartition ) */
) _[ .<= "Optimality can be seen as a subtype of completeness in its most general sense" ]
}.
Attribute_of-and-used-for_a_fair-or-ethical_process ?a
attribute of: 0..* Fair-or-ethical_process_wrt_attr _(?a), //defined in Section 2.1
/^ ^"Attribute of the process or of the comparison objects it uses, e.g., in the case of a
Social_welfare_function_based_process, the attribute is first for the utility profile used
by the function, or of the preference relationship between these utility profiles, and thereby
indirectly also of the function (and the procsse using that function) which seeks to fulfill
such a (normative) attribute; indeed, there is no point in duplicating attributes according to the
objects that bear them; furthermore, if the need arises, the subtypes of this type can be automatically
duplicated into more restricted versions that each make explicit the type of the attribute bearer",
parameter of: a Repartition_process,
\. e{ Attribute_of-and-used-by_an_ordinal-only_based_fair-or-ethical_process
      (Attribute_of-and-used-by_a_cardinal-only_based_fair-or-ethical_process
\. (Completeness-or-consistency_attribute_for_a_cardinal-only_based_fair-or-ethical_process
\. (Independence_from_irrelevant_things_for_a_cardinal-only_based_fair-or-ethical_process
\. (Invariance_to_equivalent_utility_representations
\. (
Independence-of-common-scale_for_a_social_welfare_function_based_process
:= "the relation between two utility profiles does not change if both of them are
multiplied by the same scalar (e.g., the relation does not depend on whether
we measure the income in cents, dollars or thousands)" ) )
(
Symmetry_for_a_cardinal_social_welfare_function_based_process
:= "the relationship should be indifferent to permutation of numbers in the utility
profile, e.g., it should be indifferent between (1,4,4,5) and (5,4,1,4)" )
) ) )
}
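/* Illustrative aside (not part of the ontology): the last two attributes above
   (Independence-of-common-scale and Symmetry) can be tested empirically on a cardinal
   social-welfare ordering represented as a Python predicate prefers(u, v) over utility
   profiles (tuples of numbers); the sample scalars below are arbitrary.

     def respects_common_scale(prefers, u, v, scalars=(0.01, 2, 100)):
         """Independence-of-common-scale: multiplying both profiles by the
         same positive scalar must not change the relation."""
         return all(prefers(tuple(k * x for x in u), tuple(k * y for y in v))
                    == prefers(u, v) for k in scalars)

     def respects_symmetry(prefers, u, v):
         """Symmetry: permuting the numbers inside each profile (here, by
         sorting them) must not change the relation."""
         return prefers(tuple(sorted(u)), tuple(sorted(v))) == prefers(u, v)
*/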
n{ (
Attribute_for_bargaining-or-negotiating_for_fairness-or-ethicality_purposes
\.
Nash_attribute_for_fair_bargaining )
Attribute_for_formal_non-bargaining-or-negotiating-based_process_aimed_to_be_fair-or-ethical
}
n{ (
Liberty_related_attribute_for_a_fair-or-ethical_process
/^ Liberty_related_attribute,
\. (
No_more-than-minimal_liberty-restriction_attribute_for_a_fair-or-ethical_process
/^ No_more-than-minimal_liberty-restriction_attribute,
/^ Ethicality_to_be_used_wrt_attribute __[believer: pm default] )
)
(
Completeness-or-consistency_attribute_for_a_fair-or-ethical_process
/^ Completeness-or-consistency_attribute,
\.e{ (Completeness-or-consistency_attribute_for_an_ordinal-only_based_fair-or-ethical_process
\. Independence_from_irrelevant_things_for_an_ordinal-only_based_fair-or-ethical_process )
(Completeness-or-consistency_attribute_for_a_cardinal-only_based_fair-or-ethical_process
\. Independence_from_irrelevant_things_for_a_cardinal-only_based_fair-or-ethical_process )
}
No_more-than-minimal_liberty-restriction_attribute_for_a_fair-or-ethical_process
(
Equity_related_attribute_for_a_fair-or-ethical_process
/^ Equity_related_attribute )
(
Optimality_or_efficiency_or_slight-generalization-of-it_for_a_fair-or-ethical_process
/^ Optimality_or_efficiency_or_slight-generalization-of-it
)
(Independence_from_irrelevant_things_for_a_fair-or-ethical_process
\. (
Independence-of-unconcerned-agents_for_a_social_welfare_function_based_process
= Separability-of-unconcerned-agents_for_a_social_welfare_function_based_process,
:= "the relationship R should be independent of individuals whose utilities have
not changed, e.g., if in R (2,2,4) < (1,3,4), then in R (2,2,9) < (1,3,9)"
)
Independence-of-irrelevant-alternatives_for_a_social_welfare_function_based_process
) //this last attribute is also in Economic-efficiency_or_slight-generalization-of-it_...
(
Unrestricted-domain_for_a_social_welfare_function_based_process
= Universality_for_a_social_welfare_function_based_process,
:= "for any set of individual voter preferences, the social welfare function should
yield a unique and complete ranking of societal choices; thus,
i. it must do so in a manner that results in a complete ranking of preferences for
society, and
ii. it must deterministically provide the same ranking each time
voters' preferences are presented the same way" )
(
Continuity_for_a_social_welfare_function_based_process
:= "for every profile v, the set of profiles weakly better (i.e., >=) than v,
and the set of profiles weakly worse than v, are both closed sets" )
)
}
(
Attribute_involved_in_the_theorems_of_Gibbard-1973_or_Gibbard-Satterthwaite-1973_or_Arrow-1951
:=> "Gibbard's theorem states that for any deterministic process of collective decision,
at least one of the following three properties must hold:
i. The process is
dictatorial, i.e. there is a distinguished agent who can impose the outcome;
ii. The process limits the possible outcomes to two options only;
iii. The process is open to
strategic voting since, once an agent has identified their preferences,
it is possible that they have no action at their disposal that better defends these
preferences irrespective of the other agents' actions.
Gibbard-Satterthwaite theorem is the same theorem but restricted to
deterministic
ordinal systems (ranked voting) that choose a single winner.
Duggan-Schwartz's theorem also shows that for deterministic voting rules that choose a
nonempty subset of the candidates (rather than a single winner)
at least one of the following must hold:
i. The system is not anonymous (some voters are treated differently from others).
ii. The system is imposed (some candidates can never win).
iii. Every voter's top preference is in the set of winners.
iv. The system can be manipulated by either
- an optimistic voter, i.e. one who can cast a ballot that would elect some candidate
to a higher rank than all of those candidates who would have been elected if that
voter had voted honestly, or
- by a pessimistic voter, one who can cast a ballot that would exclude some candidate
to a lower rank than all of those candidates who were elected due to that voter
voting strategically.
Hylland's theorem shows that
cardinal systems (range voting = score voting) either allow
strategic voting or are equivalent to randomized dictatorship (the result is non-deterministic:
the outcome may not only depend on the ballots but may also involve a part of chance).
Arrow's impossibility theorem states that no
ordinal system (ranked voting) where voters have
three or more alternatives (options) can always satisfy these three 'fairness' criteria:
i. If every voter prefers alternative X over alternative Y, then the group prefers X over Y.
ii. If every voter's preference between X and Y remains unchanged, then the group's preference
between X and Y will also remain unchanged (even if voters' preferences between other pairs
like X and Z, Y and Z, or Z and W change).
iii. There is no 'dictator': no single voter possesses the power to always determine the
group's preference.
In other words, these systems cannot convert the ranked preferences of individuals into a
community-wide (complete and transitive) ranking while also meeting these criteria:
Independence of irrelevant alternatives (IIA), Unrestricted domain, Non-dictatorship and
Pareto efficiency (or Monotonicity with Non-imposition but this is less general since
Pareto efficiency (hence non-imposition) together with IIA does not imply Monotonicity
whereas monotonicity, non-imposition and IIA together imply Pareto efficiency).",
annotation: "Independence-of-irrelevant-alternatives (IIA) + Monotonicity + Non-imposition
implies Pareto efficiency but IIA+Pareto-efficiency does not imply Monotonicity",
\. (
Attribute_involved_in_the_theorem_of_Arrow-1951
\.
Monotonicity_for_a_social_welfare_function_based_process //or Pareto_efficiency instead
Non-imposition_for_a_social_welfare_function_based_process // of these last 2 attributes
Non-dictatorship_for_a_social_welfare_function_based_process
Independence-of-irrelevant-alternatives_for_a_social_welfare_function_based_process
Unrestricted-domain_for_a_social_welfare_function_based_process )
)
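/* Illustrative aside (not part of the ontology): a concrete instance of the IIA failure
   mentioned above, using the Borda count; the two profiles are made-up examples. Every
   voter's preference between A and B is identical in both profiles (only C moves), yet
   the group ranking of A versus B flips.

     def borda(profile):
         scores = {}
         for ranking in profile:                      # best candidate gets most points
             for points, candidate in enumerate(reversed(ranking)):
                 scores[candidate] = scores.get(candidate, 0) + points
         return scores

     p1 = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
     p2 = [("A", "C", "B")] * 3 + [("B", "C", "A")] * 2   # same A-vs-B votes
     s1, s2 = borda(p1), borda(p2)
     assert s1["B"] > s1["A"] and s2["A"] > s2["B"]       # group preference flips
*/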
(
Nash_attribute_for_fair_bargaining
annotation: "Bargaining_from_equal_split is Nash's bargaining method respecting each
Nash_attribute_for_fair_bargaining; the fairness of this method seems to come from the
immunity against undue exploitation by the opponent as guaranteed by perfect competition"
//note: Bargaining_from_equal_split is cited below as subtype of
// Process_following_a_formal_mechanism_for_fairness_purposes
\. Pareto-efficiency
Independence-of-irrelevant-alternatives_for_a_social_welfare_function_based_process
Symmetry_for_a_cardinal_social_welfare_function_based_process
Invariance_to_equivalent_utility_representations
).
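/* Illustrative aside (not part of the ontology): the Nash bargaining solution is the
   unique solution satisfying the four attributes listed above; on a finite set of
   feasible utility pairs it maximizes the product of the gains over the disagreement
   point. The values it handles below are made-up examples.

     def nash_solution(feasible, d):
         """feasible: iterable of (u1, u2) utility pairs; d: disagreement point."""
         admissible = [u for u in feasible if u[0] >= d[0] and u[1] >= d[1]]
         return max(admissible, key=lambda u: (u[0] - d[0]) * (u[1] - d[1]))
*/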
Optimality_or_efficiency_or_slight-generalization-of-it_for_a_fair-or-ethical_process
/^ Ethicality_to_be_used_wrt_attribute __[believer: pm default],
annotation: "Using
Normative decision theory is a way to identify optimal decisions, where optimality is
often determined by considering an ideal decision-maker who is able to calculate with
perfect accuracy and is in some sense fully rational",
\. (Economic_efficiency_for_a_fair-or-ethical_process
\~= { (
Distributive_efficiency_in_welfare_economics
:= "In
welfare economics, distributive efficiency occurs when goods and
services are received by those who have the greatest need for them,
based on the law of
diminishing marginal utility" )
(
Productive_efficiency_in_welfare_economics
:= "no additional output can be obtained without increasing the
amount of inputs, and production proceeds at the lowest
possible average total cost" )
(
Allocative-efficiency_or_Pareto-efficiency_or_close_variant
=
Optimality_of_satisfaction_repartition OSR,
annotation: "Allocative-efficiency is not a subtype of Pareto efficiency (nor a
supertype, nor the same notion)",
"OSR is implied by the union of some senses of Equity and Liberty",
\. (
Moral_optimality_of_satisfaction_repartition = Moral_OSR,
/^ ^"Optimality_of_satisfaction_repartition based on Moral_act-utilitarianism_ethicality"
)
/* @@@ every occurrence of Optimality_of_satisfaction_repartition/OSR should probably be
replaced/specialized by Moral_optimality_of_satisfaction_repartition/Moral_OSR */
{ (
Weak_Pareto-efficiency
:= "an allocation (of positive/negative stuff) is weakly Pareto efficient
iff there is no possible alternative allocation that would cause
every recipient to gain something",
\. (
Pareto-efficiency = Strong_Pareto-efficiency Pareto_optimality,
=
Pareto_unanimity,
:= "an allocation (of positive/negative stuff) is Pareto efficient iff
it is impossible to make a 'Pareto improvement' about it, i.e.
impossible to reallocate so as to make a preference criterion
better without making another preference criterion worse",
"if every individual prefers a particular option to another,
then so must the resulting societal preference order;
this, again, is a demand that the social welfare function will be
minimally sensitive to the preference profile",
annotation:
"an inequitable situation (e.g., one where one agent has everything)
may be Pareto efficient because a change would worsen the
preference of the originally advantaged one"
"the
liberal paradox shows that when people have preferences
about what other people do, the goal of Pareto_efficiency
can come into conflict with the goal of individual liberty"
"Pareto-efficiency can exploit ordinal utility",
\. Group-envy-freeness ) ) //defined above
(
Allocative_efficiency
:= "an allocation (of pos./neg. stuff) is allocative efficient iff
it is impossible to make an Allocative-efficiency_improvement
about it, i.e. impossible to reallocate so as to make gainers
gain more than losers lose, e.g. as when marginal benefit to
consumers is equal to the marginal cost of producing, or
as when the skill demanded by a contract-offering party is the
same as the skill of the agreeing party",
annotation: "an Allocative-efficiency_improvement allows some loss,
not a Pareto_improvement, thus neither Pareto_efficiency and
Allocative_efficiency are subtypes of one another" )
} )
} ) //end of Economic_efficiency
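/* Illustrative aside (not part of the ontology): the weak/strong Pareto notions above,
   as executable checks; 'utilities' is a hypothetical function mapping an allocation to
   the tuple of the recipients' utilities.

     def pareto_dominates(u, v):
         """u Pareto-dominates v: nobody worse off, somebody strictly better off."""
         return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

     def is_pareto_efficient(allocation, alternatives, utilities):
         u = utilities(allocation)
         return not any(pareto_dominates(utilities(alt), u) for alt in alternatives)

     def is_weakly_pareto_efficient(allocation, alternatives, utilities):
         u = utilities(allocation)      # weak version: blocked only if EVERYONE gains
         return not any(all(a > b for a, b in zip(utilities(alt), u))
                        for alt in alternatives)
*/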
(
Monotonicity_for_a_social-welfare-function_based_process
=
Positive_association_of_social_and_individual_values,
:= "if the utility of an individual increases while all other utilities remain equal,
the relationship should strictly prefer the second profile, e.g., it should prefer
the profile (1,4,4,5) to (1,2,4,5)"
"if any individual modifies his/her preference order by promoting a particular option,
then the societal preference order should respond only by promoting that same option
or not changing, never by placing it lower than before; an individual should not be
able to hurt an option by ranking it higher",
annotation: "this monotonicity is not the
monotonicity on an agent's preferences
(which is a subtype of
local nonsatiation of an agent's preferences)",
\.
Pareto_efficiency )
(
Non-imposition_for_a_social_welfare_function_based_process
= Citizen-sovereignty_for_a_social_welfare_function_based_process,
:= "every possible societal preference order should be achievable by some set of
individual preference orders; this means that the social welfare function is
surjective: it has an unrestricted target space",
\.
Pareto_efficiency )
(
Independence-of-irrelevant-alternatives_for_a_social_welfare_function_based_process
:= "the social preference between x and y should depend only on the individual
preferences between x and y (pairwise independence); more generally, changes in
individuals' rankings of irrelevant alternatives (ones outside a particular subset)
should have no impact on the societal ranking of the subset, e.g., the introduction
of a third candidate to a two-candidate election should not affect the outcome of
the election unless the third candidate wins",
\.
Pareto_efficiency
).
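/* Illustrative aside (not part of the ontology): the monotonicity attribute above,
   tested on the simplest cardinal social-welfare ordering (the sum of utilities); the
   profiles are the ones used as examples in the definition.

     def utilitarian_prefers(u, v):
         return sum(u) > sum(v)

     # raising one individual's utility while all the others stay equal
     # must yield a strictly preferred profile:
     assert utilitarian_prefers((1, 4, 4, 5), (1, 2, 4, 5))
*/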
Equity_related_attribute_for_a_fair-or-ethical_process
/^ Ethicality_to_be_used_wrt_attribute __[believer: pm default],
\. p{ (
Equity_for_a_fair-or-ethical_process
annotation: "Equity might be equivalent to the union of some senses of
Liberty and Optimality_of_satisfaction_repartition",
"The
Price of fairness is quantitative measure of the loss of
egalitarian /
utilitarian welfare that society has to take
in order to guarantee fairness utilitarian social welfare",
\. e{ (
Equality_based_equity_for_a_fair-or-ethical_process
/^ "Equity where the people affected by the process receive the same thing
regardless of their efforts or preferences",
\. (Biomedical-justice_as_ethical_attribute
annotation: "from the Belmont report",
= ^"attribute referring to the obligation to treat each person fairly and equally,
whether in benefits or in risks",
:=> "resources, efforts, contributions and merits should be equally allocated
to people no matter their race, religion, gender, etc." )
)
(
Non-equality_based_equity_for_a_fair-or-ethical_process
\. (Need_based_equity_for_a_fair-or-ethical_process
:=> "People in the group with the greatest needs are provided with the necessary
amount of resources required to meet those needs" )
(Effort_based_equity_for_a_fair-or-ethical_process
:=> "The members' outcomes are based on their inputs to the group effort.
Someone who has given more time, money, energy, risk, or other input,
should receive more than someone who has contributed less" )
(Risk-or-responsibility_based_equity_for_a_fair-or-ethical_process
:=> "Those who have risked more or had higher responsibilities receive more" ) )
}
(Non-maleficence_as_ethical_attribute
= ^"attribute referring to the obligation to not intentionally harm people",
\. (Beneficence_as_ethical_attribute
= ^"attribute referring to the obligation to consider people's best interests and act
so as to increase their welfare; this attribute is implied by the union of some
senses of Equity and Optimality of satisfaction repartition (OSR), as illustrated
by some subtypes of this type" ) )
)
(Attribute_used_for_calculating_equity_for_a_fair-or-ethical_process
\. (
Non-dictatorship_for_a_social_welfare_function_based_process
:= "The social welfare function should account for the wishes of multiple voters;
it cannot simply mimic the preferences of a single voter" )
(
Pigou-Dalton-attribute_for_a_social_welfare_function_based_process
/^ ^"attribute allowing one to know whether the function/order prefers allocations that
are more equitable, e.g., a transfer of utility from the rich to the poor is
desired as long as it does not bring the rich to a poorer situation than the poor" ) )
}
e{ Equity_related_attribute_for_an_ordinal-only_based_fair-or-ethical_process
   Equity_related_attribute_for_a_cardinal-only_based_fair-or-ethical_process
}
(Equity_related_attribute_for_a_fair-or-ethical_allocation_or_repartition
\. p{ (
Equity_for_a_fair-or-ethical_allocation_or_repartition
/^
Equity_for_a_fair-or-ethical_process,
\. e{ (
Equality_based_equity_for_a_fair-or-ethical_allocation_or_repartition
/^ Equality_based_equity_for_a_fair-or-ethical_process,
\. (
Fairness_equality-for-equity
= Economic_equality Economic_equitability Economic_fairness,
:= "an allocation (of a positive/negative stuff to recipient agents) is equitable
iff for each recipient his value for his share is equal to any other
recipient's value for this other recipient's share",
/^ ^"
Equality of all recipients' values for all shares",
:=> "for more than 2 recipients, a division cannot always both be
equitable and envy-free",
\. (
Equality_based_equity_for_a_distribution ?meas /^ Measure,
:= [ [a Process attr: ?meas, input: ^i, recipient: {1..* ^r ^r2} ^rs,
result_state: [^r owner of: (^share part of: ^i,
value: value-for(^r,^share))]
] <=>
[value-for(^r,^share owner:^r) = value-for(^r2,^share2 owner:^r2)]
] ),
(
Proportional_equitability
:=> "for more than 2 recipients, a division can be
proportional-equitable and envy-free" ) ) )
(
Non-equality_based_equity_for_a_fair-or-ethical_allocation_or_repartition
/^ Non-equality_based_equity_for_a_fair-or-ethical_process,
\. (
Consensuality_for_a_fair-or-ethical_allocation_or_repartition
\. (
Agreed-value-for-each-share_based_equity
:= "an allocation (of input pos./neg. stuff) is an exact-division iff
for each share, its value is the same according to each recipient",
:=%
"Exact-division equalizes each recipients's values for same shares",
\. (
Exact_division_based_equity ?meas /^ Measure,
:= [ [a Process attr: ?meas, input: ^i, recipient: {1..* ^r} ^rs,
result_state: [^r owner of: (^share part of: ^i,
value: value-for(^r,^share))]
] <=>
[value-for(^r,^share owner:^r) = a same value ?v] ],
\. (Perfect_division_based_equity ?meas
:= [ [a Process attr: ?meas, input:^i, recipient:{1..* ^r} ^rs,
result_state: [^r owner of: (^share part of: ^i,
value: value-for(^r,^share))]
] <=> [value-for(^r,^share owner:^r) = a same Value ?v
= value-for(^r,^i) / n]
] ) ) )
(
Near-exact-division_based_equity //kind of generalization of the previous
\. (Measure_of_near-exact_division ?meas /^ Measure,
:= [ [a Process attr: ?meas, input: ^i, recipient: {1..* ^r} ^rs,
result_state: [^r owner of: (^share part of: ^i,
value: value-for(^r,^share)) ]
] <=> [value-for(^r,^share owner:^r) < a same Value ?v] ] ) )
(
Equal-treatment-of-equals-in-welfare_based_equity
/^ ^"allocation (of input pos./neg. stuff) which is directly envy-free or
becomes it when the used compensation (monetary transfers, ...) are taken
into account in the value functions of the recipients",
\. (
Envy-freeness_based_equity
:= "an allocation (of input pos./neg. stuff) is envy-free iff for each
recipient his value for his share is greater than or equal to his value for
each other share",
:=% "
Envy-freeness equalizes each recipient's value for each share",
\. (Envy-free_based_equity ?meas /^ Measure,
:= [ [a Process attr:?meas, input:^i, recipient:{1..* ^r ^r2} ^rs,
result_state: [^r owner of: (^share part of: ^i,
value: value-for(^r,^share))]
] <=>
[value-for(^r,^share owner:^r) >=
value-for(^r,^share2 owner:^r2)] ],
annotation: "for
allocation rules in particular queueing problems,
Envy-freeness = Pareto-efficiency + Strategy-proofness +
Attribute_for_equal-treatment-of-equals-in-welfare" )
(
Group-envy-freeness_based_equity = Coalition-fairness_based_equity,
:= "an allocation (of input pos./neg. stuff) is group-envy-free iff
no possible group of recipients could make all its members better off by
taking the shares of another group of the same size" ) ) )
) )
} )
(Attribute_used_for_calculating_equity_for_a_fair-or-ethical_allocation_or_repartition
/^ Attribute_used_for_calculating_equity_for_a_fair-or-ethical_process,
\. { (
Proportionality_for_fairness
:= "an allocation (of input pos./neg. stuff) is proportional iff, for each recipient,
his value of his share is at least 1/n of his value for the total input",
\. (
Measure_of_proportional_division ?meas /^ Measure,
:= [ [a Process attr:?meas, input:^i, recipient:{1..* ^r} ^rs,
result_state:[^r owner of:(^share part of:^i, value: value-for(^r,^share))]
] <=>
[value-for(^r,^share owner:^r) >= value-for(^r,^i) / n] ] ),
annotation: "proportionality is compatible with individual rationality of
Rational_choice_theory"
"When all valuations are additive set functions,
- with 2 recipients, proportionality and envy-freeness are equivalent,
- with at least 3 recipients, envy-freeness implies proportionality" )
(
Super-proportionality_for_fairness
:= "an allocation (of input pos./neg. stuff) is super-proportional iff
for each recipient his value of his share is more than 1/n of his value for
the total input",
:=> "not all partners have the same value measure"
annotation: "when not all partners have the same value measure and the valuations
are additive and non-atomic, a super-proportional allocation exists",
\. (Measure_of_super-proportional_division ?meas /^ Measure,
:= [ [a Process attr:?meas, input:^i, recipient:{1..* ^r} ^rs,
result_state:[^r owner of:(^share part of: ^i,
value: value-for(^r,^share)) ]
] <=> [value-for(^r,^share owner:^r) > value-for(^r,^i) / n] ] ))
(
Non-bossiness
:= "if a recipient's change in her announcement does not affect her share,
then it should not affect any other recipient's share" )
(
Fairness_symmetry
:= "recipients with equal costs should be treated symmetrically, i.e. if there
is another allocation in which two recipients exchange their shares and the
other recipients keep theirs, then this allocation should also be selected",
\. (
Fairness_anonymity
:= "the process should not need to know who the recipients are" ) )
} )
} ).
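/* To make the above allocation-equity criteria concrete, here is a minimal Python
sketch (illustrative only, not part of the ontology; 'values' and 'shares' are
hypothetical representations) checking an allocation against three of these
criteria, assuming cardinal and additive valuations:

def is_proportional(values, shares):
    # Proportionality_for_fairness: each recipient values his own share at least
    # 1/n of his value for the total input (= the sum of all shares, by additivity)
    n = len(shares)
    return all(values[r][shares[r]] >=
               sum(values[r][s] for s in shares.values()) / n for r in shares)

def is_envy_free(values, shares):
    # Envy-freeness_based_equity: each recipient values his own share
    # at least as much as each other recipient's share
    return all(values[r][shares[r]] >= values[r][shares[r2]]
               for r in shares for r2 in shares)

def is_equitable(values, shares):
    # Fairness_equality-for-equity: each recipient's value for his own share
    # equals every other recipient's value for that other recipient's share
    own_values = [values[r][shares[r]] for r in shares]
    return all(v == own_values[0] for v in own_values)

# 2 recipients 'a' and 'b', 2 shares 0 and 1 (each valuation sums to 100):
values = {'a': {0: 60, 1: 40}, 'b': {0: 40, 1: 60}}
shares = {'a': 0, 'b': 1}
print(is_proportional(values, shares),  # True: 60 >= 100/2 for both
      is_envy_free(values, shares),     # True: 60 >= 40 for both
      is_equitable(values, shares))     # True: 60 == 60
*/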
2. Situation for Cooperation Or Fair/Ethical Acts
2.0. Introduction
Cooperation_related_situation
\. /* these first subtype relations were already asserted in
Section 1.1.1 of the file for Section 1 (→ about Knowledge Sharing):
Information-sharing_related_situation
(Representing_knowledge \. Representing_knowledge_for_a_particular_application
Representing_knowledge_for_knowledge-sharing
) */
(Cooperative-system_situation
agent: (1..* Agent agent of: a same Cooperative-system_situation),
\. (Cooperation,
:= ^"voluntary/involuntary set of interactions (between agents) that combine
efforts of the agents of these interactions",
\. (Collaboration := ^"cooperation between voluntary agents to achieve at
least one shared goal, e.g. a project"),
part: 1..* Information-sharing_related_situation //defined in Section 1.0
0..* Information analysing_or_synthesizing
0..* Decision_making, //detailed in
Section 4
)
p{ (Very-poorly_cooperative_situation_of_a_cooperative-system
attribute:=> a 0-20pc_fit Situation_cooperativeness )
(Poorly_cooperative_situation_of_a_cooperative-system
attribute:=> (a Situation_cooperativeness value: (a real minE: 0.2, maxNE: 0.8)) )
(Cooperative_situation_of_a_cooperative-system
attribute:=> a 80-100pc_fit Situation_cooperativeness,
\. (Optimally-cooperative_situation attribute:=> a 100pc_fit Situation_cooperativeness)
Scalable_cooperative_situation /* defined in Section 1.1.1 */ )
} ).
Cooperative_system := ^(set member: (1..* Agent agent of: a same Cooperative-system_situation)).
Agent_that_is-or-was-or-will-be_engaged_in_a_cooperation := ^(Agent agent of: a Cooperation),
\. p{ (Very-poorly_cooperative_agent attribute:=> a 0-20pc_fit Cooperativeness )
(Poorly_cooperative_agent
attribute:=> (a Cooperativeness value: (a real minE: 0.2, maxNE: 0.8)) )
(Cooperative_agent
attribute:=> a 80-100pc_fit Cooperativeness,
\. (Optimally-cooperative_agent attribute:=> a 100pc_fit Cooperativeness) )
}.
Situation_showing_that_the_agent-or-would-be-agent_has_a_negative_attribute
/^ (Situation_showing_that_the_agent-or-would-be-agent_has_a_particular_attribute
/^ Situation_with_a_relation_to_an_attribute ),
\. Act_showing_irresponsibility_on_the_part_of_its_agent
Absence-of-act_showing_irresponsibility_on_the_part_of_its_would-be_agent
Act_showing_unreliability_on_the_part_of_its_agent
Absence-of-act_showing_unreliability_on_the_part_of_its_would-be_agent
Act_showing_unfairness_on_the_part_of_its_agent
Absence-of-act_showing_unfairness_on_the_part_of_its_would-be_agent
Act_showing_uncooperativeness_on_the_part_of_its_agent
Absence-of-act_showing_uncooperativeness_on_the_part_of_its_would-be_agent .
Cooperation_tools //See also coop_fr.html#idealCollaborativeTool including 5, e.g.,
// "permettre a` ses utilisateurs de changer les re`gles par de'faut de manie`re collaborative"
2.0.2. Positive/Ethical/Fair Situation
Globally-positive-situation_according-to .[ {1..* Agent ?ag} ?ags] ?p /^ Situation,//= State or Process
:= [ [?p type: a ^(Type /^ Globally-positive-situation_according-to)] //wrt at least one subtype
¬°[?p !type: a ^(Type /^ Globally-positive-situation_according-to)]//closed-not not for possible others
],
\. e{ (Globally-positive-situation-wrt-attributes_according-to .[ {1..* Agent ?ag} ?ags] ?p
:= [ [?p attr: Globally-positive //?ags is distributive, @.?ags would be cumulative
] believer: f_percentage-necessary-for-adoption_according-to _(?ags) of ?ags
],
\. Globally-fair-or-ethical-situation-wrt-attributes_according-to )
(Globally-positive-situation-wrt-predicted-consequences_according-to .[ {1..* Agent ?ag} ?ags] ?p
:= [ [?p consequence _[inferencing: closed_world //not just: <= the union of KBs of ags
]: a Globally_positive Situation
] believer: f_percentage-necessary-for-adoption_according-to _(?ags) of ?ags
],
\. Globally-fair-or-ethical-wrt-predicted-consequences_according-to )
}
(Globally_positive_according-to_most = ^(Globally-positive-situation_according-to _(most Agent))) .
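/* As a rough Python illustration of the definitions above (hypothetical names;
f_percentage-necessary-for-adoption_according-to is abstracted into a simple
fraction parameter):

def globally_positive(situation, agents, believes_positive, necessary_fraction=0.5):
    # a situation is globally positive according to a set of agents if the
    # fraction of these agents believing it positive reaches the percentage
    # necessary for adoption (here, a strict majority by default)
    believers = sum(1 for a in agents if believes_positive(a, situation))
    return believers / len(agents) > necessary_fraction
*/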
Ethicality-or-fairness_related_state-or-process
= Ethicality-or-fairness_related_state-or-process_wrt_attr //used in
Section 2.3
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_situation) ?aType], /^ Situation,
\. v_p{ (Ethicality-or-fairness_related_state /^ State,
\. (Ethicality-or-fairness_related_satisfaction-or-dissatisfaction
descr: a Like-or-dislike-or-preference_description )
(Ethicality-or-fairness_related_process /^ Process)
}
e{ (Unethical-or-unfair_state-or-process
\. n{ (Unethical-or-unfair_state-or-process_wrt_attr
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_situation) ?aType]
attr:=> 1..* ^(Type =>! each ?aType value: less than 0.5)
1..* Negative_attribute,
\. v_p{ Unethical-or-unfair_process_wrt_attr
(State_that_is_unethical-or-unfair_wrt_attr /^ State,
attr: a State_ethicality _(?aType),
\. e{ (Satisfaction_that_is_unethical-or-unfair_wrt_attr ?s
descr:= an Atomic_positive-liking_description,
:= [?s consequence of: an Unethical-or-unfair_process_wrt_attr _(each ?aType)]
"a satisfaction is unethical wrt some criteria
iff 'making this satisfaction happen' is unethical wrt these criteria" )
(Dissatisfaction_that_is_unethical-or-unfair_wrt_attr ?s
descr:= an Atomic_negative-liking_description,
:= [! ?s consequence of: an Unethical-or-unfair_process_wrt_attr _(each ?aType)]
"a dissatisfaction is unethical wrt some criteria
iff 'making this dissatisfaction not happen' is unethical wrt these
criteria" )
} )
}
c{
//really complete?
Unethical-or-unfair_state-or-process_wrt_no_more-than-minimal_liberty-restriction //2.3.1
Unethical-or-unfair_state-or-process_wrt_equity //2.3.2
Unethical-or-unfair_state-or-process_wrt_optimality_of_satisfaction_repartition //2.3.3
}
(
Not_trying_to_prevent_an_unethical-or-unfair_state-or-process_wrt_attr
//indirectly a default preference via default ones on fairness and hence unfairness
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_situation) ?aType,
0..* Process ?unpreventedUnethicalProcess, 1..* Agent ?ag] ?nSit
:= [?nSit = [?ag !agent of: (a Process consequence: !?unpreventedUnethicalProcess)]] )
)
(Unethical-or-unfair_process_according-to_wrt_attr .[1..* Agent ?ag,
1..* ^(Type /^ Action_ethicality) ?ert]
attr:=> (an Attribute type: (a Type exclusion: each (Type /^ ?ert))
) __[believer: ?ag] )
} )
(Ethicality-or-fairness_related_state-or-process_that_is_not_unethical-or-unfair
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_situation) ?aType]
= Ethicality_compatible_state-or-process_wrt_attr, //specialized in Section 2.3
\. v_p{ (Ethicality_compatible_state_wrt_attr /^ State,
\. e{ Ethicality_compatible_satisfaction_wrt_attr
Ethicality_compatible_dissatisfaction_wrt_attr } )
(Ethicality_compatible_process_wrt_attr /^ Process)
}
e{ Ethicality-or-fairness_related_state-or-process_that_is_neither_ethical_nor_unethical
Fair-or-ethical_state-or-process_wrt_attr
}
(Process_aimed-to-be_or_to-enable_a_fair-or-ethical_act
\. e{
Process_aimed_to_be_fair-or-ethical
(Process_not-aimed-to-be-fair-or-ethical_but_enabling_a_process-aimed-to-be-fair-or-ethical
part of:=> 1..* Process_aimed_to_be_fair-or-ethical )
}
._(type _[.->.]: Type-set_wrt _(part_of_supertype))
) )
}
.
Process_aimed_to_be_fair-or-ethical .[1..* ^(Type /^ Action_ethicality) ?attrType] ?p /^ Process,
attr:=> 1..* ?attrType,
/^ Cooperation_related_process, //e.g., all agents ?a must share all relevant info, ...
experiencer: 1..* Agent ?e, //not just Agent_able_to_experience_satisfaction_or_dissatisfaction
// since i. software agents represent people, ii. fair, not just ethical
agent: each ?a,
\. (Fair-or-ethical_process_wrt_attr
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?aType]
attr:=> (each ?aType value: 1),
\. (Globally-fair-or-ethical-process_according-to .[ {1..* Agent ?ag} ?ags] ?p
:= [ [?p type: a ^(Type /^ Globally-fair-or-ethical-process_according-to)
] //wrt at least one subtype
¬°[?p !type: a ^(Type /^ Globally-fair-or-ethical-process_according-to)
] //closed-not not
], // for possible others
\. e{ Globally-fair-or-ethical-process-wrt-attributes_according-to
Globally-fair-or-ethical-wrt-predicted-consequences_according-to
} ),
(Fair-or-ethical_process-considering-previous-ethical-or-unethical-actions-of-recipients_wrt_attr
attr:=> an Action_ethicality_considering-previous-ethical-or-unethical-actions-of-recipients_wrt_attr
)
(
Fair-or-ethical_process_wrt_no_more-than-minimal_liberty-restriction
= Fair-or-ethical_process_wrt_attr _(Ethicality_wrt_no_more-than-minimal_liberty-restriction)
Action_with_no_more-than-minimal_liberty-restriction,
/^ ^"Action ?a that does not restrict any of its 'recipient/experiencer conscious agents' ?cas'
or only within the following constraints:
i. (for the duration necessary) to inform some of the ?cas of negative consequences of an
action that they are about to do and that they seem unaware of (
soft paternalism),
ii. to improve the global welfare the ?cas, according to each
Minimally-restrictive_value/preference and
Minimally-restrictive_welfare_aggregation_function of these recipients/experiencers
(note: this is more constrained than
'welfare paternalism' and is exclusive with
moral paternalism);
auto-paternalism (paternalism to future self), more precisely,
more-than-minimal self-restriction, is when a person
* binds himself - or allows someone to bind him - in a way he cannot unbind himself
(in a 'pseudo-restriction', the person is meant to be able to unbind herself), or
* harm himself - or allows someone to harm him - with long-term consequences, hence
possibly ignoring the preferences of its future self (unless there is a proof that
the harm result in a current gain that outweights future self preferences);
this does not cover suicide since the future self is not supposed to exist;
in many countries, allowing someone to bind/kill oneself is not legal: no legal agreement
can cover this",
consequence of:=> a Decision-making_of_action_with_no_more-than-minimal_restriction,
\. p{ Action_with_no_liberty-restriction
(Minimally-restrictive_process /^ Restriction_of_an_agent,
\. p{ (Minimal_self-restriction
= ^(Restriction_of_an_agent agent: an Agent ?a, object: ?a),
annotation: "details are in the informal definition of
Action_with_no_more-than-minimal_liberty-restriction",
\. Pseudo_self-restriction Successful_suicide_with_at-most-moderate_pain
^"Agreeing for some else to make decisions for oneself as long as the decisions
are not against oneself's strong preferences or have long-term consequences
that are against a possible strong preference of a future self" )
(Minimal_restriction_of_another
= ^(Restriction_of_an_agent agent: an Agent ?a, object: (an Agent != ?a)),
\. (Minimally-restrictive_forbidding
annotation: " 'Forbidding any Forbidding' does not make sense but
'Forbidding any more-than-minimally-restrictive_forbidding'
can/does " )
(Minimal_restriction_to_prevent_someone_to_commit_an_unethical_act_wrt_attr
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_situation)
?aType] ?r
= ^"Restriction_to_prevent_someone_to_commit_an_unethical_act
which is minimal (and hence fair/ethical) because the disatisfaction
it inflicts on the someone is inferior or equal to the disatisfaction
it would have inflicted on someone else",
:= [ [?r object: ?a1, consequence: [?a1 experiencer of: (an
Ethicality_compatible_disatisfaction_wrt_attr _(?aType) value: ?a1v)] ]
[ [! ?r] => [?a1 agent of: (a Restriction_of_an_agent object: ?a2,
consequence: [?a2 experiencer of: (an
Ethicality_compatible_disatisfaction_wrt_attr _(?aType) value: ?a2dv)] )]
]
[?a1v =< ?a2v] ] ) )
} )
}
(
Action_with_no_more-than-minimal_restriction-or-hurt-or-envy
/^ Fair-or-ethical_process_wrt_attr _({Ethicality_wrt_no_more-than-minimal_liberty-restriction,
Equity_for_a_fair-or-ethical_process,
Optimality_of_satisfaction_repartition} ),
attr:=> an Action_ethicality_with_no_more-than-minimal_restriction-or-hurt-or-envy,
consequence of:=> a
Decision-making_of_action_with_no_more-than-minimal_restriction-or-hurt-or-envy )
)
(
Fair-or-ethical_process_wrt_equity
= Fair-or-ethical_process_wrt_attr _(Equity_for_a_fair-or-ethical_process) )
(
Fair-or-ethical_process_wrt_optimality_of_satisfaction_repartition
= Fair-or-ethical_process_wrt_attr _(Optimality_of_satisfaction_repartition),
:=>
//as a representation of "any equitable optimal process should be such that ... "
![?p better-than_wrt_attr _({1..* ?aType}): (a Process ?p2 != ?p)],
//i.e., ?p does (→ here, should) not have a better-than relation for each of the ?aType )
)
n{
Decision-making_process_aimed_to_be_fair-or-ethical //detailed below //main group
(Fair-or-ethical_process_resulting_from_a_decision-making_process
consequence of @:=> (a Fair-or-ethical_decision-making_process agent: 1..* Agent ?a)
) // '@': subtypes generated incrementally for navig even if no subtype/instance
// '@' is not needed if there is at least 1 non-generated subtype/instance
} // with a Preference-based_minimally-restrictive_welfare-efficient_decision-making_process
// the following partition should not matter for the recipients/experiencers ?e of this process
p{ (Fair-or-ethical_process_where_all_recipients_are_co-deciders
\. Bargaining-or-negotiating_for_fairness-or-ethicality_purposes
Consensus_based_decision-making )
(Fair-or-ethical_process_where_not_all_recipients_are_co-deciders
:=> "the decider(s) must ..." ) //even for justice
}
._(type _[.->.]: Type-set_wrt _(part_or_method))
n{ Evaluating_aimed_to_be_fair-or-ethical Allocation-or-repartition_aimed_to_be_fair-or-ethical
(
Justice /^ ^"Reparation of some wrong-doing",
\.
Distributive_justice Restorative_justice Retributive_justice
(
Procedural_justice
/^ Bargaining-or-negotiating_for_fairness-or-ethicality_purposes
^"Resolving_a_conflict_or_dividing-benefits-or-burdens" )
)
}
._(type _[.->.]: Type-set_wrt _(role)) .
2.1. Ethical/Fair Decision Making Process
Decision-making_process_aimed_to_be_fair-or-ethical
/^ ^"Gathering of the alternatives and then choice between them",
agent: 1..* ^"Agent that, before making the final decision, is aware the alternatives ..." ?decider,
consequence: (a Process agent: 1..* Agent ?ag, experiencer: 1..* Agent ?e),
\. (Fair-or-ethical_decision-making_process_wrt_attr
.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?aType]
\. (Fair_decision-making_process_considering-previous-ethical-or-unethical-actions-of-recipients_wrt_attr
/^ Fair-or-ethical_process-considering-previous-ethical-or-unethical-actions-of-recipients_wrt_attr )
n{ (Fair-or-ethical_decision-making_process_wrt_liberty
/^ Fair-or-ethical_process_wrt_no_more-than-minimal_liberty-restriction,
\. (
Decision-making_of_action_with_no_more-than-minimal_liberty-restriction
attr:=> an Action_ethicality_wrt_no_more-than-minimal_liberty-restriction,
\. (
Decision-making_of_action_with_no_more-than-minimal_restriction-or-hurt-or-envy
\. n{ //not exclusive and, actually, preferably used together:
Decision-making_wrt_argumentations_and_selected-ethicalities-conform-preferences
//.[1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?aType]
Decision-making_based_on_preferences_and\
using_weights_to_avoid_tactical-voting_and_majority-dictatorship
} ) ) )
(Fair-or-ethical_decision-making_process_wrt_equity
/^ Fair-or-ethical_process_wrt_equity )
(Fair-or-ethical_decision-making_process_wrt_optimality_of_satisfaction_repartition
annotation: "Using
Normative decision theory is a way to identify optimal decisions, where
optimality is often determined by considering an ideal decision-maker who is
able to calculate with perfect accuracy and is in some sense fully rational",
/^ Fair-or-ethical_process_wrt_optimality_of_satisfaction_repartition )
} )
p{ Decision-making_process_aimed_to_be_fair-or-ethical_not_based_on_a_formal_mechanism
(
Decision-making_process_aimed_to_be_fair-or-ethical_based_on_a_formal_mechanism //attr: Section 1.1
annotation: "as illustrated in
'On Dividing Justly' [Yaari & Bar-Hillel, 1983],
the results of formal mechanisms for fairness purposes can often differ
from human-based decisions that take into account ethical judgements or
moral intuitions; hence, for each situation, the choice of the mechanism
to apply should depend on the criteria it takes into account and satisfies
in that situation",
\. p{ (Formal_bargaining-or-negotiating_for_fairness-or-ethicality_purposes
/^ Bargaining-or-negotiating_for_fairness-or-ethicality_purposes,
\.
Bargaining_from_equal_split Bargaining_from_Zero
Bargaining_Over_the_Strong_Pareto_Set )
(Formal_non-bargaining-or-negociating-based_process_aimed_to_be_fair-or-ethical
\.
Competitive_equilibrium_from_equal_split
(
Maximin_based_process parameter: 0..1 Minimum_based_social-welfare_function )
(
Largest_equally_distributed_pairwise_pivotal_rule_for_queueing_problems
annotation: "
for particular queueing problems, this is the only rule that
satisfies Pareto-efficiency, equal treatment of equals in
welfare, symmetry and strategy-proofness" )
(Formal_fair-or-ethical_allocation-or-repartition_decision-making_process
consequence: 0..* Allocation-or-repartition_aimed_to_be_fair-or-ethical,
\. e{ (
Social_welfare_function_based_process
\. Social_welfare_function )
(
Process_using_a_social_welfare_function_based_process //Utilitarianist process
part_or_parameter:= 1..* Social_welfare_function )
} )
) )
} )
}
._(type _[.->.]: Type-set_wrt _(method))
Social_welfare_function //see Section 1 for its attributes
annotation: "explicitly or implicitly temporary calculates and/or uses
a Total_order_relationship_on_utility_profiles", //see below
parameter: 0..1 Total_order_relationship_on_utility_profiles, //see below
result: 1..* (
Utility_profile
/^ Information_object,
:= ^"(Description of a) Set of cardinal utility values, each being the value of a
possible allocation of some positive/negative stuff to a particular recipient;
thus, the set represents the value that possible allocation for its recipients" )
\. e{ (Social-welfare_function_for_a_given_set_of_individual_preferences_or_welfare_rankings
= Bergson-Samuelson_social-welfare_function )
(Rule_based_social-welfare_function_for_possible_sets_of_individual_preferences_or_welfare_rankings
= Arrow_social-welfare_function )
}
e{ Ordinal_social-welfare_function
(
Cardinal_social-welfare_function
\. (Total_based_cardinal_social-welfare_function
\. (Utilitarian-or-Benthamite_cardinal_social-welfare_function
/^ ^"Cardinal_social-welfare_function summing the individual incomes" ) )
Average_based_social-welfare_function
(Minimum_based_social-welfare_function
\. (Max-Min-or-Rawlsian_social-welfare_function
annotation: "e.g. based on the Gini index, Atkinson's indexes or Theil index" ) ) )
}
Social_welfare_function_using_weights_to_avoid_tactical-voting_and_majority-dictatorship . //right below
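/* As a rough Python illustration of the cardinal subtypes above (a real
Social_welfare_function works on the Utility_profile objects defined above;
here a profile is simply a list of cardinal utility values, one per recipient):

profile = [30, 50, 20]

def utilitarian(p):    # Total_based (Utilitarian-or-Benthamite): sum of the individual values
    return sum(p)

def average_based(p):  # Average_based_social-welfare_function
    return sum(p) / len(p)

def minimum_based(p):  # Minimum_based (Max-Min-or-Rawlsian): welfare of the worst-off
    return min(p)

print(utilitarian(profile), average_based(profile), minimum_based(profile))
# -> 100 33.33... 20 : these functions may rank two utility profiles differently
*/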
2.1.1. Social welfare function using weights to avoid tactical-voting and majority-dictatorship
Social_welfare_function_using_weights_to_avoid_tactical-voting_and_majority-dictatorship
attr: an Ethicality_to_be_used_directly __[believer: pm default], //default pref.
consequence:=> (a Process_aimed_to_be_fair-or-ethical ?consequenceProcess
experiencer: {1..* Agent ?e} ?experiencers),
input:=> a Like-or-dislike-or-preference_description _(each ?e) ?likesDescr //→ ThisKB as global param.
//parameter: a Weight ?w, //see commented out details in the HTML source
annotation: "i. to avoid tactical voting, this process normalizes the weights that exist on its inputs so that
each ?e should have the same 'maximum utility value for the preferences which, for the current
vote, are relevant and contradictory (e.g. an utility of '++++' for 'a water temperature of at
least 26 degrees" and '+++' for 'economy of energy' when the temperature of a shared pool has
to be decided);
ii. to avoid majority-dictatorship, this process gives more weight to exceptional
pain (negative utilities) - exceptional in intensity and rarity - than to
non-exceptional satisfactions (positive utilities);
to do that, the aggregation function that calculates 'the global utility of a decision (i.e.
an utility profile)' can for example use a formula such as the next ones
(where N is the number of people taken into account):
* sum( eachIndividualPositiveUtility / N, eachIndividualNegativeUtility / 2 )
with this formula, pleasure is worth 2/N as much as much as pain;
* sum( eachIndividualPositiveUtility / N, eachIndividualNegativeUtility)*2/(N*(1-y) )
→ an individual pleasure of x% is worth x/2*(1-y) as an individual pain of y%
i.e., 1. an individual pleasure of 50% is worth half as much as an individual pain of 50%,
2. an individual pleasure of 50% is worth 1/20 as an individual pain of 90% "
3. an individual pleasure of 10% is worth 1/100 as an individual pain of 90% "
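/* A minimal Python sketch of the two points above (illustrative names; point i
is reduced to a simple rescaling, and point ii uses the first formula above):

def normalize(prefs, target_max=1.0):
    # point i (sketched): rescale one voter's relevant contradictory preferences
    # so that every voter has the same maximum utility value
    m = max(abs(v) for v in prefs.values())
    return {k: v * target_max / m for k, v in prefs.items()}

def global_utility(profile):
    # point ii, first formula: each positive utility is divided by N but each
    # negative utility only by 2, so pleasure is worth 2/N as much as pain
    n = len(profile)
    return sum(u / n if u >= 0 else u / 2 for u in profile)

# 9 mildly satisfied persons do not outweigh 1 person in great pain:
print(global_utility([0.5] * 9 + [-1.0]))  # 9*(0.5/10) - 1.0/2 = -0.05
*/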
2.1.2. Decision making based on logical argumentation and non-restrictive preferences
Decision-making_wrt_argumentations_and_selected-ethicalities-conform-preferences
.[{1..* ^(Type /^ Attribute_of-and-used-for_a_fair-or-ethical_process) ?ert} ?erts
] //could be removed since this def. relies on ThisKB as global parameter anyway
?dmProcess
attr: an Ethicality_to_be_used_directly __[believer: pm default], //default pref.
agent: 1..* Agent ?a,
consequence:=> (a Process_aimed_to_be_fair-or-ethical ?consequenceProcess
experiencer: {1..* Agent ?e} ?experiencers),
input:=> 1 Goal_situation ?goalS 0..* (^"Prospective_decision" /^ Information_object) ?pd
(a Like-or-dislike-or-preference_description_compatible_with _(?ert) _(each ?e) ?likesDescr
//if not already in ThisKB, each ?e adds to it via ?likesDescr //→ ThisKB: global param.
result of: (a Gathering_of_Like-or-dislike-or-preference_of_agents
_(each ?e, ?pd, ?temporalConstraints) -% ?likesDescr ) )
a Checking_of_the_addition-or-update_of_Like-or-dislike-or-preference_of_agents,
parameter: { a Point_in_time ?hardDeadline, a Duration ?minDurationBeforeHardDeadline,
a Duration ?minDurationAfterLastUnansweredAlternative } ?temporalConstraints,
result: (an Information_object ?finalDecision descr of: ?consequenceProcess,
member of: {?pd, each ^(Information_object descr: (a Process consequence: ?goalS)) } ?alts,
//<=> {?pd, 0..* ^(Information_object descr _[<= ThisKB /*closed-world inferencing*/]:
// (a Process consequence: ?goalS)) } ?alts,
better-than_wrt_argumentations_or_selected-ethicalities-conform-preferences //ThisKB: global param.
_(?erts)
_[believer: maximum of ?e //→ meant to select only the better-than relations (→ decisions)
// that are the most believed; + also applied with the used definitions
]: each ^(Decision != ?finalDecision, member of: ?alts)
). //the selected decision is one of the equally best_supported of the alternatives
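/* A minimal Python sketch of the selection step above (hypothetical data
structures; the real process works on better-than relations in ThisKB):
an alternative is kept iff each argument against it has itself been
successfully counter-argued. The sketch assumes an acyclic argument graph.

def best_supported(alternatives, objections, rebuttals):
    # objections[a]  : the arguments against alternative a
    # rebuttals[arg] : the arguments against argument arg
    def defeated(arg):  # an argument is defeated if one of its rebuttals stands
        return any(not defeated(r) for r in rebuttals.get(arg, []))
    return [a for a in alternatives
            if all(defeated(o) for o in objections.get(a, []))]

alts = ['d1', 'd2']
objections = {'d2': ['o1']}  # o1 objects to d2 ...
rebuttals = {'o1': ['r1']}   # ... but o1 is itself rebutted, and r1 stands
print(best_supported(alts, objections, rebuttals))  # ['d1', 'd2']
*/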
/* To add:
* preferences not about decision making procedures cannot be justified,
can only be of the form ` `P(X)' preference_according-to_(me): [-Q,+Q]'@me
or, more shortly, ` `P(X)' creator_preference: [-Q,+Q]'@me
(no restriction on Q; Q may be 100%; log scale can be used;
the sum of contradictory preferences value should be Q)
* preferences/belief about directives (e.g. decision making procedures to follow) should
- not allow someone's preferences to be maximized to more than Q
(for each decision) for avoiding people to maximize all their preferences
- be non-contradictory if applied to everybody,
and/or be justified wrt increasing global happiness
- by default be applied to everybody (to prevent tyrants/paternalists/...
e.g. "nobody decides for another against his/her will
unless she specified to and this does not cause serious harm"
"X can decide for me on Y aspects
as long as this does not cause me serious harm"
2023-02-20: including oneself seemingly irrationally wrt hurting (future) self
but - suicide is not hurting future self
- unless it is proved that the current gain outweighs the future self's preferences
- 2024-12-11, also in dp_KS.html#1.3.8.2:
"Any modif. in/to a shared space/resource (KB, agenda, ...) should
i. have an history supporting recursive undo, and
ii. be a ^"Process described by a statement about its optimality
which is not a Successfully-contradicted_statement"
*/
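/* A minimal Python sketch of the first constraint above (hypothetical
representation): each preference value stays in [-Q,+Q] and the values of
each group of contradictory preferences sum (in absolute value) to Q:

def valid_preferences(prefs, contradictory_groups, Q=100):
    if any(abs(v) > Q for v in prefs.values()):
        return False
    return all(sum(abs(prefs[p]) for p in group) == Q
               for group in contradictory_groups)

prefs = {'warm_pool': 60, 'save_energy': 40}
print(valid_preferences(prefs, [['warm_pool', 'save_energy']]))  # True
*/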
2.2. Pre-process: Checks (Workflow, Specs)
2.2.1. Gathering_of_Like-or-dislike-or-preference_of_agents
Gathering_of_Like-or-dislike-or-preference_of_agents
.[1..* Agent ?e, Information_object ?pd,
.{ a Point_in_time ?hardDeadline, a Duration ?minDurationBeforeHardDeadline,
a Duration ?minDurationAfterLastUnansweredAlternative } ?temporalConstraints,
] -% ?likesDescr,
/^ Process_not-aimed-to-be-fair-or-ethical_but_enabling_a_process-aimed-to-be-fair-or-ethical,
:=> "Each agent ?e that may be affected by a planned decision making with object ?pd
should be warned and able to correct+suggest ...",
part: (a Sending_of_notice_to _(each ?e,?pd,?hardDeadline - ?minDurationAfterLastUnansweredAlternative) ?s)
(a Getting_all_answers_from-of-before _(?e,?pd,?temporalConstraints) -% ?likesDescr
predecessor: ?s //to do: use ?minDurationAfterLastUnansweredAlternative
).
/* End when
iv) * no agent among ?as has used ?warning_minimal_duration
to send ?ag a PP which, with respect to the Agent-s_kbs of the ?as,
does an Agent-s_kbs-majoritarily-invalidation-or-improvement
(note: "majoritarily" can be replaced by another voting rule
if the ?as majoritarily agree), or
* ?ag has been able to do an Agent-s_kbs-invalidation-or-improvement on each
Agent-s_kbs-invalidation-or-improvement that he received and
has given enough time to the ?as to do an Agent-s_kbs-invalidation-or-improvement on
any previous (result of an) Agent-s_kbs-invalidation-or-improvement he sent.
iii) if only one agent must perform ?act,
- the competency or willingness of ?ag to do ?act must not have been
invalidated (Agent-s_kbs-majoritarily-invalidation(w.r.t. ?Agent-s_kbs)), and
- there is no agent ?ag2 different from ?ag that would be willing to do ?act
and such that the proposition "?ag2 would be better than ?ag to do ?act"
was validated (Agent-s_kbs-majoritary-validation(w.r.t. ?Agent-s_kbs)), and
//thus, role-related powers/responsibilities can be denied for any ?act
*/
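/* A minimal Python sketch of the temporal constraints above (hypothetical
names): when the notice must be sent at the latest, and when the gathering
may end:

from datetime import datetime, timedelta

def latest_notice_time(hard_deadline, min_duration_before_hard_deadline):
    # when the Sending_of_notice_to must happen at the latest
    return hard_deadline - min_duration_before_hard_deadline

def can_end(now, hard_deadline, last_alternative_time,
            min_duration_after_last_unanswered_alternative, all_answered):
    # the gathering ends once every agent has answered, or once the hard
    # deadline has passed and enough time has elapsed since the last
    # unanswered alternative was suggested
    if all_answered:
        return True
    return (now >= hard_deadline and
            now - last_alternative_time >= min_duration_after_last_unanswered_alternative)

deadline = datetime(2025, 1, 31)
print(latest_notice_time(deadline, timedelta(weeks=2)))  # 2025-01-17 00:00:00
*/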
2.2.2. Checking_of_the_addition-or-update_of_Like-or-dislike-or-preference_of_agents
Checking_of_the_addition-or-update_of_Like-or-dislike-or-preference_of_agents /*
* to encourage/enforce commitments and discourage changes for particular goals
(e.g., about death penalty (or other unequal decision) for these (kinds of) persons),
- any change to one's own belief/fact/preference should be PProved and any
restriction should be PProved too
- for the Agent-s_kb to remain consistent, this may previously require other changes
(which can then be exploited for other decisions)
- if a fact/preference/belief is changed more than once a year,
at least one change is logged every year (the one chosen by the author).
For anonymity purposes, pseudos can be used for Agent-s_kb author, as for vote authors
in some electronic voting systems. */
2.2.3. For Information Analysis/Synthesis
Repr/org/comp/eval/aggregating "best"/Kn./person/criteria:
2.2.4. For Tasks/Answering To Be Performed Timely, Fully And According to Specifications
2.2.4.1. For Questions to Be Timely And Fully Answered
Example of use of phmartin3@webkb.org:
The e-mail that is included below, addressed to
phmartin3@webkb.org, from now on referred to as "TheRecipient",
has not been delivered to TheRecipient for the following reason:
the source of this email, yy@yyy.yyy, from now on referred to as "TheSource" --
(you / your co-workers/aliases) //pm: one of the recipients of TheRecipient's email
has not answered TheRecipient's e-mail of xx/xx/xx xx:xx (Subject: xxxx)
via the tool at https://www.webkb.org/email?xx=xxx&yy=yyy
as TheRecipient asked TheSource to do, for the following reasons:
1) organizing the contents from TheSource's e-mails with TheRecipient to make this
content easily retrievable,
2) ensuring that TheSource answers all the questions that TheRecipient asked TheSource
for TheRecipient to be able to proceed further.
In lieu of a full answer, TheSource may of course provide a reason why a full
answer cannot be provided, but a reason needs to be provided, if only for
fairness purposes and for enabling TheRecipient to know and take this reason into
consideration for proceeding further.
If TheSource wishes to, the answers -- and the questions they answer --
can be automatically e-mailed to TheSource by the above-cited tool.
Hence, TheSource is again invited to use https://www.webkb.org/email?xx=xxx&yy=yyy
The attempt of TheSource not to use this medium has been stored on TheSource's profile at ...
This message has been automatically sent xx seconds after TheSource sent its
above-cited (and included below) e-mail, which was not delivered to TheRecipient.
2.3. Liberty/Equity/Optimality Uncompatible/Compatible/Justified State or Process
2.3.1. State-or-process_wrt_no_more-than-minimal_liberty-restriction
State-or-process_wrt_liberty-restriction
= Ethicality-or-fairness_related_state-or-process_wrt_attr //
Section 2.0.2
_(Ethicality_wrt_no_more-than-minimal_liberty-restriction),
\. p{ (
Unethical-or-unfair_state-or-process_wrt_no_more-than-minimal_liberty-restriction
= Unethical-or-unfair_state-or-process_wrt_attr
_(Ethicality_wrt_no_more-than-minimal_liberty-restriction),
\. p{ ^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction
but neither equity nor optimality_of_satisfaction_repartition"
(Unethical-or-unfair_state-or-process_wrt_equity_and_no_more-than-minimal_liberty-restriction
\. e{ (^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction
and equity but not optimality_of_satisfaction_repartition"
/^ Unethical-or-unfair_state-or-process_wrt_attr
_(Equity_for_a_fair-or-ethical_state-or-process),
!/^ Unethical-or-unfair_state-or-process_wrt_attr
_(Optimality_of_satisfaction_repartition) )
(^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction and
not equity but optimality_of_satisfaction_repartition"
!/^ Unethical-or-unfair_state-or-process_wrt_attr
_(Equity_for_a_fair-or-ethical_state-or-process),
/^ Unethical-or-unfair_state-or-process_wrt_attr
_(Optimality_of_satisfaction_repartition) )
(^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction
and equity and optimality_of_satisfaction_repartition" ) //below
} )
} )
(
State-or-process_compatible_with_no_more-than-minimal_liberty-restriction
= Ethicality_compatible_state-or-process_wrt_attr
_(Ethicality_wrt_no_more-than-minimal_liberty-restriction),
\. p{ Fair-or-ethical_state-or-process_wrt_no_more-than-minimal_liberty-restriction //defined in 2.0.2
State-or-process_compatible-with-but-not-satisfying_no_more-than-minimal_liberty-restriction
}
e{ ^"State-or-process_
compatible-with no_more-than-minimal_
liberty-restriction
but neither equity nor optimality_of_satisfaction_repartition"
(^"State-or-process_
compatible-with no_more-than-minimal_
liberty-restriction
and equity
but not optimality_of_satisfaction_repartition"
/^ State-or-process_compatible_with_equity )
(^"State-or-process_
compatible-with no_more-than-minimal_
liberty-restriction
and equity and optimality_of_satisfaction_repartition"
/^ State-or-process_compatible_with_equity
State-or-process_compatible_with_optimality_of_satisfaction_repartition )
} )
}.
/* @@@ above declared, not yet used:
^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction
but neither equity nor optimality_of_satisfaction_repartition"
^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction
and equity but not optimality_of_satisfaction_repartition"
/^ Unethical-or-unfair_state-or-process_wrt_attr _(Equity_for_a_fair-or-ethical_state-or-process),
!/^ Unethical-or-unfair_state-or-process_wrt_attr _(Optimality_of_satisfaction_repartition)
*/
^"
Unethical-or-unfair_state-or-process_
wrt no_more-than-minimal_
liberty-restriction
and equity and optimality_of_satisfaction_repartition" //above declared
/^ Unethical-or-unfair_state-or-process_wrt_attr _(Equity_for_a_fair-or-ethical_state-or-process)
Unethical-or-unfair_state-or-process_wrt_attr _(Optimality_of_satisfaction_repartition),
\. (
Healthcare_agent_deciding_for_patients_against_their_wishes
\. (Healthcare_agent_hiding_information_from_patients_against_their_wishes
annotation: "e.g., an option in medical home devices letting healthcare staff secretly change settings" ) )
\. (^"Not doing something or not doing it properly,
when that (is allowing/encouraging at least one action that is) resulting/contributing/participating
in a loss of Liberty, Equity and Optimality_of_satisfaction_repartition purposes, and
when being duty-bound to do it (e.g. due to a commitment) or
when doing so would cost you less disatisfaction i. than how much satisfaction it would bring to others,
and ii. than it would for others to do this
(note: if 'making the satisfaction happen' or 'making the disatisfaction not happen' is unethical wrt
no_more-than-minimal_liberty-restriction, Equity and Optimality_of_satisfaction_repartition',
the satisfaction/disatisfaction
does not count, e.g. i. a satisfaction from seing other
persons/animals suffer, and ii. a disatisfaction from not being praised enough)"
//all this is indirectly and formally represented in Ethical_agent_wrt, in a surprisingly simpler way:
// no apparent need for (di-)satisfaction /^ State_that_is_unethical-or-unfair_wrt_attr
\. n{ Not_allowing_something_globally_positive_because_this_does_not_lead_to_an_unethical_satisfaction
^"Not denouncing-or-preventing an 'Unethical-or-unfair_state-or-process_wrt
no_more-than-minimal_liberty-restriction and equity and
optimality_of_satisfaction_repartition"
Missing_or_being-late-to_an_appointment_without_warning_while_being_able_to_warn
(^"Not providing_a_correct_or_precise-enough_information for someone to make an informed choice
while being duty-bound to provide such information or being able to provide it at low cost"
\. Lying
^"Not answering a question in a way that would allow someone to make an informed choice
while being duty-bound to provide such information or being able to provide it at low cost"
^"Not properly reading a document or checking one's assumptions before evaluating, commenting
or answering to this documeent (e.g. an e-mail)"
^"When answering an e-mail, globally answering to several questions instead of using
quote+reply symbols to quote each question and answer right below it, thus avoiding
ambiguities and losses of context for the readers"
)
} //the next type is not here since it is wrt ethical attribute and hence declared in Section 2.0.2:
//
Not_trying_to_prevent_an_unethical-or-unfair_state-or-process_wrt_attr
)
(Deciding_for_others_even_when_they_can_decide_for_themselves
\. Deciding_what_is_best_for_others_even_when_they_can_decide_for_themselves )
Deciding_what_is_best_for_oneself_although_this_is_globally_not_optimal .
(Doing_something_unequitable_against_some_persons_with_more-than-minimal_liberty-restriction
\. Unequitable_positive-or-negative-discrimination_with_more-than-minimal_liberty-restriction
(^"Doing_something_unequitable_for_some_persons_at_the_expense_of_other_persons
with_more-than-minimal_liberty-restriction
\. Unequitably_protecting_the_ones_you_love_at_the_expense_of_other_persons
(Doing_something_unequitable_to_survive
\. The100#Doing_what_is_needed_to_survive )
//and "then working about getting one's humanity back"
(
Doing_something_that_is_globally_costly //@
\. Using_pesticide_for_growing_food
__[ <~~ "Pesticide kill and pollute, and this is cause more harm than satisfaction",
<~~ ""
] )
(
^"Doing something even though there are less costly methods"
\. Using_pesticide_for_growing_food
__[ <~~ ("Pesticides are the only method for growing sufficiend food"#bad
!<~~ (Fait_A Fait_B (Fait_C#good
!<-- (Proof_B#bad <-- (Fait_D#bad !<-- Fait_E#good )) ) )
<~~ ""
] )
) )
.
//these next 3 top types are declared below:
^"
Unethical-or-unfair_state-or-process_
wrt equity
but neither no_more-than-minimal_liberty-restriction nor optimality_of_satisfaction_repartition"
\. (^"
Using-without-inciting-to-buy a smartphone during a conversation
for a purpose unrelated to this conversation" //@
!<~~ ("It is not possible to use without inciting" !<-- "") )
Ordering_without_justifying .
^"
Unethical-or-unfair_state-or-process_
wrt equityx
and optimality_of_satisfaction_repartition
but not no_more-than-minimal_liberty-restriction"
\. Discrimination_with_no_more-than-minimal_liberty-restriction
(Being_negative_when_talking
\. Insisting_that_some_inevitable_event_will_turn_out_bad //whether actually knowing that or not
Frequently_complaining_about_something_that_cannot_be_changed_by_the_complaint_recipient ).
^"
Unethical-or-unfair_state-or-process_
wrt optimality_of_satisfaction_repartition
but neither equity nor no_more-than-minimal_liberty-restriction"
\. (Changing_the_intended_order_of_elements_in_a_list_without_warning_the_author_of_the_list
\. Gmail_sorting_attached_documents_wrt_the_alphabetic_order_of_their_names_after_e-mail_sending ).
^"State-or-process_
compatible-with no_more-than-minimal_
liberty-restriction //declared above
and equity and optimality_of_satisfaction_repartition" //to be defined below
\. ^"
Using-without-inciting-to-buy a smartphone during a conversation for
quickly and
efficiently checking facts related to this conversation" . //@
"
2.3.2. State-or-process_wrt_Equity
State-or-process_wrt_equity
= Ethicality-or-fairness_related_state-or-process_wrt_attr _(Equity_for_a_fair-or-ethical_state-or-process),
\. p{ (Unethical-or-unfair_state-or-process_wrt_equity
= Unethical-or-unfair_state-or-process_wrt_attr _(Equity_for_a_fair-or-ethical_state-or-process),
\. e{ (Unethical-or-unfair_state-or-process_wrt_equity_but_not_optimality_of_satisfaction_repartition
\. e{ ^"Unethical-or-unfair_state-or-process_wrt equity but neither
no_more-than-minimal_liberty-restriction nor optimality_of_satisfaction_repartition"
^"Unethical-or-unfair_state-or-process_wrt
no_more-than-minimal_liberty-restriction
and equity but not optimality_of_satisfaction_repartition"
} )
(Unethical-or-unfair_state-or-process_wrt_equity_and_optimality_of_satisfaction_repartition
/^ Unethical-or-unfair_state-or-process_wrt_attr _(Optimality_of_satisfaction_repartition),
\. e{ ^"Unethical-or-unfair_state-or-process_wrt equity
and optimality_of_satisfaction_repartition
but not no_more-than-minimal_liberty-restriction"
^"Unethical-or-unfair_state-or-process_wrt no_more-than-minimal_liberty-restriction
and equity and optimality_of_satisfaction_repartition"
} )
} )
(State-or-process_compatible_with_equity
= Ethicality_compatible_state-or-process_wrt_attr
_(Equity_for_a_fair-or-ethical_state-or-process),
\. p{ Fair-or-ethical_state-or-process_wrt_equity //defined in 2.0.2
State-or-process_compatible-with-but-not-satisfying_equity
}
e{ State-or-process_compatible_with_equity_but_not_optimality_of_satisfaction_repartition
(State-or-process_compatible_with_equity_and_optimality_of_satisfaction_repartition
\. ^"Deciding not to perform an act that would cost its agents more than
what it would bring to others in some statistical sense (average/median/maximum/...)"
)
} )
}.
2.3.3. State-or-process_wrt_optimality_of_satisfaction_repartition
State-or-process_wrt_optimality_of_satisfaction_repartition
= Ethicality-or-fairness_related_state-or-process_wrt_attr _(Optimality_of_satisfaction_repartition),
\. p{ (Unethical-or-unfair_state-or-process_wrt_optimality_of_satisfaction_repartition
= Unethical-or-unfair_state-or-process_wrt_attr _(Optimality_of_satisfaction_repartition),
\. e{ ^"Unethical-or-unfair_state-or-process_wrt optimality_of_satisfaction_repartition
but neither equity nor no_more-than-minimal_liberty-restriction"
^"Unethical-or-unfair_state-or-process_wrt equity
and optimality_of_satisfaction_repartition
but not no_more-than-minimal_liberty-restriction"
} )
(State-or-process_compatible_with_optimality_of_satisfaction_repartition
= Ethicality_compatible_state-or-process_wrt_attr _(Optimality_of_satisfaction_repartition),
\. p{ Fair-or-ethical_state-or-process_wrt_optimality_of_satisfaction_repartition //defined in 2.0.2
State-or-process_compatible-with-but-not-satisfying_optimality_of_satisfaction_repartition
} )
}.
2.4. Specializations, e.g. Repartitions (Work Allocation, ...)
2.4.1. For Work Allocation
Encouraging/enforcing the following of cooperation/"best" rules (fairness/equality/effort-minimization/security):
3. Axioms (Default Beliefs/Preferences, ...) for Cooperation Or Fair/Ethical Acts
and Their Justifications
3.1.
Ethical_criterion = Attribute_used_as_a_parameter_for_calculating_an_ethicality,
= ^"Criterion that statistically|majoritarily directly/indirectly benefit/contribute
(and hence, in a sense, specialize) the Optimality_of_satisfaction_repartition criteria)".
__[<=> "Any Ethical_criterion has to statistically directly/indirectly optimally benefit
the ethical satisfaction of all agents able to experience (dis-)satisfaction
(→ no arbitrary preference -> no dictatorship)"
=> "Any ethical action has to be an optimum (hence be logic too) wrt. available resources and
the ethical criteria/preferences of the action's recipients" ],
\. Optimality_of_satisfaction_repartition
(Equity_of_satisfaction_repartition
= ^"Criterion referring to the optimal and equal affectation of available resources wrt. ethical criteria"
) // also a universal/inescapable one: even voluntary martyrs do not escape it since their
// satisfaction comes from other people's satisfaction
(No_more-than-minimal_liberty-restriction
= ^"Criterion referring to the non-restriction of a (di-)satisfaction experiencing agent
in any way other than in a minimal way:
- minimal in time, e.g., just for warning them of a consequence that they appear not to be aware of,
or during a short-term impossibility to take rational decisions, and
- minimal in method, e.g., minimal physical restriction to prevent an unethical action"
),
! \. Love_based_criteria //@
__[.<= "Love-without-empathy does not necessarily lead to Optimality_of_satisfaction_repartition"
"Love-without-empathy does not necessarily lead to No_more-than-minimal_liberty-restriction"
"Love-without-empathy does not necessarily - nor even mostly - lead to Equity",
.=> "No love-based religion is ethical" ],
! \. Regret-experiencing_based_criteria,
! \. Survival_rate
__[.<= "Sacrifices for survival purposes do not necessarily - nor even mostly - lead to
Optimality_of_satisfaction_repartition (except when the victims consent and the
sacrifices are not too painful)",
.=> "The most frequently invoqued rationale in 'The 100' is not ethical" ],
! \. Natural_law/selection_rate
__[.<= "Natural law/selection does not necessarily - nor even mostly - lead to
Optimality_of_satisfaction_repartition" ].
"When choosing between decisions, an ethical decision-making system should consider the
previous ethical-or-unethical actions of the recipients of the decisions"
\. "When choosing between decisions by trying to find a global optimum for the recipients of the decisions
according to their preferred ethical criteria, for each agent that has committed an action not
respecting these criteria thus leading to a dissatisfaction of ?x units, an ethical decision-making
system can deduct this value from the individual-optimum-to-reach for this agent".
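/* A minimal Python sketch of this deduction rule (hypothetical names; the
dissatisfaction values are in the same utility units as the optimum):

def individual_optimum_to_reach(base_optimum, past_unethical_dissatisfactions):
    # deduct, from an agent's individual optimum, the dissatisfaction units
    # caused by this agent's past actions that did not respect the criteria
    return base_optimum - sum(past_unethical_dissatisfactions)

print(individual_optimum_to_reach(100, [20, 10]))  # 70
*/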
//About inadequacy of direct optimization (hence pure Utilitarianism, voting, ...):
"Any Decision-making_process_aimed_to_be_fair-or-ethical that works on
directly optimizing people's satisfaction (instead of optimizing wrt their criteria of satisfaction)
cannot be instance of {Decision-making_of_action_with_no_more-than-minimal_liberty-restriction
Fair-or-ethical_decision-making_process_wrt_equity,
Fair-or-ethical_decision-making_process_wrt_optimality_of_satisfaction_repartition}"
=> "Act-utilitarianism_ethicality is not subtype of
{Action_ethicality_wrt_no_more-than-minimal_liberty-restriction,
Action_ethicality_wrt_equity, Action_ethicality_wrt_optimality_of_satisfaction_repartition}"
"a Cooperative_system based on voting is inadequate to support any
{Decision-making_of_action_with_no_more-than-minimal_liberty-restriction
Fair-or-ethical_decision-making_process_wrt_equity,
Fair-or-ethical_decision-making_process_wrt_optimality_of_satisfaction_repartition}",
<= ("A vote may not be cast for reasons incompatible with no_more-than-minimal_liberty-restriction
or equity or optimality_of_satisfaction_repartition"
<= "a voter may be illogical or incompetent (not knowledgable, biased, ...) about the voted question"
"a voter may be unconcerned by the voted question (and hence may cast a vote in a random way)"
"a voter may be dishonest or may have unethical reasons to vote in a certain way (vanity, sadism, ...)"
)
"Even if all the votes are ethical and well-founded, if the criteria and assumptions that a vote is based on
are not associated with this vote, the aggregation function cannot exploit these criteria for complying with
i) default constraints/rules for aggregating the votes in an ethically meaningful way, or
ii) possible preferences of the voters for aggregating the votes".
"An agent that is ethical (wrt. at least Equity and Optimality) should do
every act that is logical/ethical (e.g., equitable) if (for example)
- the agent has committed to it (e.g. if that is the agent's job to do it), or
- the agent is the "most relevant" to do it, or
- the agent does not have a better use of its current/future resources (e.g. if the agent will soon die)". //@
"In decision making systems that can be supported by software,
the fewer vertical structures there are, i.e.,
the less some agents have a right to decide according to their own preferences
instead of having to follow the results of a commonly agreed decision making function
over the preferences of all people affected by the decision
wrt commonly agreed criteria for this decision,
the better wrt at least efficiency and optimality in achieving the commonly agreed criteria
(hence, probably, at least the following ethical criteria: no_more-than-minimal_liberty-restriction,
equity and optimality_of_satisfaction_repartition)
(notes / pre-conditions :
* the people affected by the decision need to record their preferences
(and be supported in doing so) for the software to take them into account;
* the software needs to be more efficient than a person to perform the decision making function
and, if necessary, act upon it (this is not a problem nowadays, and this implies that a
human general does NOT need to have any more decision making power than a common soldier,
even when very quick decisions need to be made);
* the software needs to satisfy the usual transparency and security constraints for the
agents affected by the decision to make sure that the commonly agreed decision-making
function was applied;
* some agents still need to be responsible for
- managing the cooperative decision-making process according to its rules,
- officially recording the decision,
- making sure it is enforced;
* the decision-enforcing agents need to have enforcing power over the enforced agents"
=> "In decision making systems that can be supported by software,
any vertical structure is unnecessary and bad wrt the pursued criteria", //power is always abused
<= [ ... ]
"An agreement that does not include enforceable guarantees that the agreement will be followed by the
agants that have signed it, may be worse than a non-agreement"
@@@ goals:
* prove that default preferences for cooperation/decision-making/ethical system
- are best for any (dis-)satisfaction-experiencer (even submissive masochists and meritocracy fans),
hence universal,
- have to be universal/scalable (dictatorship cannot be optimal)
in 0.1: Like-or-dislike-or-preference_description_compatible_with
- prove that adhering to default preference implies following
Decision-making_wrt_argumentations_and_selected-ethicalities-conform-preferences and
Decision-making_based_on_preferences_and_using_weights_to_avoid_tactical-voting_and_majority-dictatorship
//in Section 1.2:
Optimality_or_efficiency_or_slight-generalization-of-it_for_a_fair-or-ethical_process
/^ Ethicality_to_be_used_wrt_attribute __[believer: pm default]
"by definition, a 'best' cooperation/decision-making/ethical system
has to ensure that Optimality_of_satisfaction_repartition is satisfied"
https://www.atelier-collaboratif.com/
more complete: https://www.atelier-collaboratif.com/toutes-les-pratiques.php
"For cooperation, the more information sharing there are and the better it is, the better."
Scalable_cooperative_process .[1..* Agent ?ag] ?p
/^ Globally-positive-situation_according-to _(?ag) !
:= [ [?p part: an Information-sharing_related_process ?ip]
=> [?ip type: Scalable_information-sharing_related_process] ],
part=> only 0..* ^(Process /^ Globally_positive_according-to_most !
Pos. criteria:
coop/W \. OKS Communication
time: agent: instr:
- time of agents: -
- effort of agents: -
- instr
@@@
Globally-positive-situation_according-to
\. (Fair-or-ethical_process \. ^(Reporting object: an Unethical_act) Punishing_an_unethical_act).
mail
(0. cc public@intellectual-decency-checker-newsgroup.org for it to
store with relevant permissions (possibility for sender to remove?)
no since Usenet (i.e., newsgroups) is not free and retains only for a few days/months,
many have a lot of binaries )
1. JS script mailer daemon on your machine which does
GET last email content + (JS of) last structured additive wiki of exchanges
(and sends it to JA at http://www.intellectual-decency-checker-newsgroup.org which)
- sendsBack/additiveUpdate a new additive wiki on some server,
- email the logical errors + missing info in last email(s)
Usenet (i.e., newsgroup) is not free and retains only for a few days/months, has a lot of binaries
KB = blackboard =* //, no fictional
better = either truer or more preferred ; non sequitur ; advantage Mr X
bad coop: https://fr.quora.com/Quelle-est-votre-r%C3%A8gle-dor-dans-la-vie
https://fr.quora.com/Si-je-repr%C3%A9sente-50-du-b%C3%A9b%C3%A9-pourquoi-nai-je-pas-mon-mot-%C3%A0-dire-sur-lavortement
globally_better .[criteria, agregFct] e.g. ++(concision:less lines) -().
=> Maximizing this Logical and Ethical Aggregation and its Associated Aggregation of the Utilities of ?e
Take into account all relevant info (preferences, decision criteria, resources) for the decision : good
=> Collect all relevant info at least from people affected by the decision : good
(agent: ?decider / ?system) (method: transparently with allowed additions/corrections,
-> KBS, at least a wiki, not just e-mails since those are inadequate and take as long)
=> Give at least 2 weeks notice and 48h after each bettering alternative suggestion: good
ex h4 id="Decision_def_Evaluation" 1.2.1. Evaluation of Fairness: "decision making processes can be partially
ordered - or, more arbitrarily, fully ordered - wrt fairness-related_attributes
they have (or their value for these attributes, and the subtype relations
between the types of these attributes; in a nutshell, more is better)"
For Decision-making (Evaluating, ...) To Be "Fair", i.e.
- Consistent With a Logical and Ethical Aggregation of the
Logically-consistent Preferences and Knowledge (Proofs,
Observations, Beliefs) Of+About Each Agent Affected By the
Decision and Each Usable Resource, and
- Maximizing this Logical and Ethical Aggregation and its
Associated Aggregation of the Utilities (Advantages and
Disadvantages) of these Agents;
⇒ ODR-NG "For fair decision making, unfair actions should be
forbidden, hence prevented+retributed (idem for
unfair inactions inconsistent with commitments)"
decision : choice of the best alt.
old: 4.0.1. experience/preference/rule, positive/negative, preference/maximum for it
     4.0.3. aggregation/decision functions/methods, goals (total/avg/min/...), attributes, subtypes
1.0. fair_process input: facts + preferences about the content,
                         preferences about the decision process/result (hence the 1.2)
     output: decision,
     method: how + what is maximized (experience/preference/rule, positive/negative,
             state/preference attributes, total/avg/min/...: aggregation function)
1.2. what (input/parameters about the output_state) to maximize/prefer in the decision process:
     - what is a (preference for / maximum of an) experience/preference/rule
     - what is positive/negative
     - what are the result state attributes
     - what are the aggregation/decision function/method goals (total/avg/min/...), attributes, subtypes
http://www.webkb.org/kb/nit/ethicalDecisionMaking/
~/public_html/WebKB2/kb/it/o_KR/p_cooperation/CecileAnnotations.odt
To: fontaine.val@gmail.com, subject: "question et morale", date: Jan 11, 2021, 1:24 AM
A moral may be represented by a function which,
given a set of actions, a set of persons (that would experience the consequences of the actions),
a way to know or estimate what individually (dis-)satisfies each of these persons, and
if available, a way to estimate which of the persons did what (and, if possible, to whom)
in terms of potentially moral/immoral actions,
returns the subset of actions (from the given set of actions) that
equivalently maximize the past+expected global/collective satisfaction of the persons, given that
* dissatisfaction may be represented as a negative satisfaction;
* to avoid "majority dictatorship":
- if an action causes a hardly avoidable great dissatisfaction (pain at least
equivalent to what 2nd degree burns or drowning feels like for most persons),
this dissatisfaction is given an infinite weight,
- if an action causes the death of a person, this dissatisfaction is given a weight
equal to what this person evaluates the cost of "any" painless death in terms of physical pain
(and then, the previous rule may also apply),
- ... other more mathematical rules (to refine);
* if available, the knowledge of who did what (to whom) should be taken into account by
the above cited maximisation to try to counterbalance immoral actions, for their authors
and their recipients.
Note: the main problem is to (e)valuate what (dis-)satisfies whom and then to aggregate these values
  (aggregations are generally arbitrary as soon as there is more than one criterion, and here
  there may be dozens or hundreds); however, this is not a big problem for people since
  they (can) only do extremely rough valuations and aggregations.
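The moral function described above can be sketched as follows (all names and the severity
test are assumptions for illustration; the "other more mathematical rules" and the
who-did-what counterbalancing are not covered):
  import math

  def moral_choice(actions, persons, satisfaction, severe):
      """actions/persons: ids; satisfaction(a, p) -> float (negative = dissatisfaction);
      severe(a, p) -> True if action a causes person p hardly avoidable great pain."""
      def collective(a):
          total = 0.0
          for p in persons:
              if severe(a, p):
                  return -math.inf   # infinite weight: avoids "majority dictatorship"
              total += satisfaction(a, p)
          return total
      best = max(collective(a) for a in actions)
      return [a for a in actions if collective(a) == best]  # equivalently maximal actions

  # Hypothetical example: "x" pleases a majority but severely harms one person.
  persons = ["p1", "p2", "p3"]
  sat = {("x", "p1"): 5, ("x", "p2"): 5, ("x", "p3"): -9,
         ("y", "p1"): 1, ("y", "p2"): 1, ("y", "p3"): 0}
  print(moral_choice(["x", "y"], persons,
                     lambda a, p: sat[(a, p)],
                     lambda a, p: (a, p) == ("x", "p3")))   # ['y']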
/* no decision without giving the persons concerned
   not only the opportunity to prove that a better solution exists,
   but also a counter-proof for any pseudo-better-solution they provide.
   Possible choice method: //e-mail of March 17
   In case of divergences on choices that are only matters of preference,
   I will vote for the majority preference (among the preferences expressed
   on the mailing list cited above), by show of hands when possible.
   In case of divergences on choices that are not only matters of preference,
   I will vote for the majority opinion among those whose counter-arguments
   have not themselves been logically counter-argued. Indeed, transparency,
   coherence and some effort (except for those who abstain) are required
   to reduce the problems associated with any voting method and, in
   particular, "majority" votes. However, if you choose
   - via this preceding choice system - another choice system,
   I will use the latter.
*/
Each (counter-)choice/argument must increase the Agent-s_kb criteria, not decrease them.
Proportionate response/restriction: no hand-cutting for bread stealing, no "for the example"!
ethical => logical + maxTruth, maximize some ethical attributes,
  + cannot go against others' preferences unless paternalistic
ODR-NG "For fair decision making, actions that are not fair
        (i.e., that decrease the global utility) must be forbidden
        (inactions that increase the global utility should be mandatory)"
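One toy reading of this ODR-NG rule, with an assumed utilitarian global_utility over
world states (only an illustration of the "decreases/increases the global utility" test;
the state model and names are hypothetical):
  def forbidden(action, state, global_utility) -> bool:
      # unfair, hence forbidden: the action strictly decreases global utility
      return global_utility(action(state)) < global_utility(state)

  def mandatory(action, state, global_utility) -> bool:
      # refraining would be an unfair inaction: the action strictly increases global utility
      return global_utility(action(state)) > global_utility(state)

  state = {"a": 1.0, "b": 2.0}                  # hypothetical per-agent utilities
  gu = lambda s: sum(s.values())                # utilitarian aggregation (one choice among many)
  help_b = lambda s: {**s, "b": s["b"] + 1}     # hypothetical action
  print(forbidden(help_b, state, gu), mandatory(help_b, state, gu))  # False True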
3.2. Ethical Aggregation Rules
3.2.2. For Fair Evaluations
In evaluations, some points should not be under-evaluated
\. In evaluations, some points should not depend on others
3.2.3. General Informal Rules
In (linear) explanations, always begin with the conclusion.
Given their role, policy setters should set policies that forbid them to be judge and jury.
People should follow the rationality obligation and the information obligation.
Someone cannot use a rule (as justification) if he disagrees with it
  or if he violated it and has not paid his debt;
someone should not be judged as severely for violating a rule he disagrees with
  as for violating a rule he agrees with.
Tools must have the "you have not corrected everything" feature as default and,
  if disabled, this should be associated with (hence accessible from) things made with these tools.
Albert Hirschman devoted a reference book to this question, entitled
Exit, Voice and Loyalty. He concluded (a bit like the faculty members elected
to the board did this June 28) that Exit, withdrawing from the game, is the only
means of pressure left when speaking up is never heard under an autocratic
or controlling regime.
Pareto efficiency. The result is that the world is better off in an absolute sense and no one is worse off.
logical tolerance_to_bad_thing /^ bad_thing
physical_pain/frustration/preference/right > mental_pain/frustration/preference/right (ego)
noise > speech/appearance
https://en.wikipedia.org/wiki/Liberal_paradox when "nosy" preferences = neg externalities
preferences only about oneself + "individual rights"(|"Minimal liberalism"?=!dictatorship)
+ for each strong preference, a weak one for the exclusions //this is my original contribution
"The ultimate guarantee for individual liberty may rest not on rules for social choice but on developing
individual values that respect each other's personal choices."[1] Doing so would amount to limiting particular
types of nosy preferences, ... or reciprocally accepted nosy contracts/preferences!!
//EQUALITY //proportionality punitive-equality fairness
pain+pleasure equality: pref that is agreed/shared + consistent/ParetoEfficiency, hence ethical/right
(Ethical_decision = desirable by all/most !=only Pareto-efficient: !distributional equity
\. (Ethical_decision_wrt_equality
\. (Ethical_decision_wrt_equality_wrt_every_consistent_preference
/^ "Decision that takes into account every consistent preference,
positive ones (rights, pleasure, freedom, ...),
negative ones (...) //for pain-conscious agents"
) )
) //non-utilitarian criterion: rights (SEE BELOW), property, need
Commonly-viewed-as-fair_reaction_after_some_wrong-doing
//utilitarianist goal: Deterrence Rehabilitation Security/Incapacitation
//other common goal in legal theory: Retribution Reparation Denunciation
voluntary (non-coerced) transactions always have a property called Pareto efficiency. The result is that the
world is better off in an absolute sense and no one is worse off.
Default_preference:
\. Envy-freeness
https://www.researchgate.net/publication/225221212_On_Dividing_Justly
agents want positive attributes,
  more good things up to a point, fewer bad things, diminishing marginal utility
the distribution mechanism should exhaust the bundle/resource exactly
- differences in needs, tastes (or capacity to enjoy various goods) and beliefs //truthfully
- differences in endowments;
- prior claims: (v) differences in effort, in productivity, or in contribution;
  (iv) differences in rights or in legitimate claims.
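Envy-freeness, the default preference listed above, has a simple executable characterization
(representation assumed: an allocation maps each agent to a bundle, and each agent's utility
function can be evaluated on any bundle):
  def envy_free(allocation: dict, utility) -> bool:
      """allocation: agent -> bundle; utility(agent, bundle) -> float.
      Envy-free: no agent values another's bundle strictly more than its own."""
      return all(utility(a, allocation[a]) >= utility(a, allocation[b])
                 for a in allocation for b in allocation)

  # Hypothetical example: two agents, additive utilities over items.
  values = {"alice": {"cake": 3, "book": 1}, "bob": {"cake": 1, "book": 2}}
  u = lambda agent, bundle: sum(values[agent][item] for item in bundle)
  print(envy_free({"alice": {"cake"}, "bob": {"book"}}, u))  # True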
//TRANSPARENCY to CONTROL that decision respect RIGHTS
RIGHTS are AGREED+LOGIC(!arbitrary) PREFERENCES satisfying EQUALITY
https://en.wikipedia.org/wiki/Natural_justice
Rule against bias Right to fair hearing + legal representation
decision and reasons for decision
transparency of the processes by which decisions are made
One logically cannot (and hence should not) disapprove of something coherently based on one's own
preferences and beliefs.
Preferences_and_decision_rule either imposed or directly/indirectly agreed on by
  aggregation & consequentialism: total or average welfare caused.
The social contract tradition states that justice is derived from the mutual agreement of
everyone concerned: individuals have consented, either explicitly or tacitly, to surrender
some of their freedoms and submit to the authority of the ruler (or to the decision of a
majority) in exchange for protection of their remaining rights. John Locke and Jean-Jacques
Rousseau argued that we gain civil rights in return for accepting the obligation to respect
and defend the rights of others, giving up some freedoms to do so. Consent of the governed
via elections; will theory of contract;
our desire to retaliate against those who hurt us, or the feeling of self-defense and our ability
to put ourselves imaginatively in another's place, sympathy.
excludes selfish bias.
Harsanyi adds two caveats. 1. People sometimes have irrational preferences. To deal with this,
Harsanyi distinguishes between "manifest" preferences and "true" preferences.
2. antisocial preferences, such as sadism/envy/resentment have to be excluded.
Harsanyi achieves this by claiming that such preferences partially
exclude those people from the moral community
Trust? Performance, job satisfaction and organizational commitment.
fairness and the transparency of the processes by which decisions are made
the balancing approach to procedural fairness might in some circumstances be prepared to tolerate or accept
false positive verdicts in order to avoid unwanted costs (political)
The participation model
Particular types of private actors (especially professional associations, unions, hospitals, and
insurance companies), due to their overwhelming economic power within particular fields, cannot
arbitrarily expel members or employees or deny persons admission for no logical reason;
they are obligated to provide a rudimentary form of procedural due process (in the form of notice and a hearing)
https://ethelo.com/blog/whats-fair-in-group-decision-making-and-how-do-you-achieve-it/
fairness: when all parties involved feel roughly the same sense of satisfaction about it.
Even the most fair-minded among us are still subject to bias; civil conversation can easily
descend into unproductive arguments if emotions enter the picture and are left unchecked.
Even the most altruistic people face the temptation to put self-interest above the common good.
discuss/comment&voteOnGivenQuestions
https://park.ethelo.net/projects/5383ff1b2e6db8f6aa000027/vote/project-user/background
https://www.rand.org/randeurope/research/projects/understanding-how-organisations-ensure-fair-decisionmaking.html
- little agreement on what constitutes 'fair decision making'.
- strong leadership and demonstration of emotional intelligence; an open and transparent organisational culture;
and a clearly defined organisational structure.
Collective Decision Making http://www.unicaen.fr/recherche/mrsh/PACD2019 click on Decision Making 2019.pdf
changes in preferences must be justified/proven if they bring a gain
//see optimal voting, contract, preference-based (not arbitrary) //utility vs. natural_rights
//Entity_fairness_criteria) //calculated wrt processes
bill "Pour une République numérique" (For a Digital Republic) + oath; Mail/k/coop/15
"A fair decision-making process should be consistent with a logical and ethical
aggregation of the logically-consistent knowledge (preferences, beliefs, ...) of+about
each agent affected by the decision"
_<=> AND_set
{ ("A fair decision-making process should be such that the agent appointed to be the
decision maker should not matter as long as this agent has the listed technical
skills (not the human-related skills) and has access to the relevant tools to make
the decision"
<=> "A fair decision-making process should not depend on emotions - nor personal
preferences - of the decision maker",
<=_ "A fair decision-making process should be rational" )
("A fair decision-making process should be ethical and hence take into account
the logically-consistent knowledge (preferences, beliefs, ...) of+about each agent
affected by the decision" )
}.
NO: anonymous authorised vote
without the possibility of permanent control/change of one's vote
- you associate (e.g., within a doc.)
an encryption of your vote with the public voting key
with a signature (with your private key)
- you send the association to a voting clerk (who does not have the private voting key)
(pb: anyone can send anything)
- the voting clerk sends your encrypted vote to the voting official
  (pb: the voting clerk could send anything)
- the voting official decrypts all the votes and publishes them
  (pb: the voting official could publish anything)
OK: anonymous authorised vote
with the possibility of permanent control/change of one's vote
- to every registered/checked member of an org,
the voting server sends a same org_pseudoCreationPublicKey
(the electronic equivalent of a blackbox filled with a (more than enough) list
of voting pseudos)
- using it, every member anonymously registers a pseudo on the server
(infinite list of voting pseudos or rules for unique pseudo creation)
(pb: what to do when there are more/less registered pseudos than org members ?)
(if possibility to ask for another org_pseudoCreationPublicKey or to keep old pseudos,
how to check that only allowed persons vote and only once ?)
- the pseudos (re-)vote publicly
IN: my_IDinOrg, my_publicKey,my_privateKey, my_pseudo
org_publicKey,org_privateKey, org_listOfAcceptedPseudos(orgSigned),
org_votingProcessID
OUT: my_pseudo in listOfAcceptedPseudos
when decoded with org_publicKey, it gives org_votingProcessID
-> encoded with org_privateKey -> known by org ->
??: each member gives org 1 pseudo encrypted with org_votingProcessSemiPublicKey
a my_org_votingProcessPrivateKey
I encode with my_org_votingProcessPublicKey the org_votingProcessID
(org can decode with its org_but cannot know that I encoded ?)
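One standard way (my suggestion, not stated above) to realize the "anonymous authorised vote"
with pseudos is an RSA blind signature: the org signs a member's pseudo without seeing it,
once per checked member, and the signed pseudo can later vote publicly while staying
unlinkable to the member. A toy sketch (tiny keys, no padding; a real deployment needs
>=2048-bit keys and a vetted blind-signature scheme):
  import hashlib, math, secrets

  p, q = 1000003, 1000033                      # toy primes for the org's RSA key
  n, e = p * q, 65537
  d = pow(e, -1, (p - 1) * (q - 1))            # org's private signing key

  def h(pseudo: str) -> int:                   # hash a pseudo into Z_n
      return int.from_bytes(hashlib.sha256(pseudo.encode()).digest(), "big") % n

  # Member side: blind the pseudo with a random factor r.
  m = h("voter-pseudo-42")
  r = secrets.randbelow(n - 2) + 2
  while math.gcd(r, n) != 1:
      r = secrets.randbelow(n - 2) + 2
  blinded = (m * pow(r, e, n)) % n             # the org only ever sees this value

  # Org side: checks membership, signs the blinded value (once per member).
  blind_sig = pow(blinded, d, n)

  # Member side: unblind; sig is a valid org signature on the hidden pseudo.
  sig = (blind_sig * pow(r, -1, n)) % n
  assert pow(sig, e, n) == m                   # anyone can verify the pseudo is authorised
This addresses unlinkability and "one signed pseudo per member", but not the
more/fewer-registered-pseudos-than-members problem raised above.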
Cécile's episode
  about Thesie: "loss of post/reassignment as soon as the optimum is violated"; before: follow the procedure
  about Emmanuel: "there exists a case where it is OK" does not contradict "there exists a case where it is not OK"
  cooperative behavior suspended by revenge
  uncooperative when a contract was accepted: judging/asking why / discussing/refusing orders that have no influence on oneself
  optimal/scalable better for all/commitment
if there is no prefKB, relevancy has to be derived by action, based on informal communications
group oral feedback: lies/watermarking, superficial/incomplete, who knows what, !anonymous
negative...
\. (mildly-to-severely_dishonest
     \. legally_dishonest
        (dishonest_by_uncooperativity
          \. (not_having_set_an_environment_preventing_forgetting_to_answer
               \. not_answering
             ) __[<= ...]
        )
   )
no matter how it was arrived at: if the job is well done, the evaluation should be objective
right to give an electric shock if the other agrees and if the sender has more (relative to age) than the receiver
sensor/symptom interpretation problem ; Spock ; emotion/logic are important but insufficient
relativization of ego, ad hominem, !insult
no involvement/denunciation = accomplice
https://mimove.inria.fr/members/valerie-issarny/
https://www.eventbrite.com/e/seminaire-le-numerique-au-service-de-la-democratie-tickets-92713830563
"logicracy" the term has already been used.
but no formal or semi-formal approach, nor direct democ
(-> people meritocracy/elitocracy according "merits" near Oliggarchy)
* http://www.onlineopinion.com.au/view.asp?article=19909 Manolopoulos M. (2018)
democ: "the people" leading "the people" principle of one person, one vote
governance by those who think the best, i.e., by the most thoughtful thinkers
"Logicracy Party."
* https://www.linkedin.com/pulse/can-we-rise-above-democracy-pallavi-vachaspati/
voting rights proportionate to the contributions of each individual citizen,
Financial, Intellectual, Ethical
changing the rules to replace meritocracy with the principle of cosy arrangements between friends.
Even "idea meritocracy" is just (besides the "radical truthfulness+transparency" and
algorithmic decision-making via e.g.the "dot collector") some weighting
of ideas by the "believability" of their authors because "who knows who is right"
still vertical society: people meritocracy
https://www.linkedin.com/pulse/key-bridgewaters-success-real-idea-meritocracy-ray-dalio/
https://www.ted.com/talks/ray_dalio_how_to_build_a_company_where_the_best_ideas_win
does stress-test opinions to make people search for objective reasons and for what others like on
particular predefined criteria (a few dozen attributes, e.g., "open-mindedness and assertiveness",
creative, unreliable); see the list at 7min04
25-30% of people do not accept it; most take 18 months to adapt to / like this approach
guardrail people; accountability; do not lower the bar; who is the responsible party?
https://inside.bwater.com/publications/principles_excerpt 23p
15: clearly-stated principles that are implemented in tools and protocols so that the
conclusions reached can be assessed by tracking the logic and data behind them.
static.klipfolio.com/ebook/bridgewater-associates-ray-dalio-principles.pdf 106p
G Ray Dalio principles pdf protocols
http://enacademic.com/dic.nsf/enwiki/20030 Voting system http://enacademic.com/dic.nsf/enwiki/567840
http://enacademic.com/dic.nsf/enwiki/53561
http://west.uni-koblenz.de/en/research/e-democracy
Cooperation protocol named logicocracy or "rationality-enhanced democracy",
since it is also a method of governance.
* it is one way to do direct democracy
* However, although it does not depend on the size of the community using it,
  when its original part - its reliance on proof - cannot be used,
  another direct-democracy method must be used, hence
  - not against local laws
  - just a way to use rationality when possible,
  - not for big national/political debates since these cannot be based on proof?
in logicocracy, any group of individuals can follow any rule they ALL want (unanimity);
if there is no unanimity, the logicocracy rules apply,
regardless of ownerships (nonsense of "my place, my rules")
national law can be ignored if
* proven incoherent / less optimal and
* unanimity (for a task/place/...) for that option
//logicocracy area delimited by area marking ?
consensus (compromise) does not mean unanimity.
Unlike capitalism (all ownerships to some individuals until they sell) and
socialism (all ownerships to state)
should working on a task entitle
* its agents to a fair repartition of revenues (benefits) from the task
* temporary ownership of the instruments of the task ?
my place
IF all research task/method inputs/outputs/times/... are (link-)compared (-* no positional argument)
then 1) the knowledge-provider evaluation may be done via the number of new relations
     2) the method/model-inventor evaluation may be done via
        the number of bested criteria for the task/model, plus for its consequences
whistle-blowers (lanceurs d'alertes); 100% legal & principled ; tactful or not
spatial obj/action must have place ; actions ...; tool ...
no precision : 0 for eval; the types I need for eval/repr are ...
not KRL-dependent since KRLO/IKL
always indicate the dates (min+max / exact) of processes (meeting/decision)
where:
- cooperation, not competition ?
- cf. end of page
Appel Atelier SI et De'mocratie INFORSID 2017.pdf
@@@@https://www.irit.fr/~Umberto.Grandi/teaching/directdem/ reading group on e-Democracy
http://recherche.noiraudes.net/fr/
Fair Allocation of Indivisible Goods
http://strokes.imag.fr/whale3/
Umberto Grandi + Sylvain Bouveret (Ensimag, Grenoble-INP) - Whale, a platform for
collective decision-making
Whale3 is a web application dedicated to collective decision making based on voting theory
www.irit.fr/~Umberto.Grandi/e-democracy2017/
Toulouse e-Democracy Summer School
Following the summer school, a special issue will be edited in the International Journal
of Decision Support System Technologies, edited by Guy Camilleri, Guillaume Chèze and
Florence Dupin de Saint-Cyr. Students will submit an extended version of their paper to
the guest editors. All the accepted submissions will be edited (after final validation)
in a book with ISBN number published by IRIT institute.
Guy Camilleri, Toulouse University, IRIT
Guillaume Chèze, Toulouse University, IMT
Florence Dupin de St-Cyr, Toulouse Univ., IRIT
Pascale Zaraté, Toulouse University, IRIT
https://www.researchgate.net/profile/Jeremy_Pitt/publications
./agentsPublics_protectionFonctionnelle_deontologie.pdf
see also in RIFIA 2015 trick that solves many constraints
https://pfia2017.greyc.fr/ethique/presentation //ethic
Emiliano Lorini (Université Paul Sabatier, IRIT, LILaC team): A formal
theory of moral agents. I will present a theory of moral attitudes and
emotions, including moral values, the feeling of guilt, moral pride,
reproach and moral approbation. Two formalizations of the theory will
be proposed. The first uses modal logic, while the second uses game
theory. I will show how these two formalizations can be used in
practice to build moral agents.
Gauvain Bourgne (CNRS & Sorbonne Universités, UPMC Université
Paris 6, LIP6): Logical approaches to the modelling of ethical
reasoning - joint work with Fiona Berreby and Jean-Gabriel Ganascia.
This presentation first reviews some approaches to the modelling of
ethical reasoning through logic programming, presenting some of the
modelling and expressiveness issues raised by ethical questions,
before proposing some methodological principles and a modular
framework for abstracting away from the canonical ethical-dilemma
case studies, so as to propose more general models of the main
ethical theories. We will insist in particular on the need to
properly identify and differentiate, within the model, what pertains
to the dynamics of the system and what pertains to ethical judgment.
For ethical/efficiency/... purposes, it should not matter whether a company is public or private.
https://en.wikipedia.org/wiki/Natural_law#Hobbes
https://en.wikipedia.org/wiki/Group_decision-making#Formal_systems
https://en.wikipedia.org/wiki/Group_decision-making#Group_discussion_pitfalls
https://en.wikipedia.org/wiki/Collaborative_decision-making_software
https://en.wikipedia.org/wiki/Deliberation
"L'éthique au CNRS, à l'heure du numérique" (Ethics at CNRS in the digital age), interview with Jean-Gabriel Ganascia
www.societe-informatique-de-france.fr/wp-content/uploads/2016/11/1024-no9-Ganascia.pdf
https://en.wikipedia.org/wiki/Blockchain_(database)
https://en.wikipedia.org/wiki/Consensus_(computer_science)#Some_consensus_protocols
!= https://en.wikipedia.org/wiki/Consensus_decision-making
Agreement vs. consent, epistemic
https://en.wikipedia.org/wiki/Group_decision-making#Formal_systems
https://en.wikipedia.org/wiki/Group_decision-making#Group_discussion_pitfalls
Decentralised consensus
Ideas: exchange from/to all, including of info origin (contexts)
  (I added info logical derivation as origin + consistency constraints wrt commitments
   AND some form of coherence: see now in 1.0.2.2 or before)
https://en.wikipedia.org/wiki/E-democracy#Government_models
https://en.wikipedia.org/wiki/Collaborative_e-democracy
https://en.wikipedia.org/wiki/Decentralized_autonomous_organization
https://en.wikipedia.org/wiki/Ethereum https://en.wikipedia.org/wiki/Smart_contract
J. Pitt, Distributive Justice for Self-Organised Common-Pool Resource Management
  p. 8, 9 begin, 9 end: resource appropriation, conflict resolution, with head and appeals by voters
https://en.wikipedia.org/wiki/Borda_count#An_example_2
** https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem
pre-specified criteria + 3 "fairness" criteria +
cf. "limiting the alternative set to two alternatives" + cf. end
** https://en.wikipedia.org/wiki/Voting_system#Evaluating_voting_systems_using_criteria
** https://en.wikipedia.org/wiki/Range_voting
** http://zesty.ca/voting/sim/
Impressive. Hidden rules in the "autonomic Mechanisms/Equations"?
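For concreteness, the Borda count linked above in a few lines (my illustration, not taken
from the cited pages): each ballot ranks all n candidates; a 1st place is worth n-1 points,
a 2nd place n-2, ..., the last place 0.
  from collections import Counter

  def borda(ballots):
      """ballots: list of rankings (best first) over the same candidates."""
      scores = Counter()
      for ranking in ballots:
          n = len(ranking)
          for place, candidate in enumerate(ranking):
              scores[candidate] += n - 1 - place
      return scores

  # Hypothetical ballots over candidates A, B, C:
  ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
  print(borda(ballots).most_common())   # [('B', 7), ('A', 6), ('C', 2)]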
ACM Transactions on Internet Technology http://toit.acm.org/announcements.cfm
Call for Papers for a Special Section on COMPUTATIONAL ETHICS AND ACCOUNTABILITY
Intelligent Systems for Business Ethics and Social Responsibility
by generalization on fairness+effectiveness criteria:
provably minimallyComplete+consistent (and hence not voted, not un-votable and auto-applying)
democratic/fair meta-meta-laws about decision-making (subtyped by meta-laws/laws about decision
making; laws subtype meta-law decision-making and prescribe decision making except for laws;
this permits decision-making not to follow the meta-meta-laws if
fairly/democratically decided):
no decision for others (the concerned people) without
i) informing them in advance of the fact, the rationale (criteria, ...), the
   decision-voting method (or the method criteria and their weights), ...;
   at this time (before the decision), the laws(/statutes) can change if the laws or the meta-laws
   allow the laws to be changed before a decision; if so, they are always "democratic"/representative
ii) a counter-proof that other proposed solutions are more "globally better (fair, ...) over time",
iii) allowing them to correct/change the decision (and its facts, method, ...)
     for a proven "globally better one over time"
(greater good of the greater number). "Equals should be treated equally, and unequals
unequally, in proportion to the relevant similarities and differences" [Aristotle, 350 BC].
"devoir de coherence" sub-criteria of accountability ?
Two kinds of preferences: 1) primivite/un-argumented 2) interre'tation based on facts
The 2nd ones cannot change without new elements
remove possibility to decide against the will of others
\. every coop rule? e.g., send mail to all rather than representatives
evaluation wrt each person: equality of points
  with each person, on each type of event, use the average between
  the evaluation with one's own event grid (unchangeable/dynamic ?) and
  the evaluation with the other's grid (but then, how to solve disagreements
  on the event types: average ?)
Cour des comptes: its opinions are only consultative
if a decision is i) accepted by some, not others, and ii) without consequences for all,
then it is i) enforceable on those who accept it, not enforceable on all.
If unanimity, at least vote on who gains/loses what,
else explore the differences.
INFORSID workshop "Systèmes d'information et de décision et démocratie"
  (Information and decision systems, and democracy), 2017, 2018
cf. Mail/k/coop
See ~/Mail/k/coop/8: deliberations, at LAMSADE
reducing nuisance power of everyone "just because they can" (muscle, role, ...)
power/capacity to cause (capability of causing) trouble/damage:harm
research question: reducing nuisance power (without reducing ...)
workflow: control if some tasks are actually/correctly/timely made
here: control if decision tasks are correctly made
if info collected, not ignored
lever: public, decision void (info on decision should be easy to find)
How to avoid spam / defense by a multitude of illogical arguments? (sketched below)
* no value to arguments marked as illogical if the pseudo is not confirmed (reputation)
* marking an argument as illogical and losing is bad for reputation
* max 3 branches of unresolved arguments marked as illogical per pseudo
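These three anti-spam rules could be enforced with a small reputation record per pseudo,
e.g. (class name, fields and the reputation arithmetic are assumptions):
  MAX_OPEN_BRANCHES = 3

  class Pseudo:
      def __init__(self, name: str, confirmed: bool):
          self.name, self.confirmed = name, confirmed
          self.open_branches = 0       # unresolved "marked as illogical" branches
          self.reputation = 0

      def may_mark_illogical(self) -> bool:
          # rule 1: unconfirmed pseudos carry no weight; rule 3: at most 3 open branches
          return self.confirmed and self.open_branches < MAX_OPEN_BRANCHES

      def resolve_branch(self, won: bool) -> None:
          # rule 2: losing a "marked as illogical" dispute is bad for reputation
          self.open_branches -= 1
          self.reputation += 1 if won else -1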
2017fin_presEsiroiToLycees.pdf
www.expat.com/fr/expat-mag/1889-la-democratie-dans-le-monde.html
http://www.expat.com/fr/expat-mag/1882-les-meilleures-destinations-pour-les-nomades-digitaux.html
principle of non/least aggression -* of contentment optimization
The person in charge is no longer someone who can choose without justification+approval
(-* who has power) but someone who finds and proposes contentment-optimizing solutions
principle of least aggression (POLA)
http://blog.tenthamendmentcenter.com/2013/08/why-we-need-a-constitution/
"the Constitution is not a self enforcing document" it is a consistent set of limiting
principles which can be imposed upon government.
principle of least possible aggression (PLC: Path of Least Coercion)
https://groups.yahoo.com/neo/groups/Libertarian/conversations/topics/61262
goal of anarchists and libertarians but, for the latter, reducing only 90% of government is
sufficient
http://www.ronpaulforums.com/archive/index.php/t-333523-p-2.html TO PRINT along with
http://www.ronpaulforums.com/showthread.php?333523-The-Fundamental-Principles-of-Liberty&s=\
c949701fe4fd92e9dd40eb4cc1fb54a2
The enemy of liberty is ALWAYS coercion, not aggression
* No (choice between) constructive/non-coercive decisions/actions
  if there is a "proved objection"
* No destructive/coercive/segregation action (war, pollution,
  pre-emptive action to prevent a potential attack, anything that you would not do if
  the same were done to you at the same time, ...)
  if the decision is unproved (even if there is no objection because the attackee cannot object) and
  if there is no equivalent loss in balance to all beneficiaries,
  unless in self-defense or defense of these rules or other things \. neg ?
BUT what about eating non-threatening animals' meat?
  bred/!bred distinction (but what is the difference with slavery?) + survival in the wild
  (but what is the difference with mentally deficient humans?) ?
  are the two together discriminant?
verbal_punition /^ coercion ? Yes, but the least one wrt the future. What is an "advantageous"/"due" future ?
punition_by_advantage_removal ! /^ coercion ? What is an "advantage" / a "due" ?
Silly argument attempts showcasing irrationality:
- I am doing this for your own good
- everyone else does; that's the way we always did, ...
- he does ... since he (accidentally) belongs to ... (species/group/...)
- my home, my rules
!predef, !decision if !better than !decision, delay the choices
!redund unless explicitly stated
The ideal submission should provide evidence that context
improves the performance of systems on real-world applications and/or
provides useful insights and explanations on systems' output.
minimization of suffering, maximization of satisfaction
  *= minimization of ``power that does not maximize satisfaction'',
  *= maximization of `efficiency that maximizes satisfaction' (=* not inconsistent with satisfaction)
"Pref ?1: the more satisfied and less logically dissatisfied agents there are
and the more satisfied and less logically dissatisfied they are, the better"
\. ("the bigger the average of 'satisfactions and squares of logical dissatisfactions',
the better, with 'square' actually referring here to any non-linear function that
1. is of the form (n>1)^(linear_fct(number_of_satisfactions)), and
2; is `logically_adopted' by the `target_agents'
so that the bigger the intensity of the logical dissatisfaction of an agent,
the more it is taken int account";
/^ "the logical dissatisfactions should be more taken into account than satisfactions"
);
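A loose numeric sketch of this preference (my reading; the names, the base and the averaging
are assumptions, with base**d - 1 standing in for the required super-linear function so that
a zero dissatisfaction contributes nothing):
  def global_score(agents, base: float = 2.0) -> float:
      """agents: list of (satisfaction, logical_dissatisfaction) pairs, both >= 0;
      a bigger score is better; dissatisfactions are weighted super-linearly."""
      assert base > 1
      return sum(s - (base ** d - 1) for s, d in agents) / len(agents)

  # Hypothetical example: one strong logical dissatisfaction outweighs
  # several mild satisfactions.
  print(global_score([(3, 0), (3, 0), (0, 4)]))   # (3 + 3 - 15) / 3 = -3.0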
=* "preference should be made explicit and not be contradictories, e.g. via priorities"
"It should not matter who decides"
\. "'who (which agent) makes decisions on what' should have as little influence as possible
the global satisfaction of the people"
=* "the nuisance power of a decison maker should be minimized"
=* "an area decison maker not proposing/making decisions in this area should not affect
the global satisfaction of the people"
"the more persons a decision serve, the better"
"logico-democracy, not indirect democracy nor autocracy"
\. ne{"everyone should be able to propose a decision project",
"any proposed decision project should be adopted IFF if it is
proved (or un-argued) optimal wrt the pKBs
- incl. preferences between criteria - of the persons affected by the decision,
with preference for the project proposed by the decider"
=* "a decision project should indicate the set of criteria it is meant to satisfy",
\. ∀?agent ...
any evaluation (decision) should be invalidable by a counterproof within
15 days
=* any exam ...
*/