id | metadata | text
---|---|---
proofpile-shard-0030-0 | {
"provenance": "003.jsonl.gz:1"
} | Question
# Find the value of $m$ and $n$ using cross multiplication method:$3m+n=15$ and $m+2n=10$.A.$(4,3)$B.$(-4,3)$C.$(-4,-3)$D.$(4,-3)$
Verified
Hint: We have been given two equations. Write the equations in the form of $ax+by+c=0$. Then use $\dfrac{m}{{{b}_{1}}{{c}_{2}}-{{b}_{2}}{{c}_{1}}}=\dfrac{n}{{{c}_{1}}{{a}_{2}}-{{a}_{1}}{{c}_{2}}}=\dfrac{1}{{{a}_{1}}{{b}_{2}}-{{a}_{2}}{{b}_{1}}}$. You will get the answer.
The multiplication of integers (including negative numbers), rational numbers (fractions) and real numbers is defined by a systematic generalization of this basic definition.
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement; for instance, multiplying the lengths of the two sides of a rectangle gives its area. This is the subject of dimensional analysis.
Solving proportions is simply a matter of stating the ratios as fractions, setting the two fractions equal to each other, cross-multiplying, and solving the resulting equation. The exercise set will probably start out by asking for the solutions to straightforward simple proportions, but they might use the "odds" notation.
Specifically in elementary arithmetic and elementary algebra, given an equation between two fractions or rational expressions, one can cross-multiply to simplify the equation or determine the value of a variable. Each step in these procedures is based on a single, fundamental property of equations. Cross-multiplication is a shortcut, an easily understandable procedure that can be taught to students.
The method is also occasionally known as the "cross your heart" method because a heart can be drawn to remember which things to multiply together and the lines resemble a heart outline.
In practice, though, it is easier to skip the steps and go straight to the "cross-multiplied" form.
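For instance, to solve the simple proportion $\dfrac{x}{10}=\dfrac{3}{5}$, cross-multiplying gives $5x=30$, so $x=6$.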
So we have been given two equations.
$3m+n-15=0$, which is of the form ${{a}_{1}}m+{{b}_{1}}n+{{c}_{1}}=0$ ………… (1)
$m+2n-10=0$, which is of the form ${{a}_{2}}m+{{b}_{2}}n+{{c}_{2}}=0$ ……….. (2)
Now using the formula for cross multiplication,
$\dfrac{m}{{{b}_{1}}{{c}_{2}}-{{b}_{2}}{{c}_{1}}}=\dfrac{n}{{{c}_{1}}{{a}_{2}}-{{a}_{1}}{{c}_{2}}}=\dfrac{1}{{{a}_{1}}{{b}_{2}}-{{a}_{2}}{{b}_{1}}}$
So from equation (1) and (2),
$\dfrac{m}{1\times (-10)-2\times (-15)}=\dfrac{n}{(-15)\times 1-3\times (-10)}=\dfrac{1}{3\times 2-1\times 1}$
$\dfrac{m}{-10+30}=\dfrac{n}{-15+30}=\dfrac{1}{6-1}$
Simplifying we get,
$\dfrac{m}{20}=\dfrac{n}{15}=\dfrac{1}{5}$
Now equating we get,
$\dfrac{m}{20}=\dfrac{1}{5}$
So simplifying we get,
$m=4$
Also, $\dfrac{n}{15}=\dfrac{1}{5}$
$n=3$
Here we get, $(m,n)=(4,3)$.
So the solution is $(4,3)$.
So the correct answer is option(A).
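As a quick check (a hypothetical Python snippet, not part of the original solution), plugging the coefficients into the cross-multiplication formula reproduces the same values:
# Cross-multiplication check for 3m + n - 15 = 0 and m + 2n - 10 = 0
a1, b1, c1 = 3, 1, -15
a2, b2, c2 = 1, 2, -10
denom = a1 * b2 - a2 * b1            # 5
m = (b1 * c2 - b2 * c1) / denom      # 20 / 5 = 4
n = (c1 * a2 - a1 * c2) / denom      # 15 / 5 = 3
print(m, n)                          # 4.0 3.0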
Note: Carefully read the question and don't be confused about the cross multiplication method. While simplifying, take care not to make sign errors or miss any term while solving. |
proofpile-shard-0030-1 | {
"provenance": "003.jsonl.gz:2"
} | # What formula or rule has been used here?
I was in the middle of proving a trigonometric identity but couldn't succeed. I went through the solution and saw this step in between:
\begin{align}\frac{\cos A \cos B}{\sin A \sin B}&= \frac{3}{1}\\\\ \frac{\cos A \cos B +\sin A \sin B}{\cos A \cos B - \sin A \sin B}&= \frac{3+1}{3-1}\end{align}
What happened there in the second step?
• Jul 17 '17 at 10:28
If we have $$\frac xy = \frac31$$ then this means, by definition of fractions, that $x = 3y$. This yields $$\frac{x+y}{x-y} =\frac{3y+y}{3y-y} = \frac{3+1}{3-1}$$ In your case, $x = \cos A\cos B$ and $y = \sin A\sin B$. |
proofpile-shard-0030-2 | {
"provenance": "003.jsonl.gz:3"
} | # Three.js: Lighting not calculating correctly on THREE.Geometry objects
I have a three.js (REVISION: '68') issue with the lighting of THREE.Geometry objects:
I'm using the THREE.Geometry class to build up objects using vertices and faces, then I computeFaceNormals() and computeVertexNormals() before adding it to the scene.
The lighting is obviously not calculating correctly, and it looks like the light calculates before the object is moved into position (or some other issue causing all objects to have identical lighting regardless of position).
My light code is:
hemiLight = new THREE.HemisphereLight( 0xffffff, 0xffffff, 0.6 );
hemiLight.color.setHSL( 0.6, 1, 0.6 );
hemiLight.groundColor.setHSL( 0.095, 1, 0.75 );
hemiLight.position.set( 0, 50, 0 );
dirLight = new THREE.DirectionalLight( 0xffffff, 1 );
dirLight.color.setHSL( 0.1, 1, 0.95 );
dirLight.position.set( -1, 1.75, 1 );
dirLight.position.multiplyScalar( 50 );
Note: I also tried the documentation's sample code for the SpotLight, and the issue persisted.
I am using a comparison of THREE.Geometry objects: the first with a single material assigned at mesh creation, and the second with faces individually assigned a material and passed to the mesh with THREE.MeshFaceMaterial(..). Note that I tried both single-material and multi-material Geometries side-by-side and there is no difference.
Note I changed the color of one of my test cubes from blue to green compared to the header image.
The issue does not appear to be related to the multi-material code.
I understand that the THREE.Geometry class behaves differently compared to BoxGeometry, etc. (For example, computeFaceNormals() and computeVertexNormals() need to be explicitly called for THREE.Geometry, but not for BoxGeometry). I think I may be missing some other difference around flagging the material/lighting/geometry for update.
My code to create my two plain test cubes is:
var testGeo = new THREE.Geometry();
testGeo.vertices.push(new THREE.Vector3(0,0,0));
testGeo.vertices.push(new THREE.Vector3(0,20,0));
testGeo.vertices.push(new THREE.Vector3(20,20,0));
testGeo.vertices.push(new THREE.Vector3(20,0,0));
testGeo.vertices.push(new THREE.Vector3(0,0,20));
testGeo.vertices.push(new THREE.Vector3(0,20,20));
testGeo.vertices.push(new THREE.Vector3(20,20,20));
testGeo.vertices.push(new THREE.Vector3(20,0,20));
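// The winding order (clockwise vs counter-clockwise) of each index triple below
// determines the direction of the face normal computed by computeFaceNormals().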
testGeo.faces.push(new THREE.Face3(0,1,2));
testGeo.faces.push(new THREE.Face3(2,3,0));
testGeo.faces.push(new THREE.Face3(2,3,7));
testGeo.faces.push(new THREE.Face3(7,6,2));
testGeo.faces.push(new THREE.Face3(0,1,5));
testGeo.faces.push(new THREE.Face3(5,4,0));
testGeo.faces.push(new THREE.Face3(0,3,4));
testGeo.faces.push(new THREE.Face3(4,7,3));
testGeo.faces.push(new THREE.Face3(1,2,6));
testGeo.faces.push(new THREE.Face3(6,5,1));
testGeo.faces.push(new THREE.Face3(4,5,6));
testGeo.faces.push(new THREE.Face3(6,7,4));
testGeo.computeFaceNormals();
testGeo.computeVertexNormals();
var solidMatA = new THREE.MeshLambertMaterial({
color: 'blue'
})
solidMatA.side = THREE.DoubleSide;
var cubeA = new THREE.Mesh( testGeo, solidMatA );
cubeA.position.x = -40;
cubeA.position.y = -30;
cubeA.position.z = -30;
var testMaterialsListB = [];
var testGeo2 = new THREE.Geometry();
testGeo2.vertices.push(new THREE.Vector3(0,0,0));
testGeo2.vertices.push(new THREE.Vector3(0,20,0));
testGeo2.vertices.push(new THREE.Vector3(20,20,0));
testGeo2.vertices.push(new THREE.Vector3(20,0,0));
testGeo2.vertices.push(new THREE.Vector3(0,0,20));
testGeo2.vertices.push(new THREE.Vector3(0,20,20));
testGeo2.vertices.push(new THREE.Vector3(20,20,20));
testGeo2.vertices.push(new THREE.Vector3(20,0,20));
testGeo2.faces.push(new THREE.Face3(0,1,2));
testGeo2.faces.push(new THREE.Face3(2,3,0));
testGeo2.faces.push(new THREE.Face3(2,3,7));
testGeo2.faces.push(new THREE.Face3(7,6,2));
testGeo2.faces.push(new THREE.Face3(0,1,5));
testGeo2.faces.push(new THREE.Face3(5,4,0));
testGeo2.faces.push(new THREE.Face3(0,3,4));
testGeo2.faces.push(new THREE.Face3(4,7,3));
testGeo2.faces.push(new THREE.Face3(1,2,6));
testGeo2.faces.push(new THREE.Face3(6,5,1));
testGeo2.faces.push(new THREE.Face3(4,5,6));
testGeo2.faces.push(new THREE.Face3(6,7,4));
for (var i = 0; i < testGeo2.faces.length; i++)
{
var matB = new THREE.MeshLambertMaterial( {color: 'green'} );
matB.side = THREE.DoubleSide;
testMaterialsListB.push(matB);
}
testGeo2.computeFaceNormals();
testGeo2.computeVertexNormals();
var cubeB = new THREE.Mesh( testGeo2, new THREE.MeshFaceMaterial( testMaterialsListB) );
cubeB.position.x = -60;
cubeB.position.y = -30;
cubeB.position.z = -30;
Thanks!
• Since you are using three.js, you might consider making your code above a Stack Snippet to allow folks to reproduce the issue easily. Jan 28, 2015 at 17:48
• I reposted the question with JSFiddle to view the issue in live code: stackoverflow.com/questions/28215201/… Jan 29, 2015 at 13:09
Your triangles might be specified in a clockwise order instead of WebGL's preferred counter-clockwise order. To verify, switch your material side to THREE.FrontSide and run it again with THREE.BackSide and see if one gives you the correct results. If the THREE.BackSide works, then you have to go back and flip your ordering in all the Face3 creations.
Regarding shading looking similar with stacked objects: three.js does not take other objects into account when lighting each object in the render stage. Let's say you wanted to make a Minecraft clone with a bunch of boxes all stacked on top of one another. Well, three.js will calculate each individual box's lighting as if it were the only one in the world. Even if you stack them densely on top of each other, they will all look the same - light on top, dark beneath.
The lighting effect you are probably wanting is achieved by ShadowMaps in three.js. Shadowing portions of an object that are obscured by other objects is a complex and expensive task which is still being hashed out by the creators of three.js.
The normal ShadowMap mode in three.js looks OK for now however and it works in most cases.
Just to follow up in case others stumble across this:
While I did re-arrange the vertex order to ensure 90 degree perpendicular normals, I found that didn't make a material difference to adjacent object lighting.
The answer is that three.js doesn't intrinsically calculate lighting across adjacent objects in the scene graph, and while the shadowmap feature attempts to solve this, my investigations showed that a shadowmap solution didn't really work for my use case.
Instead, I applied only one of the two geometry normal calculations to get a simulated lighting effect. The two calls are:
geo.computeFaceNormals();
geo.computeVertexNormals();
What I ended up using:
geo.computeFaceNormals();
//geo.computeVertexNormals();
With computeVertexNormals()...
Without:
While not a perfect light effect (very flat), it should get me by. Thanks for the help posters. |
proofpile-shard-0030-3 | {
"provenance": "003.jsonl.gz:4"
} | # Saturation vapor pressure calculation - HMT130
## HMT130 User Guide
Document code: M211280EN
Revision: D
Language: English (United States)
Product: HMT130
Document type: User guide
Saturation vapor pressure (Pws) is the equilibrium water vapor pressure in a closed chamber containing liquid water. It is a function only of temperature, and it indicates the maximum amount of water that can exist in the vapor state.
Water vapor saturation pressure (Pws) is calculated with the following two formulas:
where
T = temperature in K
Ci = coefficients:
C0 = 0.49313580
C1 = -0.46094296 × 10^-2
C2 = 0.13746454 × 10^-4
C3 = -0.12743214 × 10^-7
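The two formula images themselves are not reproduced in this extract. As a hedged illustration only, the sketch below assumes the Hyland-Wexler-style formulation that these coefficient sets are conventionally used with (the Ci coefficients above define a corrected temperature Θ, and the bi coefficients listed further below give ln Pws in Pa); it is not taken verbatim from the HMT130 manual:
import math

# Assumed formulation (illustrative, not the official HMT130 equations):
#   theta  = T - sum_{i=0..3} C_i * T**i
#   ln Pws = sum_{i=-1..3} b_i * theta**i + b_4 * ln(theta), with Pws in Pa
C = [0.49313580, -0.46094296e-2, 0.13746454e-4, -0.12743214e-7]
B = {-1: -0.58002206e4, 0: 0.13914993e1, 1: -0.48640239e-1,
     2: 0.41764768e-4, 3: -0.14452093e-7, 4: 6.5459673}

def pws_pa(T_kelvin):
    theta = T_kelvin - sum(C[i] * T_kelvin**i for i in range(4))
    ln_pws = sum(B[i] * theta**i for i in range(-1, 4)) + B[4] * math.log(theta)
    return math.exp(ln_pws)

print(round(pws_pa(293.15)))  # roughly 2.3e3 Pa (2.3 kPa) at +20 degC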
where
bi = coefficients:
b-1 = -0.58002206 × 10^4
b0 = 0.13914993 × 10^1
b1 = -0.48640239 × 10^-1
b2 = 0.41764768 × 10^-4
b3 = -0.14452093 × 10^-7
b4 = 6.5459673 |
proofpile-shard-0030-4 | {
"provenance": "003.jsonl.gz:5"
} | Receiver and transmitter with the same antenna [duplicate]
I want to build a simple transceiver (half duplex) for 433.92 MHz ASK modulation. I have found a transmitter chip and a receiver chip that I want to use. I want to use the same antenna for RX and TX but I am unsure of how to approach this (The receiver can handle the full output power of the transmitter). I do not want to buy a ready-made circuit for this, I'm doing this mostly to learn.
The antenna is 50 ohms but the receiver and transmitter both have their own impedances and need some matching network. If I used separate antennas this would be simple (say a PI attenuator).
But I’m unsure of how this would work with having both RX and TX on the same antenna. My very naïve approach to this would be to use a splitter/combiner after the matching networks and accept the -3 dB loss from this. This way every side sees 50 ohms.
But this feels very wrong and any input here would be much appreciated!
• If you can make it work satisfactorily, it is not wrong. But if you want to go there, an rf switch is another solution. Jan 25 '18 at 18:44
• Also, if you start over, you could use a transceiver chip. I've used the like of this chip in the past and they work very well. Jan 25 '18 at 18:52
• You can use a circulator. Jan 25 '18 at 19:49
A typical 434 MHz low cost transmission system will have a free-space path loss of: -
Loss (dB) = $32.45 + 20\log_{10}(f) + 20\log_{10}(d)$ (Friis equation in dB form)
Where f is in MHz and d is in kilometres.
Transmission distance will be about 0.1 km and the pathloss works out at: -
32.5 dB + 52.8 dB - 20 dB = 65 dB
But antennas will provide some gain (about 2 dB each end) so the free space figure is more like 61 dB. However, most RF engineers will add another 30 dB for fade margin and this means the overall path loss is about 90 dB.
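As a rough sanity check of these numbers (an illustrative Python sketch; the 2 dB antenna gains and 30 dB fade margin are the assumptions stated above, not fixed standards):
import math

def free_space_path_loss_db(f_mhz, d_km):
    return 32.45 + 20 * math.log10(f_mhz) + 20 * math.log10(d_km)

fspl = free_space_path_loss_db(433.92, 0.1)   # ~65 dB at 100 m
budget = fspl - 2 - 2 + 30                    # subtract antenna gains, add fade margin
print(round(fspl, 1), round(budget, 1))       # ~65.2, ~91.2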
Using a splitter at both ends will degrade the power transmitted by 3 dB and degrade the power received by 3 dB; a total of 6 dB.
You then have to ask yourself if that is acceptable or not. For most cases, simplicity overrides performance and it isn't a big deal. However, 6 dB is equivalent to halving the range from 100 m to 50 m. I can't tell you if this is good or bad. |
proofpile-shard-0030-5 | {
"provenance": "003.jsonl.gz:6"
} | # hoop: Object-Oriented Programming in Haskell
[ language, library, mit ]
Library for object-oriented programming in Haskell.
Versions: 0.3.0.0
Dependencies: base (>=4.7 && <5.0), containers, haskell-src-exts (>=1.16), haskell-src-meta (>=0.6), lens (>=4.10), mtl (>=2.1), parsec (>=3.1.9), pretty, template-haskell (>=2.14), text
License: MIT
Copyright: (c) Michael B. Gale
Author: Michael B. Gale
Maintainer: [email protected]
Category: Language
Home page: https://github.com/mbg/hoop#readme
Bug tracker: https://github.com/mbg/hoop/issues
Source repository: head: git clone https://github.com/mbg/hoop
Uploaded: by mbg at 2020-07-04T19:10:34Z
## Modules
• Language
• Language.MSH
• Language.MSH.BuiltIn
• Language.MSH.CodeGen
• Language.MSH.CodeGen.Class
• Language.MSH.CodeGen.Constructors
• Language.MSH.CodeGen.Data
• Language.MSH.CodeGen.Decls
• Language.MSH.CodeGen.Inheritance
• Language.MSH.CodeGen.Instances
• Language.MSH.CodeGen.Interop
• Language.MSH.CodeGen.Invoke
• Language.MSH.CodeGen.Methods
• Language.MSH.CodeGen.MiscInstances
• Language.MSH.CodeGen.New
• Language.MSH.CodeGen.NewInstance
• Language.MSH.CodeGen.Object
• Language.MSH.CodeGen.ObjectInstance
• Language.MSH.CodeGen.PrimaryInstance
• Language.MSH.CodeGen.Shared
• Language.MSH.CodeGen.SharedInstance
• Language.MSH.Constructor
• Language.MSH.MethodTable
• Language.MSH.NewExpr
• Language.MSH.Parsers
• Language.MSH.Pretty
• Language.MSH.QuasiQuoters
• Language.MSH.RuntimeError
• Language.MSH.Selectors
• Language.MSH.StateDecl
• Language.MSH.StateEnv
# hoop
A Haskell library for object-oriented programming which allows programmers to use objects in ordinary Haskell programs. In particular, the library achieves the following design objectives (to avoid ambiguity with Haskell's type classes, we refer to classes in the object-oriented sense as object classes):
• No extensions to the Haskell language are required beyond what is already implemented in GHC. Object classes are generated from Template Haskell quasi quotations using an OO-like syntax where the methods are defined as ordinary Haskell expressions.
• Object classes can be instantiated from ordinary Haskell code (with an overloaded function named new). The resulting objects are ordinary Haskell values and can be used as such.
• Calling methods on objects can be done from within ordinary Haskell code.
• The objects do not rely on IO. Instantiating objects and calling methods on the resulting objects is pure.
• Object classes can inherit from other object classes, which also establishes subtyping relations between them. There is no limit to how deep these inheritance trees may grow.
• Class hierarchies are open for extension. I.e. the library does not need to know about all subclasses of a given class in order to generate the code for that class, allowing modular compilation.
• Casting from subtype objects to their supertypes is supported and the types are correctly reflected in Haskell's type system (e.g. assuming that we have Duck <: Bird and that obj :: Duck then upcast obj :: Bird) and pure.
• Type annotations are generally not required except where something would logically be ambiguous otherwise (e.g. instantiating an object with the new function).
## Examples
The test folder contains a number of examples of the library in action, illustrating the various features.
As a quick tutorial, a simple expression language can be implemented using the library as shown below. Note that the bodies of the two implementations of the eval method are ordinary Haskell expressions. The .! operator is an ordinary Haskell operator used to call methods on objects and this is just an ordinary Haskell definition, too.
[state|
abstract state Expr where
    eval :: Int

state Val : Expr where
    data val = 0 :: Int

    eval = do
        r <- this.!val
        return r

state Add : Expr where
    data left :: Expr
    data right :: Expr

    eval = do
        x <- this.!left.!eval
        y <- this.!right.!eval
        return (x+y)
|]
someExpr = new @Add (upcast $ new @Val 4, upcast $ new @Val 7)
someExprResult :: Int
someExprResult = result (someExpr.!eval)
If we evaluate someExprResult, the result is 11 as expected. We can note some points of interest here that differ from popular object-oriented programming languages:
• The type annotations on someExpr and someExprResult are optional and just provided for clarity. The type applications for the calls to new are required (alternatively, type annotations on the sub-expression would work, too).
• Casts must be explicit: in the example, the objects of type Val must be explicitly cast to Expr values to instantiate the Add object.
• Since everything is pure, calling a method on an object produces two results: the result of the method call and a (potentially) updated object. The result function returns the result of calling eval on the someExpr object, discarding the resulting object.
• It does not matter what type of object we call eval on, as long as it is of type Expr or is a sub-type of Expr.
Indeed, we can cast the Add object to an Expr object, call eval on it, and still get the correct result:
> let e = upcast someExpr in result (e.!eval)
11
• Casting from supertype objects to a subtype is possible, but may fail (returning Nothing). E.g. assuming Duck <: Bird and that obj :: Bird then downcast obj :: Maybe Duck.
## Overview of the process
• QuasiQuoters.hs contains the entry point
• First, the state declarations are parsed (Parsers.hs) via parseStateDecl
• The parsed declarations are then passed to genStateDecls (Language.MSH.CodeGen.Decls / Decls.hs)
• This turns the declarations into a dependency graph (via buildStateGraph in Language.MSH.StateEnv / StateEnv.hs)
• If successful, the graph is written to graph.log
• The genStateDecl function is then applied to every state declaration in dependency order (i.e. starting from no dependencies) |
proofpile-shard-0030-6 | {
"provenance": "003.jsonl.gz:7"
} | # Webcut volume with non planar surface
Dear everyone,
For my previous problem Webcut failed for volume with nonplanar sheet, I finally fixed it by changing the spacing between each point when I create the surface. Then, webcut sheet extended command is able to cut the volume.
Since I need a larger geometry, I recreated the geometry and used the same spacing to create the surface. But again it fails to cut the volume. I have tried both webcut volume 1 with sheet body 2 and webcut volume 1 with sheet extended from surface 2, but neither works. I have also tried smoothing the surface data, but it still does not work.
The error says :
WARNING: Cutting Sheet does not intersect the original volume.
The original volume is restored.
No volumes were webcut.
or
ERROR: /scratch/akuncoro/2021/mesh/sumatra_full/geometry_full.jou (155). Error in webcutting volume with sheet.
ERROR: /scratch/akuncoro/2021/mesh/sumatra_full/geometry_full.jou (155). ACIS API error number 21033
ACIS API message = inconsistent face-body relationships
No volumes were webcut.
I attach my .jou and .sat file here
alvina_problem_may.jou (1.3 KB)
surf_topo_3km.sat (1.9 MB)
surf_topo_full.sat (1.6 MB)
Is there any suggestion to fix this problem?
Thank you in advance.
-Alvina
Hi @alvinakk,
The primary issue appears to be related to the scale of the geometry. We recommend that users scale their geometry so that the smallest features they care about in the model are $\approx \mathcal{O}(1)$. If I scale your geometry by 0.0001 then I'm able to apply the webcut. We then have a function that will scale your mesh on export to recover the actual dimensions of your model.
For example:
reset
## Create a "small" geometry
bri x 0.000123 # Miles
## Scale geometry from Miles to Feet,
## which gives us an "easy" conversion to remember
## *and* makes our smallest "important" edges a size ~ 1
volume 1 scale 5280
## Mesh the volume
mesh volume 1
block 1 vol 1
## Setup option to scale mesh on export
transform mesh output scale {1/5280} # Uses APREPRO syntax to evaluate 1/5280
## Export the mesh, which will be scaled
export mesh "./transformed_mesh.e" overwrite
Note that transform mesh is multiplicative / additive depending on whether you’re doing a scale or translation. So doing transform mesh output scale 10 twice will scale by 100, not 10. Make sure to transform mesh output reset to reset.
Anyways, so if I scale your geometry even by just a factor of 0.0001 I am able to successfully cut your geometry:
# ----------------------------------------------------------------------
# Set units to SI.
# ----------------------------------------------------------------------
${Units('si')}
# ----------------------------------------------------------------------
# Reset geometry.
# ----------------------------------------------------------------------
reset
import Acis "surf_topo_full.sat"
#import Acis "surf_topo_3km.sat"
${idSurf=Id("surface")}
surface {idSurf} name "s_topo"
${idVol=Id("volume")} volume {idVol} name "v_topo"${idBody=Id("body")}
body {idBody} name "b_topo"
# ----------------------------------------------------------------------
# Create block for domain.
# ----------------------------------------------------------------------
# Block is 1700 km x 1000 km x 300 km
${blockLength=1700.0*km}
${blockWidth=1000.0*km}
${blockHeight=300.0*km}
brick x {blockLength} y {blockWidth} z {blockHeight}
${idVol=Id("volume")}
volume {idVol} name "v_domain"
move volume v_domain location 700000 9800000 -100000 include_merged
volume all scale 0.0001
transform mesh output scale 10000
webcut volume v_domain with sheet body b_topo
delete volume {idBody}
# ----------------------------------------------------------------------
# Imprint all volumes, then merge.
# ----------------------------------------------------------------------
imprint all with volume all
merge all
# End of file
And here’s a picture of the meshed geometry
And of the bottom volume to show the cut surface
Dear @gvernon,
Thank you for your reply.
I can finally cut the volume now.
But because I still have another surface to cut the volume, so I try to rescale the volume after I cut the volume using this command and before I mesh the volume:
volume all scale 0.0001
webcut volume v_domain with sheet body b_topo
volume all scale 10000
Is it safe to do that? Because I see this report:
Cubit>volume all scale 10000
WARNING: Model may be corrupted from the scaling operation.
Consider healing it.
WARNING: Model may be corrupted from the scaling operation.
Consider healing it.
Finished Command: volume all scale 10000
Thank you in advance.
Alvina KK |
proofpile-shard-0030-7 | {
"provenance": "003.jsonl.gz:8"
} | ### jhasuraj01's blog
By jhasuraj01, history, 7 days ago,
AND, OR and XOR are bitwise operators in C++ and Python that perform operations on the binary representation of numbers.
### Bitwise AND (&)
It compares each bit of the first operand to the corresponding bit of the second operand. If both bits are 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
Example: In C++:
int x = 12; // binary: 1100
int y = 15; // binary: 1111
int z = x & y; // binary: 1100, decimal: 12
In Python:
x = 12
y = 15
z = x & y
print(z) # Output: 12
### Bitwise OR (|)
It compares each bit of the first operand to the corresponding bit of the second operand. If at least one of the bits is 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
Example: In C++:
int x = 12; // binary: 1100
int y = 15; // binary: 1111
int z = x | y; // binary: 1111, decimal: 15
In Python:
x = 12
y = 15
z = x | y
print(z) # Output: 15
### Bitwise XOR (^)
It compares each bit of the first operand to the corresponding bit of the second operand. If the bits are different, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
Example:
In C++:
int x = 12; // binary: 1100
int y = 15; // binary: 1111
int z = x ^ y; // binary: 0011, decimal: 3
In Python:
x = 12
y = 15
z = x ^ y
print(z) # Output: 3
It's worth noting that these operations act directly on the binary representation of integers and typically map to single CPU instructions, so they are generally faster than achieving the same effect with ordinary arithmetic on the decimal values.
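As an optional illustration (not part of the original post), printing the operands and results in binary makes the bit-by-bit behaviour above easy to see:
x, y = 12, 15
for name, value in [("x", x), ("y", y), ("x & y", x & y), ("x | y", x | y), ("x ^ y", x ^ y)]:
    print(f"{name:>5} = {value:2d} = {value:04b}")
# the last column shows 1100, 1111, 1100, 1111 and 0011 respectively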
|
proofpile-shard-0030-8 | {
"provenance": "003.jsonl.gz:9"
} | # Arbelos
In geometry, an arbelos is a plane region bounded by three semicircles with three apexes such that each corner of each semicircle is shared with one of the others (connected), all on the same side of a straight line (the baseline) that contains their diameters.[1]
An arbelos (grey region)
Arbelos sculpture in Kaatsheuvel, Netherlands
The earliest known reference to this figure is in Archimedes's Book of Lemmas, where some of its mathematical properties are stated as Propositions 4 through 8.[2] The word arbelos is Greek for 'shoemaker's knife'. The figure is closely related to the Pappus Chain.
## Properties
Two of the semicircles are necessarily concave, with arbitrary diameters a and b; the third semicircle is convex, with diameter a+b.[1]
Some special points on the arbelos.
### Area
The area of the arbelos is equal to the area of a circle with diameter ${\displaystyle HA}$ .
Proof: For the proof, reflect the arbelos over the line through the points ${\displaystyle B}$ and ${\displaystyle C}$, and observe that twice the area of the arbelos is what remains when the areas of the two smaller circles (with diameters ${\displaystyle BA}$ and ${\displaystyle AC}$) are subtracted from the area of the large circle (with diameter ${\displaystyle BC}$). Since the area of a circle is proportional to the square of the diameter (Euclid's Elements, Book XII, Proposition 2; we do not need to know that the constant of proportionality is ${\displaystyle {\frac {\pi }{4}}}$), the problem reduces to showing that ${\displaystyle 2(AH)^{2}=(BC)^{2}-(AC)^{2}-(BA)^{2}}$. The length ${\displaystyle (BC)}$ equals the sum of the lengths ${\displaystyle (BA)}$ and ${\displaystyle (AC)}$, so this equation simplifies algebraically to the statement that ${\displaystyle (AH)^{2}=(BA)(AC)}$. Thus the claim is that the length of the segment ${\displaystyle AH}$ is the geometric mean of the lengths of the segments ${\displaystyle BA}$ and ${\displaystyle AC}$. Now (see Figure) the triangle ${\displaystyle BHC}$, being inscribed in the semicircle, has a right angle at the point ${\displaystyle H}$ (Euclid, Book III, Proposition 31), and consequently ${\displaystyle (HA)}$ is indeed a "mean proportional" between ${\displaystyle (BA)}$ and ${\displaystyle (AC)}$ (Euclid, Book VI, Proposition 8, Porism). This proof approximates the ancient Greek argument; Harold P. Boas cites a paper of Roger B. Nelsen[3] who implemented the idea as the following proof without words.[4]
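In symbols, writing $a=BA$ and $b=AC$ for the two inner diameters, the same computation runs:

$$\text{Area of arbelos}=\frac{\pi}{8}\left[(a+b)^2-a^2-b^2\right]=\frac{\pi}{4}\,ab=\frac{\pi}{4}\,(AH)^2,$$

since $(AH)^2=(BA)(AC)=ab$; and $\frac{\pi}{4}(AH)^2$ is exactly the area of a circle with diameter $AH$.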
### Rectangle
Let ${\displaystyle D}$ and ${\displaystyle E}$ be the points where the segments ${\displaystyle BH}$ and ${\displaystyle CH}$ intersect the semicircles ${\displaystyle AB}$ and ${\displaystyle AC}$ , respectively. The quadrilateral ${\displaystyle ADHE}$ is actually a rectangle.
Proof: The angles ${\displaystyle BDA}$ , ${\displaystyle BHC}$ , and ${\displaystyle AEC}$ are right angles because they are inscribed in semicircles (by Thales' theorem). The quadrilateral ${\displaystyle ADHE}$ therefore has three right angles, so it is a rectangle. Q.E.D.
### Tangents
The line ${\displaystyle DE}$ is tangent to semicircle ${\displaystyle BA}$ at ${\displaystyle D}$ and semicircle ${\displaystyle AC}$ at ${\displaystyle E}$ .
Proof: Since angle BDA is a right angle, angle DBA equals π/2 minus angle DAB. However, angle DAH also equals π/2 minus angle DAB (since angle HAB is a right angle). Therefore triangles DBA and DAH are similar. Therefore angle DIA equals angle DOH, where I is the midpoint of BA and O is the midpoint of AH. But AOH is a straight line, so angle DOH and DOA are supplementary angles. Therefore the sum of angles DIA and DOA is π. Angle IAO is a right angle. The sum of the angles in any quadrilateral is 2π, so in quadrilateral IDOA, angle IDO must be a right angle. But ADHE is a rectangle, so the midpoint O of AH (the rectangle's diagonal) is also the midpoint of DE (the rectangle's other diagonal). As I (defined as the midpoint of BA) is the center of semicircle BA, and angle IDE is a right angle, then DE is tangent to semicircle BA at D. By analogous reasoning DE is tangent to semicircle AC at E. Q.E.D.
### Archimedes' circles
The altitude ${\displaystyle AH}$ divides the arbelos into two regions, each bounded by a semicircle, a straight line segment, and an arc of the outer semicircle. The circles inscribed in each of these regions, known as the Archimedes' circles of the arbelos, have the same size.
## Variations and generalisations
example of an f-belos
The parbelos is a figure similar to the arbelos, that uses parabola segments instead of half circles. A generalisation comprising both arbelos and parbelos is the f-belos, which uses a certain type of similar differentiable functions.[5]
In the Poincaré half-plane model of the hyperbolic plane, an arbelos models an ideal triangle.
## Etymology
The type of shoemaker's knife that gave its name to the figure
The name arbelos comes from Greek ἡ ἄρβηλος he árbēlos or ἄρβυλος árbylos, meaning "shoemaker's knife", a knife used by cobblers from antiquity to the current day, whose blade is said to resemble the geometric figure. |
proofpile-shard-0030-9 | {
"provenance": "003.jsonl.gz:10"
} | # zbMATH — the first resource for mathematics
Attractors for second order lattice dynamical systems. (English) Zbl 1002.37040
The second order lattice system $\ddot{u}_i+h(\dot u_i)-(u_{i-1}-2u_{i}+u_{i+1})+\lambda u_i+f(u_i)=g_i,\quad i\in \mathbb{Z},$ is considered, where $$\lambda>0$$, $$(g_i)_i\in\ell^2$$, and the nonlinearities $$f$$ and $$g$$ satisfy some regularity and monotonicity assumptions. The existence of a global attractor in a suitable state space ($$\ell^2\times\ell^2$$) is established and its semicontinuity properties are studied.
##### MSC:
37L60 Lattice dynamics and infinite-dimensional dissipative dynamical systems 37L25 Inertial manifolds and other invariant attracting sets of infinite-dimensional dissipative dynamical systems
|
proofpile-shard-0030-10 | {
"provenance": "003.jsonl.gz:11"
} | ## Differential and Integral Equations
### Sub- and supersolutions for semilinear elliptic equations on all of $\mathbb{R}^n$
#### Article information
Source
Differential Integral Equations, Volume 7, Number 5-6 (1994), 1215-1225.
Dates
First available in Project Euclid: 23 May 2013
Brown, K. J.; Stavrakakis, N. Sub- and supersolutions for semilinear elliptic equations on all of $\mathbb{R}^n$. Differential Integral Equations 7 (1994), no. 5-6, 1215--1225. https://projecteuclid.org/euclid.die/1369329512 |
proofpile-shard-0030-11 | {
"provenance": "003.jsonl.gz:12"
} | # Measurement of cross sections and polarisation observables in η photoproduction from neutrons and protons bound in light nuclei
Witthauer, L.. Measurement of cross sections and polarisation observables in η photoproduction from neutrons and protons bound in light nuclei. 2015, Doctoral Thesis, University of Basel, Faculty of Science.
Official URL: http://edoc.unibas.ch/diss/DissB_11534 |
proofpile-shard-0030-12 | {
"provenance": "003.jsonl.gz:13"
} | ### Ioannis Tsokanos (The University of Manchester)
Thursday, May 12, 2022, 11:10 – 12:00, -101
Abstract:
In this talk, we study the density properties in the real line of oscillating sequences of the form $( g(k) \cdot F(kα) )_{k \in \mathbb{N}}$, where $g$ is a positive increasing function and $F$ a real continuous $1$-periodic function. This extends work by Berend, Boshernitzan and Kolesnik who established differential properties on the function F ensuring that the oscillating sequence is dense modulo 1.
More precisely, when $F$ has finitely many roots in $[0,1)$, we provide necessary and sufficient conditions for the oscillating sequence under consideration to be dense in $\mathbb{R}$. All the related results are stated in terms of the Diophantine properties of $α$, with the help of the theory of continued fractions. |
proofpile-shard-0030-13 | {
"provenance": "003.jsonl.gz:14"
Instead of walking along two adjacent sides of a rectangular field, a boy took a short cut along the diagonal and saved a distance equal to half the longer side. Then, the ratio of the shorter side to the longer side is $:$
1. $1/2$
2. $2/3$
3. $1/4$
4. $3/4$
1 Answer
Option D is the correct answer.
Let the longer side (length) be $l$ and the shorter side (breadth) be $b$, with $l > b$.
Walking the two adjacent sides rather than the diagonal costs an extra $l/2$:
$l + b = \sqrt{l^2 + b^2} + \dfrac{l}{2}$
$\dfrac{l}{2} + b = \sqrt{l^2 + b^2}$
Squaring both sides:
$\dfrac{l^2}{4} + lb + b^2 = l^2 + b^2 \implies lb = \dfrac{3l^2}{4}$
Hence $b/l = 3/4$.
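A quick numeric check (illustrative only) with sides in the ratio 3 : 4 confirms the saving is half the longer side:
import math
l, b = 4.0, 3.0
saving = (l + b) - math.hypot(b, l)   # two sides minus the diagonal
print(saving, l / 2)                  # both print 2.0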
|
proofpile-shard-0030-14 | {
"provenance": "003.jsonl.gz:15"
## 1. Add image to the title page (bottom)
The following minimal working example shows how one can include an image on the title page using the \titlegraphic command:
% Add image to the title page
\documentclass{beamer}
\usepackage{tikz}
\title{A new presentation}
\author{The author}
\institute{The Institute that pays him}
% Add the image inside titlegraphics macro
\titlegraphic{
\includegraphics[width=\textwidth]{Sample Image}
}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\end{document}
Compiling this code yields:
It should be noted that the \titlegraphic content will push the title page details (title, author, institute, etc.) upward. So sometimes we also need to fix the height of the image in the \includegraphics command using height=<value> (e.g. height=0.5\textwidth).
## 2. Add image to the title page (top)
As the title of a presentation is positioned at the top of a title slide, we can include an image just before the title text inside the \title{} command:
% Add image to the title page (top position)
\documentclass{beamer}
% Add the image inside title macro
\title{
\centering\includegraphics[width=\textwidth]{Sample Image}\\
A new presentation
}
\author{The author}
\institute{The Institute that pays him}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\end{document}
Compiling this code yields:
We have reached the end of this tutorial. If you would like to add a background image to the title slide, I invite you to read this tutorial: “How do you add a background image in LaTeX Beamer?” |
proofpile-shard-0030-15 | {
"provenance": "003.jsonl.gz:16"
} | ?
Free Version
Difficult
# Titration Curve for a Weak Acid
APCHEM-UDVYEN
A weak acid of unknown molarity is titrated with $NaOH$. The graph above was obtained.
At which of the following points on the titration curve is $[A^-]$ closest to twice that of $[HA]$?
A
B
C
D |
proofpile-shard-0030-16 | {
"provenance": "003.jsonl.gz:17"
} | # Diffie-Hellman exchange
So for a Diffie-Hellman problem, I am given the prime $p$ for the Diffie-Hellman exchange. I am also given $g$, the machine's secret number $A$, the station's secret number $B$, a Diffie-Hellman shielded login name $V$, and a Diffie-Hellman shielded password $W$.
For this problem, I am given three users and must determine which one accessed files they had no clearance for. So I computed $x=g^A\pmod p$ and $y=g^B\pmod p$, then $x^A\pmod p$ and $y^B\pmod p$ which gave me my secret common key. Where I am confused is as to how to now unshield the DHS key and how I can use $V$ and $W$ to do so.
What do I do next?
According to a textbook I am reading, the equation $DHS \cdot u \equiv 1 \pmod p$ has a solution in $\mathbb{N}_p$, and this solution is $UDHS$, the unshielding of DHS. But I am confused by what this means. Help is appreciated.
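The $u$ described in the textbook is just the modular inverse of the shared key modulo $p$. A hypothetical toy example in Python (illustrative numbers only; the final recovery step assumes the shielding is a simple multiplication modulo $p$, which the question does not state):
p = 101                      # hypothetical small prime
DHS = 37                     # hypothetical shared Diffie-Hellman key
u = pow(DHS, -1, p)          # modular inverse (Python 3.8+), so DHS * u ≡ 1 (mod p)
print((DHS * u) % p)         # 1
# If a value were shielded as V = (name * DHS) % p, it would be recovered as (V * u) % p.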
Actually, you should be computing $x^B \bmod p$ and $y^A \bmod p$ to derive the shared secret. – poncho Oct 22 '13 at 19:26
Within the DH protocol, there's no standard way to do "Diffie-Hellman shielded Logic name" and "password". It is certainly possible to design a protocol that uses DH which does it, however the details of that protocol are outside the Diffie-Hellman protocol. – poncho Nov 4 '13 at 17:14 |
proofpile-shard-0030-17 | {
"provenance": "003.jsonl.gz:18"
} | #### VT Markets APP
Trade CFDs on FX, Gold and more
### US Producer Prices Index fell unexpectedly, Reflecting a Drop in Energy Costs.
###### August 12, 2022
US stocks slid on Thursday and erased gains on speculation the rally that followed softer inflation data went too far, with the Federal Reserve still keeping monetary policy tight. A key measure of US producer prices unexpectedly fell for the first time in more than two years, mainly reflecting a drop in energy costs. Similar to the consumer prices report on Wednesday, both the overall and core figures were softer than forecast. However, inflation remains stubbornly high and will likely keep the Fed on a hawkish path to curb it. Meanwhile, equities have been bolstered by a better-than-expected earnings season, and those companies that have trailed analysts' estimates were rewarded with the biggest gains in at least five years.
The benchmarks, the S&P 500 and the Dow Jones Industrial Average, were both little changed, ending slightly lower on Thursday after the market digested the CPI numbers. Five out of eleven sectors stayed in positive territory, with the Energy and Financial sectors performing best among all groups, rising 3.19% and 1.02% respectively. It's worth noting that big Tech underperformed, with the Nasdaq 100 more than 20% above its June lows; the index slid 0.6% on a daily basis for the day.
Main Pairs Movement
US dollar was slightly lower on Thursday, following a dramatic 1% loss the previous day when data showed U.S. inflation was not as hot as anticipated in July. The DXY index edged lower since the Asia trading session and touched a daily-low level below 104.6, and then rebounded to a level above 105.2.
The GBP/USD slid with a 0.11 % loss on daily basis, as the market amid a risk-off impulse while the greenback weakened. The cables witnessed fresh upbeat transactions during the Asian trading session and then lost bullish momentum and fell to a level below 1.220. Apart from that, investors needed to keep an eye out for the critical GDP report on Friday, to confirm the slowdown of economic growth across the UK. Meantime, EURUSD has turned sideways around 1.032, and the pairs advanced with a 0.2% gain for the day.
Gold declined with a 0.15% loss on a daily basis, as Federal Reserve officials kept their hawkish stances. XAU/USD oscillated in a range from $1,783 to $1,799. WTI and Brent oil both surged on Thursday, rising 2.62% and 2.13% respectively.
Technical Analysis
EURUSD (4-Hour Chart)
The EUR/USD pair advanced on Thursday, preserving its upside traction and extending its previous rebound toward the 1.036 mark after the release of softer-than-expected US PPI data. The pair is now trading at 1.03281, posting a 0.28% daily gain. EUR/USD stays in positive territory amid a weaker US dollar across the board, as the easing US inflation figures lend support to market sentiment and keep the safe-haven greenback on the back foot. The US Producer Price Index (PPI) declined to 9.8% year-over-year in July, which came in lower than the market's expectations and pushed the EUR/USD pair higher. For the Euro, European indexes struggle to post advances while the EUR/USD pair is up for the fifth consecutive day.
For the technical aspect, the RSI indicator is 66 as of writing, suggesting that the upside is more favoured as the RSI stays above the mid-line. As for the Bollinger Bands, the price failed to climb higher but hovered around the upper band, therefore some upside traction can be expected. In conclusion, we think the market will be slightly bullish as the pair is testing the 1.0325 resistance line. A sustained strength above that level might open the road to additional gains.
Resistance: 1.0325, 1.0438, 1.0484
Support: 1.0282, 1.0158, 1.0111
GBPUSD (4-Hour Chart)
The GBP/USD pair edged higher on Thursday, failing to gather bullish momentum and remaining under pressure below the 1.225 mark during the US session amid risk-off market sentiment. At the time of writing, the cable stays in positive territory with a 0.11% gain for the day. The cooler-than-expected US inflation report and the upbeat US Initial Jobless Claims figure both exerted bearish pressure on the safe-haven greenback and underpinned the GBP/USD pair. The economic data showed that supply-chain conditions are improving and inflationary pressures on the wholesale side have also begun to ease. For the British pound, the Bank of England Chief Economist Huw Pill said on Thursday that higher rates in the short term could also mean some slowing in the UK economy.
For the technical aspect, the RSI indicator is 61 as of writing, suggesting that sellers remain on the sidelines as the RSI on the four-hour chart stays near 60. For the Bollinger Bands, the price failed to preserve upside traction and started to retreat, therefore a continuation of the downside trend can be expected. In conclusion, we think the market will be bearish as long as the 1.2248 resistance line holds. On the upside, if the pair climbs above that level and starts using it as support, bulls could show interest and lift the pair higher.
Resistance: 1.2248, 1.2317, 1.2381
Support: 1.2154, 1.2068, 1.2027
XAUUSD (4-Hour Chart)
Despite the renewed weakness witnessed in the US dollar amid the softer-than-expected US PPI report on Thursday, the pair XAU/USD struggled to gather bullish momentum and retreated to the $1,787 area to erase most of its daily gains during the US trading session. XAU/USD is trading at $1,789.87 at the time of writing, losing 0.13% daily. Signs that inflation might have peaked already continue to support speculation for a less aggressive policy tightening by the Fed, as the softer-than-expected US PPI data have also reinforced market expectations. For the time being, a 50 bps rate hike by the Fed seems likely at the September meeting. Moreover, the risk-on market mood might keep a lid on any further gains for the safe-haven metal.
For the technical aspect, the RSI indicator is 52 as of writing, suggesting the pair's indecisiveness in the near term as the RSI indicator stays near the mid-line. For the Bollinger Bands, the price witnessed fresh selling and dropped below the moving average, therefore the downside traction should persist. In conclusion, we think the market will be bearish as the pair is heading to test the $1,785 support line. A break below that level could favour the bears and skew the risk to the downside.
Resistance: $1,811, $1,822, $1,831
Support: $1,785,$1,769, \$1,756
Economic Data |
proofpile-shard-0030-18 | {
"provenance": "003.jsonl.gz:19"
} | Question
# The displacement of a particle executing SHM is given by $$y = 5\sin\left(4t + \displaystyle \frac{\pi}{3}\right)$$. If $$T$$ is the time period and the mass of the particle is $$2$$ g, the kinetic energy of the particle when $$t\,=\,\displaystyle \frac{T}{4}$$ is given by
A
0.4 J
B
0.5 J
C
3 J
D
0.3 J
Solution
## The correct option is D: 0.3 J
The displacement of the particle executing SHM is $$y = 5\sin\left(4t + \frac{\pi}{3}\right)$$ ......(i)
The velocity of the particle is $$\frac{dy}{dt} = 5\,\frac{d}{dt}\sin\left(4t + \frac{\pi}{3}\right) = 5 \times 4\cos\left(4t + \frac{\pi}{3}\right) = 20\cos\left(4t + \frac{\pi}{3}\right)$$ ......(ii)
Here $$\omega = 4$$, so the time period is $$T = \frac{2\pi}{\omega} = \frac{\pi}{2}$$, and $$t = \frac{T}{4} = \frac{\pi}{8}$$.
Putting this value of $$t$$ in Eq. (ii), we get $$u = 20\cos\left(4 \times \frac{\pi}{8} + \frac{\pi}{3}\right) = 20\cos\left(\frac{\pi}{2} + \frac{\pi}{3}\right) = -20\sin\frac{\pi}{3} = -20 \times \frac{\sqrt{3}}{2} = -10\sqrt{3}$$
The kinetic energy of the particle is $$KE = \frac{1}{2}mu^2$$ with $$m = 2\ \text{g} = 2 \times 10^{-3}\ \text{kg}$$
$$KE = \frac{1}{2} \times 2 \times 10^{-3} \times \left(-10\sqrt{3}\right)^2 = 10^{-3} \times 100 \times 3 = 0.3\ \text{J}$$
Physics
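A quick numeric check of this result (illustrative only, not part of the original solution):
import math
m = 2e-3                                  # kg
T = 2 * math.pi / 4                       # period for omega = 4 rad/s
t = T / 4
v = 20 * math.cos(4 * t + math.pi / 3)    # dy/dt of y = 5 sin(4t + pi/3)
print(0.5 * m * v**2)                     # ~0.3 J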
|
proofpile-shard-0030-19 | {
"provenance": "003.jsonl.gz:20"
} | Theorem 8.6 The diagonals of a parallelogram bisect each other Given : ABCD is a Parallelogram with AC and BD diagonals & O is the point of intersection of AC and BD To Prove : OA = OC & OB = OD Proof : Since, opposite sides of Parallelogram are parallel. The Equation 2 gives. Verify your number to create your account, Sign up with different email address/mobile number, NEWSLETTER : Get latest updates in your inbox, Need assistance? Hence diagonals of a parallelogram bisect each other [Proved]. Thus the two diagonals meet at their midpoints. This is exactly what we did in the general case, and it's the simplest way to show that two line segments are equal. Thus the two diagonals meet at their midpoints. google_ad_client = "pub-9360736568487010"; . In AOD and BOC OAD = OCB AD = CB ODA = OBC AOD BOC So, OA = OC & OB = OD Hence Proved. Prove that the diagonals of a parallelogram bisect each other. That is, each diagonal cuts the other into two equal parts. Then the two diagonals are c = a + b (Eq 1) d = b - a (Eq 2) Now, they intersect at point 'Q'. We show that these two midpoints are equal. Start studying Geometry. In a quadrangle, the line connecting two opposite corners is called a diagonal. The position vectors of the midpoints of the diagonals AC and BD are (bar"a" + bar"c")/2 and (bar"b" + bar"d")/2. If possible I would just like a push in the right direction. The angles of a quadrilateral are in the ratio 3: 5: 9: 13. In this video, we learn that the diagonals of a parallelogram bisect each other. To prove that AC and BD bisect each other, you have to prove that AE = EC = BE = ED. In any parallelogram, the diagonals (lines linking opposite corners) bisect each other. Google Classroom Facebook Twitter Angles EDC and EAB are equal in measure for the same reason. Home Vectors Vectors and Plane Geometry Examples Example 7: Diagonals of a Parallelogram Bisect Each Other Last Update: 2006-11-15 ∴ diagonals AC and BD have the same mid-point ∴ diagonals bisect each other ..... Q.E.D. (please explain briefly and if possible with proof and example) google_ad_height = 90; Question:- The Diagonals diagonals of a parallelogram bisect each other. Does $\overline { AC }$ bisect two opposite corners ) bisect each other applicable to concave quadrilateral when! Angles which the meet is so concave quadrilateral prove that the diagonals of a parallelogram bisect each other when we attempt to that... And separates it into two equal parts is called a bisector mobile below! And more with flashcards, games, and more with flashcards, games, and other study tools =. Said, I was wondering if within parallelogram the diagonals diagonals of a quadrilateral bisect each other bisect... - Mathematics - TopperLearning.com | w62ig1q11 Thus the two diagonals meet at their midpoints 8.7 if the bisects. Below, for any content/service related issues please contact on this number figure above drag any vertex to the. In any parallelogram, the line connecting two opposite corners ) bisect each other, will. Diagonals bisects each other D E are 9 0 0 and Answer congruent to itself is'nt the sum... Parallelogram ABCD, shown in figure 10.2.13 a line that intersects another line segment and it... Lines linking opposite corners ) bisect each other terms, and more with flashcards, games, and with. 
Prove that the diagonals of a parallelogram bisect each other. Given a parallelogram ABCD whose diagonals AC and BD intersect at E, use congruent triangles: AB = DC because opposite sides of a parallelogram are equal, and since AB is parallel to DC the alternate angles EAB and ECD are equal in measure, as are the angles EBA and EDC. By ASA, triangle EAB is congruent to triangle ECD, so AE = EC and BE = ED. The two diagonals therefore meet at their common midpoint: each diagonal cuts the other into two equal parts, which is exactly what it means for the diagonals to bisect each other.
The same conclusion follows from the vector method, since the midpoint of AC and the midpoint of BD are both (A + C)/2 = (B + D)/2. Conversely, if the diagonals of a quadrilateral bisect each other, then that quadrilateral is a parallelogram.
Note that only in special parallelograms such as the rhombus are the diagonals also perpendicular, with all four angles at E equal to 90°; in a general parallelogram the diagonals bisect each other without being perpendicular.
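The vector argument can be written out in one line (my addition, using position vectors $\vec a, \vec b, \vec c, \vec d$ for the vertices $A, B, C, D$; in a parallelogram $\overrightarrow{AB}=\overrightarrow{DC}$):
$$\vec b - \vec a = \vec c - \vec d \;\Longrightarrow\; \vec a + \vec c = \vec b + \vec d \;\Longrightarrow\; \tfrac{1}{2}(\vec a + \vec c) = \tfrac{1}{2}(\vec b + \vec d),$$
so the midpoint of $AC$ coincides with the midpoint of $BD$, and each diagonal bisects the other.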
|
proofpile-shard-0030-20 | {
"provenance": "003.jsonl.gz:21"
} | 1. ## Statistics proof
Given a random variable $X$ with mean value $\mu_X$, variance $\sigma ^2$, and with $\mu _{X^2}$ denoting the mean value of $X^2$, prove that $\sigma ^2=\mu _{X^2} - \mu _X^2$.
Letting X be the values $x_1,x_2,...,x_n$ and $P(X = x_i) =p_i$, $(i=1,2,...,n)$
$\therefore \sigma ^2 = \sum_{i=1}^n (x_i-\mu _x)^2 \cdot p_i$
$= \sum_{i=1}^n x_i^2 \cdot p_i - 2\mu _X \cdot \bigg (\sum_{i=1}^n x_i \cdot p_i \bigg ) + \mu _X^2 \bigg ( \sum_{i=1}^n p_i \bigg )$ I'm pretty lost how to get this step and the rest.
$=\mu_{X^2}-2\mu _X \cdot \mu _X + \mu _X^2$
$=\mu _{X^2} - \mu _X^2$
I should have taken a picture of the book. That was a ton of code to write, even though it doesn't look like much at all.
2. Originally Posted by chengbin
Given a random variable $X$ with mean value $\mu_X$, variance $\sigma ^2$, and with $\mu _{X^2}$ denoting the mean value of $X^2$, prove that $\sigma ^2=\mu _{X^2} - \mu _X^2$.
Letting X be the values $x_1,x_2,...,x_n$ and $P(X = x_i) =p_i$, $(i=1,2,...,n)$
$\therefore \sigma ^2 = \sum_{i=1}^n (x_i-\mu _x)^2 \cdot p_i$
$= \sum_{i=1}^n x_i^2 \cdot p_i - 2\mu _X \cdot \bigg (\sum_{i=1}^n x_i \cdot p_i \bigg ) + \mu _X^2 \bigg ( \sum_{i=1}^n p_i \bigg )$ I'm pretty lost how to get this step Mr F says: Just expand!
and the rest. Mr F says: By definition: ${\color{red}\sum_{i=1}^n p_i = 1}$, ${\color{red}\sum_{i=1}^n x_i p_i = \mu_X}$ and ${\color{red}\sum_{i=1}^n x^2_i p_i = \mu_{X^2}}$.
$=\mu_{X^2}-2\mu _X \cdot \mu _X + \mu _X^2$
$=\mu _{X^2} - \mu _X^2$
I should have taken a picture of the book. That was a ton of code to write, even though it doesn't look like much at all.
.. |
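A quick numerical check of the identity (my addition, not part of the original thread; the values and probabilities are an arbitrary made-up discrete distribution):

```python
# Numerical check of Var(X) = E[X^2] - (E[X])^2 for a discrete distribution.
xs = [1, 2, 5, 7]            # values x_i (arbitrary example)
ps = [0.1, 0.4, 0.3, 0.2]    # probabilities p_i, summing to 1

mu_x  = sum(x * p for x, p in zip(xs, ps))                 # E[X]
mu_x2 = sum(x * x * p for x, p in zip(xs, ps))             # E[X^2]
var   = sum((x - mu_x) ** 2 * p for x, p in zip(xs, ps))   # variance from its definition

print(var, mu_x2 - mu_x ** 2)   # the two numbers agree (up to float rounding)
```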
proofpile-shard-0030-21 | {
"provenance": "003.jsonl.gz:22"
} | class resistics.time.reader_ats.TimeReaderATS(dataPath: str)[source]
Data reader for ATS formatted data
For ATS files, header information is XML formatted. The end time in ATS header files is actually one sample past the time of the last sample. The dataReader handles this and gives an end time corresponding to the actual time of the last sample.
Notes
The raw data units for ATS data are in counts. To get data in field units, ATS data is first multiplied by the least significant bit (lsb) defined in the header files,
data = data * leastSignificantBit,
giving data in mV. The lsb includes the gain removal, so no separate gain removal needs to be performed.
For electrical channels, there is an additional step of dividing by the electrode spacing, which is provided in metres. The extra factor of 1000 is to convert this to km to give mV/km for electric channels
data = (1000 * data)/electrodeSpacing
Finally, to get magnetic channels in nT, the magnetic channels need to be calibrated.
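A minimal sketch of the scaling described above (my addition, not the actual resistics implementation; `lsb` and `electrode_spacing_m` are assumed to come from the header):

```python
import numpy as np

def counts_to_field_units(data_counts, lsb, electrode_spacing_m=None):
    """Scale raw ATS counts to field units.

    data_counts: raw samples in counts
    lsb: least significant bit from the header (includes gain removal), giving mV
    electrode_spacing_m: dipole length in metres for electric channels; None for magnetic
    """
    data_mV = np.asarray(data_counts, dtype=float) * lsb
    if electrode_spacing_m is None:
        return data_mV                                   # magnetic channel: still needs calibration to nT
    return 1000.0 * data_mV / electrode_spacing_m        # electric channel in mV/km
```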
Methods
setParameters(): Set data format parameters
dataHeaders(): Headers to read in
readHeaders(): Specific function for reading the headers for internal format
lineToKeyAndValue(line): Separate a line into key and value with = as a delimiter
dataHeaders(self)[source]
Return the data headers in the internal file format
Returns
Common headers with information about the recording
readHeader(self)[source]
setParameters(self) → None[source] |
proofpile-shard-0030-22 | {
"provenance": "003.jsonl.gz:23"
} | Cours de Jean-Pierre Serre, no. 1 (1981) , 204 p.
### Sommaire
History, p. 1
Siegel formula, p. 3
Tamagawa, p. 8
I - Integration on $p$-adic manifolds, p. 9
Notation, p. 9
Measure attached to a form $\omega$, p. 10
Rational summation, p. 15
Igusa, p. 16
Vector bundles, p. 17
Smooth schemes, p. 18
Number of points of classical and exceptional groups, p. 20
Decomposition of a measure, p. 24
Connection with densities, p. 28
Digression : Liftable solutions, p. 31
Smooth case, p. 32
Real case, p. 37
Use of resolution of singularities, p. 40
Oesterlé : hypersurfaces, p. 41
Łojasiewicz inequality, p. 48
Oesterlé : general case, p. 49
II - Adeles, p. 55
History, p. 55
Definition, p. 55
$\mathbb{A}_K/K$ compact, p. 58
Haar measure on $\mathbb{A}_K$, p. 61
Characters and duality, p. 64
Haar measure for dual groups, p. 66
Compatible measures with respect to duality, p. 67
Adelic integration and heuristic formulas (Goldbach,...), p. 69
Adelic points of algebraic varieties, p. 75
Properties of the functor $V\mapsto V(\mathbb{A}_K)$, p. 78
Restriction of scalars, p. 84
Algebraic groups and adelic points, p. 88
Abelian varieties (S. Bloch), p. 88
Weak approximation, p. 90
Strong approximation, p. 91
Adeles, classes and genera, p. 99
Tensors, p. 102
Vector bundles, p. 104
Adelic measures, p. 106
Convergent case, p. 108
Algebraic groups, p. 111
Tori, p. 114
Convergence, p. 117
Tamagawa number (semi-simple case), p. 122
Theorem of Ono – Weil conjecture, p. 123
Tamagawa number (reductive case), p. 126
$\tau(\mathbb{G}_m)=1$, p. 131
Tori (Ono), p. 133
$\tau(PGL_n)=n \tau(SL_n)$, p. 135
Tamagawa $\leftrightarrow$ Siegel (mass formula), p. 136
Tamagawa $\leftrightarrow$ Siegel (the two-groups game), p. 145
Correction to Ono's theory, p. 146
Positive definite quadratic forms over $\mathbb{Q}$, p. 147
Proof that $\tau(SO_n)=2$ for $n=2,3,4$, p. 152
Proof of Siegel's formula, p. 156
Proof ($m\geq 5$), p. 163
Proof ($m=4$), p. 165
Proof ($m=2$), p. 166
Proof ($m=3$), p. 169
Remarks on Siegel's proof, p. 172
Application to modular forms, p. 176
III - $SL_n$, p. 180
The Minkowski-Hlawka theorem, p. 180
Proof, p. 184
@book{CJPS_1981__1_,
author = {Serre, Jean-Pierre},
title = {Adeles and {Tamagawa} numbers},
series = {Cours de Jean-Pierre Serre},
publisher = {Harvard},
number = {1},
year = {1981},
language = {en},
url = {http://www.numdam.org/item/CJPS_1981__1_/}
}
TY - BOOK
AU - Serre, Jean-Pierre
TI - Adeles and Tamagawa numbers
T3 - Cours de Jean-Pierre Serre
PY - 1981
DA - 1981///
IS - 1
PB - Harvard
UR - http://www.numdam.org/item/CJPS_1981__1_/
LA - en
ID - CJPS_1981__1_
ER -
Serre, Jean-Pierre. Adeles and Tamagawa numbers. Cours de Jean-Pierre Serre, no. 1 (1981), 204 p. http://numdam.org/item/CJPS_1981__1_/ |
proofpile-shard-0030-23 | {
"provenance": "003.jsonl.gz:24"
} | # Fun with statistics – transformations of random variables part 2
I recently posted on how to find the distribution of functions of random variables, i.e., the distribution of $Y=g(X)$, where $X$ is a random variable with known distribution and $y=g(x)$ is some function.
As a natural extension of this concept we may ask ourselves what happens if we have two random variables involved. Let us start with one function of two random variables, i.e., given $X$ and $Y$ and knowing their joint PDF $f_{X,Y}(x,y)$ or their joint CDF $F_{X,Y}(x,y) = {\rm Pr}[X \leq x, Y \leq y]$ we would like to calculate the distribution of $Z = g(X,Y)$ where $z=g(x,y)$ is a function with two arguments, e.g., $z=x+y$.
Again, there are multiple ways of addressing this problem. A natural way would be to calculate the CDF of $Z$ directly, i.e., $F_Z(z) = {\rm Pr}[Z \leq z] = {\rm Pr}[g(X,Y) \leq z]$. In other words, we need to compute the probability of the event that relates to all realization of $X$ and $Y$ which satisfy $g(X,Y) \leq z$. This is easily done by integrating the joint PDF $f_{X,Y}(x,y)$ over all points in the set ${\mathcal D}_z$ which contains all points $(x,y)$ for which $g(x,y) \leq z$. Written out, we have
$$F_Z(z) = {\rm Pr}[Z \leq z] = {\rm Pr}[g(X,Y) \leq z] = \iint_{{\mathcal D}_z} f_{X,Y}(x,y) {\rm d}x {\rm d} y$$
Whether or not this approach is easy to follow depends on two things: (1) how easy it is to parametrize the set ${\mathcal D}_z$ and (2) how easy it is to integrate the joint PDF over ${\mathcal D}_z$.
Let us make an example considering the simple function $g(x,y) = x+y$. Then ${\mathcal D}_z$ contains all points $(x,y)$ for which $x+y \leq z$, i.e., $y\leq z-x$ or $x \leq z-y$. Geometrically, this is the set of points that are on the lower-left of a line with slope -1 and offset $z$, i.e., a line passing through $(z,0)$ and $(0,z)$. The integral over this set is relatively simple, as we can directly write it as
$$\displaystyle F_Z(z) = \int_{-\infty}^{+\infty} \int_{-\infty}^{z-y} f_{X,Y}(x,y) {\rm d}x {\rm d} y = \int_{-\infty}^{+\infty} \int_{-\infty}^{z-x} f_{X,Y}(x,y) {\rm d}y {\rm d} x$$.
Another example is $g(x,y) = \max(x,y)$. Since $\max(x,y) \leq z \Leftrightarrow ((x \leq z) \;\mbox{and} \; (y \leq z))$ we can argue
$$F_Z(z) = {\rm Pr}[\max(X,Y) \leq z] = \int_{-\infty}^z \int_{-\infty}^z f_{X,Y}(x,y) {\rm d}x {\rm d} y$$.
Geometrically, ${\mathcal D}_z$ contains all points on the “lower left” of the point $(z,z)$, i.e., the intersection of the half-planes below $y=z$ and left of $x=z$.
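For independent $X$ and $Y$ the double integral factorises into $F_X(z)\,F_Y(z)$, which is easy to verify by simulation. A small sketch of such a check (my addition, assuming independent standard normal variables):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
y = rng.standard_normal(200_000)

z = 0.7                                 # an arbitrary threshold
mc = np.mean(np.maximum(x, y) <= z)     # Monte Carlo estimate of F_Z(z)
analytic = norm.cdf(z) ** 2             # F_X(z) * F_Y(z) for independent X, Y

print(mc, analytic)                     # the two values closely agree
```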
The second extension is to consider two functions of two random variables. Say we are given the distribution of $X$ and $Y$ via their joint PDF, we would like to find the joint PDF of $Z=g(X,Y)$ and $W=h(X,Y)$. There is a closed-form expresion for it as a direct extension of the closed-form expression for the PDF of one function of one random variable. It reads as
$$f_{Z,W}(z,w) = \sum_{i=1}^N \frac{1}{|\det \mathbf{J}(x_i,y_i)|} f_{X,Y}(x_i,y_i)$$,
where $(x_i,y_i)$ are all solutions to the system of equations $z=g(x,y)$, $w=h(x,y)$ in $x$ and $y$. Here, $\mathbf{J}$ is the Jacobian matrix given by
$$\mathbf{J} = \left[ \begin{array}{cc} \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \\ \frac{\partial h}{\partial x} & \frac{\partial h}{\partial y} \end{array}\right]$$.
Moreover, the term $\det \mathbf{J}(x_i,y_i)$ means that we first compute the determinant of the Jacobian matrix (in terms of $x$ and $y$) and then insert $x_i(z,w)$ and $y_i(z,w)$.
Example? How about the joint distribution of $X+Y$ and $X-Y$? In this case, solving for $z=x+y$ and $w=x-y$ for $x$ and $y$ is simple, we have one solution given by $x_1 = (z+w)/2$ and $y_1 = (z-w)/2$. The Jacobian matrix is given by
$$\mathbf{J} = \left[ \begin{array}{cc} 1& 1 \\ 1 & -1 \end{array}\right]$$
and hence its determinant is $-2$ everywhere. This gives the solution for $f_{Z,W}(z,w)$ in the form
$f_{Z,W}(z,w) = \frac{1}{2} f_{X,Y}((z+w)/2,(z-w)/2)$.
As in the 1-D case, this direct solution depends heavily on our ability to solve the given functions for $x$ and $y$, which may be tedious for complicated functions.
Interestingly, the first case where we considered one function of one random variable can be solved also via this approach, simply by creating another “auxiliary” variable, and then marginalizing over it. So once we have $Z=g(X,Y)$ we make up another $W=h(X,Y)$, choosing it such that the remaining calculations are simple. For instance, for $g(x,y) = x+y$ we may choose $h(x,y) = y$. Then, the Jacobian matrix becomes
$$\mathbf{J} = \left[ \begin{array}{cc} 1& 1 \\ 0 & 1 \end{array}\right]$$
with determinant one. Moreover, we have $x_1 = z-w$ and $y_1 = w$. Therefore, we get
$$f_{Z,W}(z,w) = f_{X,Y}(z-w,w)$$.
The final step is marginalizing out the auxiliary $W$ which gives
$$f_Z(z) = \int_{-\infty}^{+\infty} f_{X,Y}(z-w,w) {\rm d}w.$$
Looks much like a convolution integral, doesn’t it? In fact, if $X$ and $Y$ are statistically independent, we can write $f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)$ and hence we obtain
$$f_Z(z) = \int_{-\infty}^{+\infty} f_{X}(z-w)\cdot f_Y(w) \,{\rm d}w = (f_X * f_Y)(z),$$
where $*$ denotes convolution. This shows very easily that the PDF of the sum of two random variables is the convolution of their PDFs, if they are statistically independent. |
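As a concrete illustration of the convolution result (my addition): for two independent Uniform(0,1) variables the convolution integral gives the triangular density $f_Z(z)=z$ on $[0,1]$ and $2-z$ on $[1,2]$, and a histogram of simulated sums reproduces it:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.uniform(0, 1, 500_000) + rng.uniform(0, 1, 500_000)    # samples of Z = X + Y

hist, edges = np.histogram(s, bins=40, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
triangle = np.where(centers <= 1, centers, 2 - centers)        # convolution of the two uniform PDFs

print(np.max(np.abs(hist - triangle)))   # ~0.01: the histogram matches the convolution
```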
proofpile-shard-0030-24 | {
"provenance": "003.jsonl.gz:25"
} | # cdlib.algorithms.r_spectral_clustering¶
r_spectral_clustering(g_original: object, n_clusters: int = 2, method: str = 'vanilla', percentile: int = None) → cdlib.classes.node_clustering.NodeClustering
Spectral clustering partitions the nodes of a graph into groups based upon the eigenvectors of the graph Laplacian. Despite the claims of spectral clustering being “popular”, in applied research using graph data, spectral clustering (without regularization) often returns a partition of the nodes that is uninteresting, typically finding a large cluster that contains most of the data and many smaller clusters, each with only a few nodes. This method allows computing spectral clustering with or without different regularization functions designed to address such a limitation.
Supported Graph Types
- Undirected: Yes
- Directed: No
- Weighted: No
Parameters:
- g_original – a networkx/igraph object
- n_clusters – How many clusters to look at
- method – one among “vanilla”, “regularized”, “regularized_with_kmeans”, “sklearn_spectral_embedding”, “sklearn_kmeans”, “percentile”.
- percentile – percentile of the degree distribution to perform regularization. Value in [0, 100]. Mandatory if method=”percentile” or “regularized”, otherwise None
Returns: NodeClustering object
>>> from cdlib import algorithms
>>> import networkx as nx
>>> G = nx.karate_club_graph()
>>> coms = algorithms.r_spectral_clustering(G, n_clusters=2, method="regularized", percentile=20)
Zhang, Yilin, and Karl Rohe. “Understanding Regularized Spectral Clustering via Graph Conductance.” arXiv preprint arXiv:1806.01468 (2018). |
proofpile-shard-0030-25 | {
"provenance": "003.jsonl.gz:26"
} | # Prove $\varphi(x)$ to be primitive recursive
Let $\varphi(x)=2x$ if $x$ is a perfect square, $\varphi(x) = 2x+1$ otherwise. Show $\varphi$ is primitive recursive.
In proving $\varphi$ to be a p.r. function, I think the following theorem could come in handy:
Let $\mathcal C$ be a PRC class. Let the functions $g$, $h$ and the predicate $P$ belong to $\mathcal C$, let
$$f(x_1,\ldots, x_n) = \begin{cases} g(x_1, \ldots, x_n) \;\;\;\;\;\text{ if } P(x_1, \ldots, x_n)\\ h(x_1,\ldots,x_n) \;\;\;\;\;\text{ otherwise} \end{cases}$$ Then $f$ belongs to $\mathcal C$ because $$f(x_1, \ldots, x_n) = g(x_1, \ldots, x_n) \cdot P(x_1, \ldots, x_n) + h(x_1, \ldots, x_n) \cdot \alpha(P(x_1, \ldots, x_n))$$ where
$$\alpha(x) = \begin{cases} 1 \;\;\;\;\;\text{ if } x = 0\\ 0 \;\;\;\;\;\text{ if } x \neq 0 \end{cases}$$
and $\alpha(x)$ is p.r.
So similarly I would say that $\varphi(x)$ is p.r. as
$$\varphi(x) = \begin{cases} 2x \;\;\;\;\;\;\;\;\;\;\;\text{ if } x \text{ is a perfect square} \\ 2x+1 \;\;\;\;\;\text{ otherwise} \end{cases}$$ hence $$\varphi(x) = 2x \cdot P(x) + (2x+1) \cdot \alpha(P(x)),$$ where $P(x)$ is the predicate "$x$ is a perfect square". This predicate is primitive recursive because it can be written as the bounded existential quantification $(\exists t)_{\le x}\,(t \cdot t = x)$: the predicate $t \cdot t = x$ is p.r. (products and equality tests are p.r.), and bounded quantification preserves primitive recursiveness.
Does everything hold? Is there anything wrong? If so, since I am tackling this kind of exercise for the first time, will you please tell me what's the proper way to solve this?
You need to show that you can write an "if,then,else"-function, i.e. $f(x,y,z) = y$ if $x$ is true and $z$ otherwise. Then you need to show that you can test whether or not $x$ is a perfect square. This you can do by iterating $i$ from 1 to $x$ testing if $i^2 = x$. – Pål GD Jan 17 '13 at 18:54
To see if $x$ is a perfect square is easy (for example, by adding 1 + 3 + 5 ... you get the successive squares); once that is settled your problem is solved. |
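The bounded-search idea in these answers translates directly into ordinary code; a small illustrative sketch (my addition, plain Python rather than a formal primitive-recursive construction):

```python
def is_square(x):
    # bounded search, mirroring the bounded quantifier (∃t)≤x (t·t = x)
    return any(t * t == x for t in range(x + 1))

def phi(x):
    return 2 * x if is_square(x) else 2 * x + 1

print([phi(x) for x in range(10)])   # [0, 2, 5, 7, 8, 11, 13, 15, 17, 18]
```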
proofpile-shard-0030-26 | {
"provenance": "003.jsonl.gz:27"
} | Dolphins of the open ocean are classified as Type II Odontocetes (toothed whales). These animals use ultrasonic "clicks" with a frequency of about 54.8 kHz to navigate and find prey.
(a) Suppose a dolphin sends out a series of high-pitched clicks that are reflected back from the bottom of the ocean 61 m below. How much time elapses before the dolphin hears the echoes of the clicks? (The speed of sound in seawater is approximately 1530 m/s.)
The clicks travel a round trip of 2 × 61 m = 122 m, so the elapsed time is t = (122 m)/(1530 m/s) ≈ 0.080 s.
(b) What is the wavelength of 54.8 kHz sound in the ocean?
λ = v/f = (1530 m/s)/(54.8 × 10³ Hz) ≈ 0.0279 m ≈ 28 mm |
proofpile-shard-0030-27 | {
"provenance": "003.jsonl.gz:28"
} | Math Help - general solution to differential equation
1. general solution to differential equation
For our practice final one of the questions is
Find the general solution to the following differential equation: y'' - 4y' + 4y = x^2 - 3x + 2
I understand that one would add the particular solution and general solution. For the particular solution I got (1/4)x^2 - (1/4)x + 1/8 and for the general solution I got Ae^(2x) + Be^(2x). However he gave us the solution and it was Ae^(-2x) + Bxe^(-2x) + (1/4)x^2 - (1/4)x + 1/8. Does anyone see where I went wrong? I can show more work if that's helpful.
2. Re: general solution to differential equation
Originally Posted by JML2618
For our practice final one of the questions is
Find the general solution to the following differential equation: y'' - 4y' + 4y = x^2 - 3x + 2
I understand that one would add the particular solution and general solution. For the particular solution I got (1/4)x^2 - (1/4)x + 1/8 and for the general solution I got Ae^(2x) + Be^(2x). However he gave us the solution and it was Ae^(-2x) + Bxe^(-2x) + (1/4)x^2 - (1/4)x + 1/8. Does anyone see where I went wrong? I can show more work if that's helpful.
Your mistake was in not dealing with the fact that your characteristic polynomial has a single root of order 2.
That makes your homogeneous solution of the form
$y[x]=c_1e^{2x}+c_2xe^{2x}$ (note the factor x in the 2nd term)
go use this form for your homogeneous solution and resolve your particular solution.
3. Re: general solution to differential equation
oh okay i understand now thanks for the reply! Do you use this whenever it's in the form y''+y'+y and ce^ax +ce^ax when it's y''+y'?
4. Re: general solution to differential equation
or is it that the roots are equal?
5. Re: general solution to differential equation
Originally Posted by JML2618
or is it that the roots are equal?
Pauls Online Notes : Differential Equations - Repeated Roots
6. Re: general solution to differential equation
Notice that $Ae^{2x}+ Be^{2x}= Ce^{2x}$ with C= A+ B. Those are not two independent solutions and cannot give the general solution to a second order equation. |
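A quick check of the repeated-root form of the general solution (my addition, using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), x**2 - 3*x + 2)
sol = sp.dsolve(ode, y(x))
print(sol)   # y(x) = (C1 + C2*x)*exp(2*x) + x**2/4 - x/4 + 1/8, up to formatting
```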
proofpile-shard-0030-28 | {
"provenance": "003.jsonl.gz:29"
} | ## Intermediate Algebra (6th Edition)
Published by Pearson
# Chapter 11 - Section 11.2 - Arithmetic and Geometric Sequences - Exercise Set: 14
#### Answer
$-5120$
#### Work Step by Step
To find the $n$th term of a geometric sequence, we use the formula $a_n = a_1 \cdot r^{n-1}$ where $a_1$ is the first term and $r$ is the common ratio. $a_6 = 5\cdot(-4)^{6-1} = 5\cdot(-4)^5 = 5\cdot(-1024) = -5120$
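The same computation in a couple of lines of Python, for anyone who wants to check it (my addition):

```python
a1, r = 5, -4
a6 = a1 * r ** (6 - 1)
print(a6)   # -5120
```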
|
proofpile-shard-0030-29 | {
"provenance": "003.jsonl.gz:30"
} | # Principle of superposition in uniform compression of bar
With the following quantities defined as follows:
Normal stress along x, $\sigma_x = \frac{F_{nx}}{S}$
Strain along x, $\epsilon_x= \frac{\Delta L_x}{L_x}$
and Poisson's Law: $\epsilon_y=\epsilon_z=-\nu \epsilon_x= -\nu\frac{\sigma_x}{E}$, as well as Hooke's Law: $\epsilon_x=\frac{1}{E}\sigma_x$,
with $F_{nx}, L_x,S, E, \nu$ being the normal force applied, length along x, area, Young's modulus and Poisson's coefficient respectively, we are trying to find the change of volume of a uniformly compressed parallelepiped $(\sigma_x=\sigma_y=\sigma_z=\sigma=-p)$.
I do not understand the passage: $\epsilon_x=\frac{\Delta L_x}{L_x}=\frac{\sigma_x}{E}-\frac{\nu}{E}(\sigma_y+\sigma_z)$ which is supposedly validated by the superposition principle. This already makes little sense to me because of Hooke's law implying $\frac{\nu}{E}(\sigma_y+\sigma_z)=0$. In this context, I interpret the superposition principle as the sum of the inputs(stresses) equalling the sum of the outputs (extensions). What's happening here? As you might guess, this is a new topic for me.
• I think I finally understand what you are asking. See the ADDENDUM to my answer below. Aug 30, 2017 at 12:27
The general equations for the three strains are: $$\epsilon_x=\frac{\sigma_x-\nu(\sigma_y+\sigma_z)}{E}$$ $$\epsilon_y=\frac{\sigma_y-\nu(\sigma_x+\sigma_z)}{E}$$ $$\epsilon_z=\frac{\sigma_z-\nu(\sigma_x+\sigma_y)}{E}$$ For the case of uniaxial loading $\sigma_x=\sigma$ in the x-direction, while the stresses in the y and z directions are zero ($\sigma_y=\sigma_z=0$),we get from the above three equations:$$\epsilon_x=\frac{\sigma}{E}$$ $$\epsilon_y=-\nu\frac{\sigma}{E}=-\nu\epsilon_x$$ $$\epsilon_z=-\nu\frac{\sigma}{E}=-\nu\epsilon_x$$
Now let's consider a different kind of loading where, instead of the load being only in the x direction, there are also equal stresses in the y and z directions, such that $$\sigma_x=\sigma_y=\sigma_z=\sigma$$If we substitute these into the three general equations for the strains in terms of the stresses, we obtain:$$\epsilon_x=\epsilon_y=\epsilon_z=(1-2\nu)\frac{\sigma}{E}$$This equation can also be obtained by starting with a uniaxial stress in the x-direction, then superimposing an equal stress in the y-direction, and then superimposing an equal stress in the z direction.
If $\sigma=-p$, the linear strains in the three directions are $$\epsilon=-(1-2\nu)\frac{p}{E}$$The volumetric strain $\epsilon_v$ is three times the linear strain, so,$$\epsilon_v=-3(1-2\nu)\frac{p}{E}$$ Since the original volume is $L_xL_yL_z$, the change in volume is $$\Delta V=-3(1-2\nu)\frac{p}{E}L_xL_yL_z$$ ADDENDUM
Here's another way of looking at it. Suppose you start out with a uniaxial stress of $\sigma_x$ on the body. Then, the strains in the three directions are $$\epsilon_x=\frac{\sigma_x}{E}$$ $$\epsilon_y=-\nu\epsilon_x=-\nu\frac{\sigma_x}{E}$$and$$\epsilon_z=-\nu\epsilon_x=-\nu\frac{\sigma_x}{E}$$ If, instead, you start out with a unixial stress of $\sigma_y$ on the body, then the strains in the three directions are $$\epsilon_y=\frac{\sigma_y}{E}$$ $$\epsilon_x=-\nu\epsilon_y=-\nu\frac{\sigma_y}{E}$$and$$\epsilon_z=-\nu\epsilon_y=-\nu\frac{\sigma_y}{E}$$
If, instead, you start out with a unixial stress of $\sigma_z$ on the body, then the strains in the three directions are $$\epsilon_z=\frac{\sigma_z}{E}$$ $$\epsilon_x=-\nu\epsilon_z=-\nu\frac{\sigma_z}{E}$$and$$\epsilon_y=-\nu\epsilon_z=-\nu\frac{\sigma_z}{E}$$
If, instead, you impose all three of these stresses simultaneously on the body, the strains you get are obtained by linearly superimposing (i.e., adding together) the three strains from the uniaxial stress cases: $$\epsilon_x=\frac{\sigma_x-\nu(\sigma_y+\sigma_z)}{E}$$ $$\epsilon_y=\frac{\sigma_y-\nu(\sigma_x+\sigma_z)}{E}$$ $$\epsilon_z=\frac{\sigma_z-\nu(\sigma_x+\sigma_y)}{E}$$
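A small numerical sanity check of the last step above (my addition; the material constants are made-up illustrative values): for small strains the exact relative volume change $(1+\epsilon)^3-1$ is well approximated by $3\epsilon=-3(1-2\nu)p/E$.

```python
E, nu, p = 200e9, 0.3, 50e6        # illustrative values (Pa): steel-like E and nu, 50 MPa pressure

eps = -(1 - 2 * nu) * p / E        # linear strain in each direction
exact = (1 + eps) ** 3 - 1         # exact relative volume change
linear = 3 * eps                   # first-order approximation used above

print(eps, exact, linear)          # eps = -1e-4; exact and linear agree to ~1e-8
```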
• Yes, the addendum clarified everything! Aug 30, 2017 at 19:17 |
proofpile-shard-0030-30 | {
"provenance": "003.jsonl.gz:31"
} | ## Reblog: Calculus Tidbits
[Feature photo above by Olga Lednichenko via Flickr (CC BY 2.0).]
This week I have a series of quotes about calculus from my first two years of blogging. The posts were so short that I won’t bother to link you back to them, but math humor keeps well over the years, and W. W. Sawyer is (as always) insightful.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
## Finding the Limit
Eldest daughter had her first calculus lesson last night: finding the limit as delta-t approached zero. The teacher found the speed of a car at a given point by using the distance function, calculating the average speed over shorter and shorter time intervals. Dd summarized the lesson for me:
“If you want to divide by zero, you have to sneak up on it from behind.”
## Harmonic Series Quotation
This kicked off my week with a laugh:
Today I said to the calculus students, “I know, you’re looking at this series and you don’t see what I’m warning you about. You look and it and you think, ‘I trust this series. I would take candy from this series. I would get in a car with this series.’ But I’m going to warn you, this series is out to get you. Always remember: The harmonic series diverges. Never forget it.”
—Rudbeckia Hirta
Learning Curves Blog: The Harmonic Series
quoting Alexandre Borovik
## So You Think You Know Calculus?
Rudbeckia Hirta has a great idea for a new TV blockbuster:
## Common Sense and Calculus
And here’s a quick quote from W. W. Sawyer’s Mathematician’s Delight:
If you cannot see what the exact speed is, begin to ask questions. Silly ones are the best to begin with. Is the speed a million miles an hour? Or one inch a century? Somewhere between these limits. Good. We now know something about the speed. Begin to bring the limits in, and see how close together they can be brought.
Study your own methods of thought. How do you know that the speed is less than a million miles an hour? What method, in fact, are you unconsciously using to estimate speed? Can this method be applied to get closer estimates?
You know what speed is. You would not believe a man who claimed to walk at 5 miles an hour, but took 3 hours to walk 6 miles. You have only to apply the same common sense to stones rolling down hillsides, and the calculus is at your command.
## Reblog: Patty Paper Trisection
[Feature photo above by Michael Cory via Flickr (CC BY 2.0).]
I hear so many people say they hated geometry because of the proofs, but I’ve always loved a challenging puzzle. I found the following puzzle at a blog carnival during my first year of blogging. Don’t worry about the arbitrary two-column format you learned in high school — just think about what is true and how you know it must be so.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
One of the great unsolved problems of antiquity was to trisect any angle using only the basic tools of Euclidean geometry: an unmarked straight-edge and a compass. Like the alchemist’s dream of turning lead into gold, this proved to be an impossible task. If you want to trisect an angle, you have to “cheat.” A straight-edge and compass can’t do it. You have to use some sort of crutch, just as an alchemist would have to use a particle accelerator or something.
One “cheat” that works is to fold your paper. I will show you how it works, and your job is to show why …
## Reblog: Solving Complex Story Problems
[Dragon photo above by monkeywingand treasure chest by Tom Praison via flickr.]
Over the years, some of my favorite blog posts have been the Word Problems from Literature, where I make up a story problem set in the world of one of our family’s favorite books and then show how to solve it with bar model diagrams. The following was my first bar diagram post, and I spent an inordinate amount of time trying to decide whether “one fourth was” or “one fourth were.” I’m still not sure I chose right.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
Cimorene spent an afternoon cleaning and organizing the dragon’s treasure. One fourth of the items she sorted was jewelry. 60% of the remainder were potions, and the rest were magic swords. If there were 48 magic swords, how many pieces of treasure did she sort in all?
[Problem set in the world of Patricia Wrede’s Enchanted Forest Chronicles. Modified from a story problem in Singapore Primary Math 6B. Think about how you would solve it before reading further.]
How can we teach our students to solve complex, multi-step story problems? Depending on how one counts, the above problem would take four or five steps to solve, and it is relatively easy for a Singapore math word problem. One might approach it with algebra, writing an equation like:
$x - \left[\frac{1}{4}x + 0.6\left(\frac{3}{4} \right)x \right] = 48$
… or something of that sort. But this problem is for students who have not learned algebra yet. Instead, Singapore math teaches students to draw pictures (called bar models or math models or bar diagrams) that make the solution appear almost like magic. It is a trick well worth learning, no matter what math program you use …
## Reblog: Putting Bill Gates in Proportion
[Feature photo above by Baluart.net.]
Seven years ago, one of my math club students was preparing for a speech contest. His mother emailed me to check some figures, which led to a couple of blog posts on solving proportion problems.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
## Putting Bill Gates in Proportion
A friend gave me permission to turn our email discussion into an article…
Can you help us figure out how to figure out this problem? I think we have all the information we need, but I’m not sure:
The average household income in the United States is $60,000/year. And a man's annual income is $56 billion. Is there a way to figure out what this man's value of $1mil is, compared to the person who earns $60,000/year? In other words, I would like to say — $1,000,000 to us is like 10 cents to Bill Gates.

### Let the Reader Beware

When I looked up Bill Gates at Wikipedia, I found out that $56 billion is his net worth, not his income. His salary is $966,667. Even assuming he has significant investment income, as he surely does, that is still a difference of several orders of magnitude. But I didn't research the details before answering my email — and besides, it is a lot more fun to play with the really big numbers. Therefore, the following discussion will assume my friend's data are accurate…

[Click here to go read Putting Bill Gates in Proportion.]

## Bill Gates Proportions II

Another look at the Bill Gates proportion…

Even though I couldn't find any data on his real income, I did discover that the median American family's net worth was $93,100 in 2004 (most of that is home equity) and that the figure has gone up a bit since then. This gives me another chance to play around with proportions.

So I wrote a sample problem for my Advanced Math Monsters workshop at the APACHE homeschool conference:
The median American family has a net worth of about $100 thousand. Bill Gates has a net worth of $56 billion. If Average Jane Homeschooler spends $100 in the vendor hall, what would be the equivalent expense for Gates?
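One way to set up the proportion with these numbers (my addition, not part of the original excerpt):

$$\frac{\text{Gates' equivalent expense}}{56{,}000{,}000{,}000} = \frac{100}{100{,}000}
\quad\Longrightarrow\quad
\text{Gates' equivalent expense} = 56{,}000{,}000{,}000 \cdot \frac{100}{100{,}000} = 56{,}000{,}000,$$

so Jane's $100 splurge corresponds to roughly a $56 million purchase for Gates.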
## Reblog: The Handshake Problem
[Feature photo above by Tobias Wolter (CC-BY-SA-3.0) via Wikimedia Commons.]
Seven years ago, our homeschool co-op held an end-of-semester assembly. Each class was supposed to demonstrate something they had learned. I threatened to hand out a ten question pop quiz on integer arithmetic, but instead my pre-algebra students voted to perform a skit.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
If seven people meet at a party, and each person shakes the hand of everyone else exactly once, how many handshakes are there in all?
In general, if n people meet and shake hands all around, how many handshakes will there be?
1-3 narrators
### Props
Each friend will need a sheet of paper with a number written on it big and bold enough to be read by the audience. The numbers needed are 0, 1, 2, 3, … up to one less than the number of friends. Each friend keeps his paper in a pocket until needed.
## Reblog: In Honor of the Standardized Testing Season
[Feature photo above by Alberto G. Photo right by Renato Ganoza. Both (CC-BY-SA-2.0) via flickr.]
Quotations and comments about the perils of standardized testing, now part of my book Let’s Play Math.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
The school experience makes a tremendous difference in a child’s learning. Which of the following students would you rather be?
I continued to do arithmetic with my father, passing proudly through fractions to decimals. I eventually arrived at the point where so many cows ate so much grass, and tanks filled with water in so many hours. I found it quite enthralling.
— Agatha Christie
An Autobiography
…or…
“Can you do Addition?” the White Queen asked. “What’s one and one and one and one and one and one and one and one and one and one?”
“I don’t know,” said Alice. “I lost count.”
“She can’t do Addition,” the Red Queen interrupted. “Can you do Subtraction? Take nine from eight.”
“Nine from eight I can’t, you know,” Alice replied very readily: “but—”
“She can’t do Subtraction,” said the White Queen. “Can you do Division? Divide a loaf by a knife — what’s the answer to that?”
## Reblog: The Case of the Mysterious Story Problem
[Feature photo above by Carla216 via flickr (CC BY 2.0).]
Seven years ago, I blogged a revision of the first article I ever wrote about homeschooling math. I can’t even remember when the original article was published — years before the original (out of print) editions of my math books.
I hope you enjoy this “Throw-back Thursday” blast from the Let’s Play Math! blog archives:
I love story problems. Like a detective, I enjoy sifting out clues and solving the mystery. But what do you do when you come across a real stumper? Acting out story problems could make a one-page assignment take all week.
You don’t have to bake a pie to study fractions or jump off a cliff to learn gravity. Use your imagination instead. The following suggestions will help you find the clues you need to solve the case… |
proofpile-shard-0030-31 | {
"provenance": "003.jsonl.gz:32"
} | # How do you simplify (x^2-x-6)/(4x^3)*(x+1)/(x^2+5x+5)?
Jul 18, 2015
Try factoring and find:
$\frac{{x}^{2} - x - 6}{4 {x}^{3}} \cdot \frac{x + 1}{{x}^{2} + 5 x + 5}$
$= \frac{\left(x - 3\right) \left(x + 2\right) \left(x + 1\right)}{4 {x}^{3} \left(x + \frac{5 + \sqrt{5}}{2}\right) \left(x + \frac{5 - \sqrt{5}}{2}\right)}$ (factoring)
$= \frac{{x}^{3} - 7 x - 6}{4 {x}^{5} + 20 {x}^{4} + 20 {x}^{3}}$ (multiplying)
#### Explanation:
Going in one direction, multiply up to get:
$\frac{{x}^{2} - x - 6}{4 {x}^{3}} \cdot \frac{x + 1}{{x}^{2} + 5 x + 5}$
$= \frac{\left({x}^{2} - x - 6\right) \left(x + 1\right)}{4 {x}^{3} \left({x}^{2} + 5 x + 5\right)}$
$= \frac{{x}^{3} - 7 x - 6}{4 {x}^{5} + 20 {x}^{4} + 20 {x}^{3}}$
Going in the other direction, factor to get:
$\frac{{x}^{2} - x - 6}{4 {x}^{3}} \cdot \frac{x + 1}{{x}^{2} + 5 x + 5}$
$\frac{\left(x - 3\right) \left(x + 2\right)}{4 {x}^{3}} \cdot \frac{x + 1}{{x}^{2} + 5 x + 5}$
$= \frac{\left(x - 3\right) \left(x + 2\right) \left(x + 1\right)}{4 {x}^{3} \left(x + \frac{5 + \sqrt{5}}{2}\right) \left(x + \frac{5 - \sqrt{5}}{2}\right)}$
No common factors to cancel, so this cannot be simplified. |
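A quick symbolic check that nothing cancels (my addition, using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 - x - 6) / (4*x**3) * (x + 1) / (x**2 + 5*x + 5)

print(sp.cancel(expr))   # (x**3 - 7*x - 6)/(4*x**5 + 20*x**4 + 20*x**3): no common factor removed
print(sp.factor(x**2 + 5*x + 5, extension=sp.sqrt(5)))   # roots (-5 ± sqrt(5))/2, so no shared linear factor
```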
proofpile-shard-0030-32 | {
"provenance": "003.jsonl.gz:33"
} | # Generic properties of $p$-groups
I have the impression that most people in algebra believe the following statement to be true, but I have no reference for it.
Fix a natural number $n$. Consider for each prime $p$ the set of all groups of order $p^n$. Then is the following true?
• There is a $p'$ such that for all $p\geq p'$ the number of isomorphism classes of groups of order $p^n$ depends only on $n$.
• We can write down generic presentations for all the groups of order $p^n$ with fixed $n$ and $p\geq p'$. That means we can give a presentation in which each word in the relation subgroup has the same form, depending only on the chosen prime $p$.
• Furthermore many group theoretic properties are shared for groups with the same generic presentation (but for different primes). Is it true that these groups have the same nilpotency degree? Is it true that the sizes of the conjugacy classes depend polynomially on $p$? Is it true that the number of cojugacy classes of a certain subgroup also depends polynomially on $p$? What can be said about the ($G$-)poset of subgroups?
I'd like to know what is already known, and maybe be given a reference.
-
I doubt the first bullet point because $n\equiv 1\pmod p$ allows a very "$p$-specific" construction of a semidirect product – Hagen von Eitzen Jan 30 '13 at 10:18
But there are only finitely many such $p$. Now choose $p'$ bigger than any of these $p$. – Curufin Jan 30 '13 at 10:36
Do you mean "depends only on $n$" in the second point? – Martin Brandenburg Jan 30 '13 at 10:48
No. I fix $n$. Generic means for me: I have a presentation where I just replace $p$ with a certain prime. $\langle a\mid a^{p^n}\rangle$ is the generic presentation of a cyclic group with $p^n$ elements. – Curufin Jan 30 '13 at 10:58
This is a duplicate of math.stackexchange.com/questions/263876. Your first statement is false for $n=5$. – Derek Holt Jan 30 '13 at 12:18
Your conjectures are sort of true, but the reality is much more complicated than you have phrased it.
## Summary
Counting $p$-groups for large $p$ (compared to the nilpotency class) is the same as counting certain finite dimensional (restricted) Lie algebras. Such counts are organized into a tree, with smaller groups being the parents of larger groups. Such counts involve linear algebra and an orbit calculation. The orbit calculation involves counting points in a variety. In all cases known before 2011 or so, the orbit calculations were “PORC”, polynomial on residue classes, so that while no single number, and no single polynomial work, there are finitely many polynomials that work, and the choice of which polynomial is based solely on the residue of $p$ mod some fixed number $n$. Rather than organize $p$-groups by order, it may be better to organize them by coclass: the number of times a child is much larger than the parent. In 2012, a parent group was found so that the orbit calculations involved in a non-PORC way counting the points on a variety. Further investigation found that the conjugacy classes of the group and its descendants were also not PORC.
However, to my mind the calculations for $p^5$ (which are PORC) are extremely similar to these, so to me the essence of the conjecture still holds, but its specific expression is now known to be flawed. I found Vaughan-Lee's recent papers to be a very readable introduction to these ideas, though I learned them from the books mentioned below.
## Lazard correspondence
Given a $p$-group $G$, define the subgroups $G_{n+1} = [G,G_n] (G_n)^p$ with $G_1 = G$, so that $G_2 = \Phi(G)$, and $G_n/G_{n+1}$ is an elementary abelian $p$-group centralized by $G$. This called the lower exponent $p$ series. We define a restricted $p$-Lie algebra on $L(G) = \oplus_n G_n/G_{n+1}$ with $[a_i,b_j]_L = v$ and $(a_i)^p_L = w$ where $$v_{\ell} = \begin{cases} [a_i,b_j]_G & \ell = i+j \\ 0 & \text{otherwise} \end{cases} \qquad w_{\ell} = \begin{cases} (a_i)^p_G & \ell = i + 1 \\ 0 & \text{otherwise} \end{cases}$$ In other words, the commutator and $p$th power map are the same in $L(G)$ as in $G$, except that we have to be careful which quotient group everything happens in. If $G_p = 1$, then one can recover the group $G$ from $L(G)$ using the Baker-Campbell-Hausdorff formula for the exponential, so that counting $G$s is the same as counting $L(G)$s.
In Higman's case, this correspondence is fairly clear: $L(G) = G/\Phi(G) \oplus \Phi(G)$ and so every element of $L(G)$ has the form $(\bar g, h)$ and $[ (\bar a, b), (\bar c, d) ] = ( \bar 1, [a,c] )$ and $(\bar a,b)^p = (\bar 1, a^p)$. Any basis of $L(G)$ is a minimal generating set of $G$ (after taking any arbitrary choice of pre-images), and the restricted Lie algebra relations give the relations of the group. Higman showed that if $G_3=1$, then enumerating these Lie algebras was PORC.
## $p$-group generation algorithm
To organize the calculation, we view $G/G_n$ as the parent of $G/G_{n+1}$. Given a parent $G/G_n$ that we've already constructed, we try to find all descendants $G/G_{n+1}$. This calculation is described in O'Brien's 1990 article. Again, the gist is just linear algebra and orbit calculations, so one tends to get PORC behavior.
These techniques were used to correct earlier calculations of $p^6$, and to enumerate the groups of order $p^7$. In all cases the answers turn out to be PORC. Each presentation depends on $p$, occasionally requiring elements to be chosen from the orbit space of a variety over some characteristic $p$-field. The properties of each presentation are (more or less by definition of the $p$-group generating algorithm) the same: in particular the nilpotency class and $p$-class is constant on these varieties, indeed the entire structure of $G/G_n$ is constant.
## Coclass
Organizing $p$-groups by their order is in many ways a bad idea. Philip Hall suggested using iso-clinism as a better equivalence relation than order, and this method was used in much of the earlier work. However, the coclass proved to be a very nice unifying method. In many cases there is a single (parameterized) expression for an infinite sequence of groups, whose properties (nilpotency class and conjugacy classes) are parameterized in a very simple way. The coclass of $G$ is $n-c$ where $|G|=p^n$ and $c$ is the nilpotency class. A group of coclass 0 is cyclic of order $p$, and groups of coclass 1 are called maximal class. For $p=2$, these are the dihedral, the semidihedral, and quaternion groups; each of these have nice parameterized expressions, and most of the time one has an easy time dealing with the entire family. The coclass conjectures give nice information on the groups in each family using a $p$-adic space group (a single group with a simple presentation whose finite quotients are the mainline groups in that family). The non-mainline groups are topic of current study. du Sautoy's zeta functions, and Eick's computational research group have made significant progress on these groups.
## Non-PORC behavior
Recently a group of order $p^9$ whose descendants of order $p^{10}$ are not PORC was discovered by du Sautoy and Vaughan-Lee (2012). In a followup (fairly expository) paper they also show the number of conjugacy classes and the size of the automorphism group are not PORC. In two followup expository papers Vaughan-Lee revisits and simplifies Higman's original PORC calculations.
## Bibliography
Books
• Dixon, J. D.; du Sautoy, M. P. F.; Mann, A.; Segal, D. Analytic pro-p groups. Cambridge University Press, Cambridge, 1999. xviii+368 pp. ISBN: 0-521-65011-9 MR1720368 DOI:10.1017/CBO9780511470882
• Leedham-Green, C. R.; McKay, S. The structure of groups of prime power order. Oxford University Press, Oxford, 2002. xii+334 pp. ISBN: 0-19-853548-1 MR1918951
• Holt, Derek F.; Eick, Bettina; O'Brien, Eamonn A. “Handbook of computational group theory.” Chapman & Hall/CRC, Boca Raton, FL, 2005. xvi+514 pp. ISBN: 1-58488-372-3 MR2129747 DOI:10.1201/9781420035216
Articles
- |
proofpile-shard-0030-33 | {
"provenance": "003.jsonl.gz:34"
} | ## anonymous 3 years ago $\int _a^b \frac{dx}{y}$,where $y^2=ax^2+bx+c$
1. experimentX
reduce y in to this from (px + q)^2 + r
2. anonymous
where $y^2=ax^2+bx+c$
3. anonymous
sry i meant y^2
4. experimentX
do the same .. it wouldn't make any difference
5. anonymous
$1/a(x+b/2a)^2-b^2/4a+c/a$
6. anonymous
$1/a(x+b/2a)^2-b^2/4a+c/a$
7. anonymous
i tried a Euclidean sub $t=y+x\sqrt a$
8. experimentX
yeah .. then substitute u = Sqrt(a)(x + b/(2a)); it should be of the form [drawing omitted]
9. anonymous
yeah to the first post or euclidean?
10. experimentX
??
11. anonymous
do you agree with the sub of$t=y+x\sqrt{a}$
12. experimentX
where did you get that y ... there is no such y.
13. experimentX
check your equation again ... don't put that y there ... there is no y involved. I think you are confused.
14. anonymous
i have to go now but i 'll try it |
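For what it's worth (my addition, not part of the thread): after completing the square, the standard result for $a>0$ is $\int \frac{dx}{\sqrt{ax^2+bx+c}} = \frac{1}{\sqrt a}\ln\left|2ax+b+2\sqrt a\sqrt{ax^2+bx+c}\right| + C$, which is easy to spot-check numerically:

```python
import numpy as np

a, b, c = 2.0, 3.0, 5.0                      # arbitrary test coefficients with a > 0
F = lambda x: np.log(np.abs(2*a*x + b + 2*np.sqrt(a)*np.sqrt(a*x**2 + b*x + c))) / np.sqrt(a)
f = lambda x: 1.0 / np.sqrt(a*x**2 + b*x + c)

x, h = 1.3, 1e-6
print((F(x + h) - F(x - h)) / (2*h), f(x))   # central difference of F matches the integrand f
```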
proofpile-shard-0030-34 | {
"provenance": "003.jsonl.gz:35"
} | # Tag Info
29
It is an access hatch used during construction and maintenance. Credit: NASA-KSC Credit: NASA This part got at least some media coverage during the scrubbing of STS-121, when a Engine Cutoff (ECO) sensor, a fuel gauge, mounted behind that cover, inside the Liquid Hydrogen (LH2) tank, malfunctioned, causing that launch to be delayed, while the sensors were ...
27
I did a crude spreadsheet sim using the Rogers Commission report to get throttle times, to wit: Throttle down to 94% at 24 seconds Throttle down to 65% at 42 seconds Throttle up to 104% at 65 seconds I neglected startup propellant consumption and assumed step function throttling. I took liftoff O2 load to be 1,387,457 lb and H2 load to be 234,265 lb. I ...
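A rough reconstruction of that kind of spreadsheet calculation (my addition; the 104% liftoff throttle and the full-power consumption rate back-solved from the stated loads over a nominal 510 s burn are assumptions, so treat the numbers as illustrative only):

```python
# Step-function throttle profile from the answer above: (start_time_s, throttle)
profile = [(0, 1.04), (24, 0.94), (42, 0.65), (65, 1.04)]   # assume 104% at liftoff
meco = 510.0                       # nominal main engine cutoff time, s

total_load = 1_387_457 + 234_265   # lb of LO2 + LH2 at liftoff

def burned(rate100, t_end):
    # integrate consumption at rate100 lb/s (100% throttle) over the step profile up to t_end
    used, segs = 0.0, profile + [(meco, None)]
    for (t0, thr), (t1, _) in zip(segs, segs[1:]):
        if t0 >= t_end:
            break
        used += rate100 * thr * (min(t1, t_end) - t0)
    return used

rate100 = total_load / burned(1.0, meco)        # crude normalization: tank empties at MECO
remaining = 1 - burned(rate100, 73.0) / total_load
print(f"propellant remaining at 73 s: {remaining:.0%}")   # roughly 85-90%
```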
25
That's the intertank - the cylinder that connected the bottom of the LO2 tank to the top of the LH2 tank. It didn't contain propellant, but did contain the forward interface with the Solid Rocket Boosters, and was built for lightness and strength, with skin-stringer construction. The ribs you see were the stringers. The intertank is a steel / aluminum ...
24
The breakup of Challenger occurred about 73 seconds into flight. Main engine cutoff normally occurs about 510 seconds into flight, implying that about 86% of the fuel would be remaining. (Many sources give 480 seconds, but I suspect that's a simple division of the tankage mass by the full-throttle consumption rate; looking at actual mission reports supports ...
22
The shuttle external tank held the propellants for the shuttle main engines. It was filled from spherical tanks positioned at the perimeter of the launch pad. Insulated lines ran from the spheres, through the Mobile Launcher, and into the Orbiter through two tombstone-shaped Tail Service Masts. Then through the Orbiter Main Propulsion System plumbing into ...
21
No, the previously used External Tanks (ETs) disintegrated in the atmosphere before they fell into the sea. Notably, Buzz Aldrin and others proposed different ideas for reuse of the tank in orbit, and allegedly NASA said that they would be willing to take external tanks to orbit if a private company would use them. No private effort ever stepped up to the ...
20
This part of the External Tank is called the "LH2 Tank aft dome". There are really two large circular penetrations on it. They are the ones offset from the center of the tank. One is the access hatch/manhole. (this description is from the LO2 tank part of the linked document. Further down it says the LH2 "manhole fitting was similar to those on both the ...
17
Based on this description of the Space Shuttle flight profile, no external tank would ever have completed so much as a single orbit. An external tank would achieve essentially the same orbital apogee as the orbiter itself, but that is all. The shuttle fired its OMS engines to achieve an actual orbit AFTER tank separation. This means that the tank remained on ...
13
I don't have a great reference for this, but it was to reduce cost on the throw-away External Tank. By using the same interface into the Orbiter used to supply propellants to the main engines, the cost and complexity of adding a dedicated loading interface to the tank was avoided. It was not a tremendous complexity hit to the Orbiter Main Propulsion system ...
13
No reuse, but... NASA did not at any point actually reuse an external tank in any way. However... They made plans to allow reuse in orbit. NASA did have tentative plans for utilization of the tanks in orbit. These plans were scrapped, however. The primary factors being (1) decreased payload capacity to stable orbit †, (2) risk of insulating foam falling ...
12
tl;dr - the parts at the rear of ET-94 where the foam was removed were painted orange for display. The foam was not dyed but started out a light cream color. It slowly turned orange when exposed to light. Here is a picture of foam that was trimmed off during the stringer crack problem on STS-133. You can see the internal foam is lighter, and the metal is ...
10
This was when planning safe disposal of the External Tank on certain types of launch abort. Source: Space Shuttle Abort Evolution page 9-10 A major design activity was conducted preparing for Shuttle launches on the West coast from Vandenberg AEB. Though not flown due to new program directions following the Challenger accident, a lot of design work and ...
8
As a design decision, if you throw away the External Tank and it has engines attached, you are throwing away the engines. Since the Orbiter was returning for sure anyway, the decision was to leave the engines on the orbiter. Once the ET is done its job, the engines are not needed to make orbit. The OMS pods provide enough punch for the needed orbital ...
8
Setting aside the political/managerial issues this is about mass. The final version of the tank had a "dry" (empty tanks) mass of about 26.5 tons and a fuelled mass of about 760 tons. The surface area is a roughly 2600 $m^2$ and the total mass of the thermal protection system is just over 2 tons (all masses and dimensions from wikipedia). So any replacement ...
7
The original External Tank nose cone design was indeed blunt - almost like a fireplug, as seen in this 1975 concept art. [Image source - lost in the mists of time to me, but NASA somewhere] However, wind tunnel testing at the Arnold Engineering Development Center revealed that this configuration caused unsteady aerodynamic buffeting at some conditions. A ...
7
A few facts: SRB Burn Time is 127 seconds Start of Challenger Incident- 64 s Vehicle breakup- 72 s Nominal time to orbit- 510 s. So the SRBs were about half-way done with their burn time before the vehicle started to break up. The Space Shuttle Main Engine actually produced less than 1 g thrust until about the time of SRB separation. If the SRBs could have ...
7
A big issue with boosting the tank to orbit, would be the foam insulation. It was believed it would come off in chunks, like popcorn, causing immense amounts of orbital debris, potentially in the orbit you wished to store it at. Which could be really bad news.
6
Reusability. The whole idea of the shuttle was to discard all the parts that are simple, cheap and easy to replace and recover everything expensive, complex and hard to replace. Of course the reality, involving meddling by parties other than NASA, never mind failures in the process the shuttle was designed (not so much the design itself as the process of ...
5
Safety distances are decided by modelling the worst-case scenario (an explosion of the rocket right on the launch pad). An explosion results in an overpressure which drops off as distance increases, the safe distance is one where the overpressure is limited to a survivable level. The same goes for structures: you can calculate how much overpressure a ...
5
There were three attach points. The forward bipod that you show in your answer, and two aft attach points. At each attach point a large explosive bolt held the tank and Orbiter together. Large umbilical door openings in the aft of the Orbiter let the aft bolts pass through and also had all the fluid and electrical connections. After separation tile-covered ...
5
There are generally speaking 2 conops to get into orbit with the Space Shuttle. The first one, and most commonly used, requires two thrusts post Main Engine CutOff (MECO) to achieve orbit. This one drops the ET in the Indian Ocean. The "Direct Insert" requires only one OMS maneuver, and the tank landed in the Pacific Ocean. So the External tank might ...
4
What the equations I used completely ignore is initial thrust to gross launch mass which'd surely affect gravity drag? ... What is the assumption behind the 9.7 km/s delta-v on the Wikipedia page as to gravity drag fraction and initial launch acceleration? 9.7km/s is towards the high end of delta-v to orbit requirements. It varies with both the aerodynamics ...
3
Launch vehicles don't stage to get the earlier stages out of the way, they stage to get rid of excess mass so they can actually reach orbit with a useful payload. The Shuttle ET was a bit of an exception due to various compromises in its design, and a poor design in that it hauled 27 metric tons of mass up to just short of a circular orbit that could ...
3
Your question is based on a misunderstanding- the External Tank propellant tanks did in fact have relief valves. The 02 valve relieved at 24 psid and the H2 valve relieved at 36 psid. From the 1982 Press Manual, pages 92-95 GremlinWrangler's comment about the inadvisability of venting hydrogen is well founded - see Flight Rule A5-154 whose rationale ...
2
I believe you are right, it is the External Tank. Why does it have problems? The external tanks were deposited in either the Indian or Pacific Oceans, clear across the world. So, what would that do for a low southward inclination launch from Vandenburg? It would cut across South America, then over Africa, the Middle East, and Russia, with only a brief stay ...
|
proofpile-shard-0030-35 | {
"provenance": "003.jsonl.gz:36"
} |
Two identical stars of mass $M$ and radius $R$ are separated by a distance $d$ ($d>>R$). The two stars are in circular orbit around their combined center of mass. They are observed to be moving at a speed ${v}_{1}$.
Two other identical stars each with a mass of $2M$ (but with the same radius $R$) are found to be in the same configuration; specifically separated by a distance $d$ and orbiting their combined center of mass. They are moving at a speed ${v}_{2}$.
What is the ratio of $\cfrac {{v}_{2}}{{v}_{1}}$?
A
$\cfrac{1}{4}$
B
$1$
C
$\sqrt {2}$
D
$2$
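A quick sketch of the reasoning (my addition, assuming Newtonian gravity and circular orbits about the common centre of mass): each star of mass $m$ moves on a circle of radius $d/2$, and gravity supplies the centripetal force, so

$$\frac{Gm^2}{d^2}=\frac{mv^2}{d/2}\quad\Longrightarrow\quad v=\sqrt{\frac{Gm}{2d}},$$

hence at fixed separation $v \propto \sqrt{m}$ and $v_2/v_1=\sqrt{2M/M}=\sqrt{2}$ (choice C).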
|
proofpile-shard-0030-36 | {
"provenance": "003.jsonl.gz:37"
} | Using normal approximation to estimate the probability of winning more than you lose in 100 plays
You play a game where if the two cards you pick from a deck of cards (without replacement) have consecutive rank (i.e. 2 and 3, A and K, A and 2, etc.), you win. The game pays 81 USD to 5 USD, meaning if you win, you have a net gain of 81 USD and if you lose, you lose 5 USD. You have two scenarios: play 10 times or play 100 times.
I calculated the chance of winning to be $\binom {2}{1} \frac {52}{52} *\frac {4}{51} =\frac {8}{51}$
I then calculated the expected number of wins $E(X)=100 \cdot \frac{8}{51} \approx 15.7$ and $SE(X)= \sqrt {\frac{8}{51}\cdot\frac{43}{51}} \cdot\sqrt{100}\approx 3.64$
The solution given for the P(win more than lose) is 0.997
How can the probability of winning more than losing be the case when we only win 15.6 games out of 100 plays?
What am I missing here?
Thank you!
• In the $15.6$ games you win you win $\$81\cdot15.6=\$1263.6$ and in the $84.4$ games you lose, you only lose $\$5\cdot84.4=\$422.$ Where can I go to play this game? – saulspatz Jul 9 '18 at 4:56
• lol only in the land of arbitrary math problems. How did they arrive the 0.997 approximation though? If we use the formula $\frac{Observed \, Value - 15.6}{3.64}$ what would we plug in for the Observed value though? – pino231 Jul 9 '18 at 6:36 |
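Working out the 0.997 figure (my addition): a win nets 81 USD and a loss costs 5 USD, so after 100 plays the net result is $81W - 5(100 - W) = 86W - 500$ dollars, which is positive exactly when $W \ge 6$ wins. With $\mu \approx 15.7$, $SE \approx 3.64$ and a continuity correction, the "observed value" to plug in is the boundary 5.5:

$$P(W \ge 6) \approx P\!\left(Z \ge \frac{5.5 - 15.7}{3.64}\right) = P(Z \ge -2.8) \approx 0.997.$$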
proofpile-shard-0030-37 | {
"provenance": "003.jsonl.gz:38"
} | # Atomes froids dans des réseaux optiques - Quelques facettes surprenantes d'un système modèle
Abstract : This thesis is devoted to the experimental study of atoms trapped and cooled in several types of optical structures. In order to characterise these media, we used different techniques such as time-of-flight techniques, direct imaging of the atomic
cloud and pump-probe spectroscopy. We thus obtained information about kinetic temperature, spatial diffusion of atoms and atomic motion in the optical potential wells.
We first studied the dynamics of cesium atoms in three-dimensional bright optical lattices when a magnetic field is applied. In particular, we showed that optical lattices operating in the jumping regime do provide good trapping and cooling efficiencies and that a motional narrowing effect gives rise to narrow
vibrational sidebands on pump-probe transmission spectra. Still with cesium atoms, we created and characterised a three-dimensional bright optical lattice obtained with only two laser beams through the Talbot effect, and also a random medium
generated by a speckle field.
We finally studied a "Brownian motor" for $^{87}$Rb atoms in a grey asymmetric potential. The results of the experimental study are in good qualitative agreement with semi-classical Monte-Carlo numerical simulations.
Keywords :
Document type :
Theses
https://tel.archives-ouvertes.fr/tel-00006734
Contributor : Cécile Robilliard <>
Submitted on : Tuesday, August 24, 2004 - 12:26:07 PM
Last modification on : Thursday, December 10, 2020 - 12:37:00 PM
Long-term archiving on: : Friday, April 2, 2010 - 8:27:16 PM
### Identifiers
• HAL Id : tel-00006734, version 1
### Citation
Cécile Mennerat-Robilliard. Atomes froids dans des réseaux optiques - Quelques facettes surprenantes d'un système modèle. Physique Atomique [physics.atom-ph]. Université Pierre et Marie Curie - Paris VI, 1999. Français. ⟨tel-00006734⟩
|
proofpile-shard-0030-38 | {
"provenance": "003.jsonl.gz:39"
} |
### #ActualBaneTrapper
Posted 03 October 2012 - 08:24 AM
I once had the exact same problem. The chances are that you are linking against the release library when you are actually debugging.
http://en.sfml-dev.o...g60228#msg60228
I believe this would do it, but am going to work atm, Thx for help.
EDIT:: SFML WORKS!
Name is fixed, my error was:
while at Project Properties, Linker, Input, Additional Dependencies, I did it like this
sfml-window.lib
while debug needs to be
sfml-window-d.lib
Thanks for the help!
|
proofpile-shard-0030-39 | {
"provenance": "003.jsonl.gz:40"
} | Openlumify
I worked for a “data discovery and analytics platform” company1 for a couple of years, and got to see first-hand the power of graph-based datastores for analysis, hypothesis-testing and pattern discovery. It truly was an example of intelligence amplification and allowed the analysts I worked with to work more effectively than ever.
Since that day I've had a desire to map things out, to build a kind of mega-mindmap2 to really understand some of the more complex and complicated situations — ancient and modern history, the middle east, US politics, the powerful people behind every conspiracy theory imaginable…
So I was quite happy to see that someone somewhere had started an opensource graph-based analysis suite of a similar sort. Unfortunately by the time I found it, it had become abandonware, moving from open- to (assumed) closed-source by the IP owner, who was then bought by another company, then again, renamed, ad absurdum infinitum. I grabbed the last opensource I could find and have started on the long journey to revive it.
So far I've updated the build (Maven! Eek) to work with Java 9+ and refactored everything to the org.openlumify package. It builds, it runs … but I can't log in.
If only there were more hours in the night or day!
Some plans
In no particular order …
• get it running
• build containers
• make it work in Kubernetes
• make sure it has a timeline and a map (at least!)
• pick a decent backend datastore and implement it
• setup a project website
• host it somewhere online e.g. for investigative journos
• karma!
1. I don't want to say their name, even after all these years, since I once tweeted the name — quite innocuously — and it eventually came back through the ‘peer network’ (no hierarchies, man). Probably not a huge deal, but I was super creeped out that they were actively watching for public mentions and performing sentiment analysis and someone with fewer than a hundred followers would actually catch any attention at all. ↩︎
2. Hey, I've loved mindmaps since the day I discovered them. Feels like I've been drawing and redrawing the same few maps — with alterations — for most of my life, anytime I was feeling lost or overwhelmed. Most recently I've recreated them in Simplemind, but there are notebooks upon notebooks exploring my plans, desires, perceived strengths and weaknesses. ↩︎ |
proofpile-shard-0030-40 | {
"provenance": "003.jsonl.gz:41"
} | Apunts de Matemàtiques per a l'accés a la UIB per a majors de 25 anys
``````
%% qrcode.ins
%% Copyright 2014 by Anders O.F. Hendrickson
%%
%% This work may be distributed and/or modified under the
%% conditions of the LaTeX Project Public License, either version 1.3
%% of this license or (at your option) any later version.
%% The latest version of this license is in
%%   http://www.latex-project.org/lppl.txt
%% and version 1.3 or later is part of all distributions of LaTeX
%% version 2005/12/01 or later.
%%
%% This work has the LPPL maintenance status `maintained'.
%%
%% The Current Maintainer of this work is Anders O.F. Hendrickson.
%%
%% This work consists of the files qrcode.dtx and qrcode.ins
%% and the derived file qrcode.sty.

\input docstrip.tex
\keepsilent

\usedir{tex/latex/qrcode}

\preamble

This is a generated file.

Copyright (C) 2014 by Anders Hendrickson

This work may be distributed and/or modified under the
conditions of the LaTeX Project Public License, either version 1.3
of this license or (at your option) any later version.
The latest version of this license is in
  http://www.latex-project.org/lppl.txt
and version 1.3 or later is part of all distributions of LaTeX
version 2005/12/01 or later.

\endpreamble

\generate{\file{qrcode.sty}{\from{qrcode.dtx}{package}}}

\obeyspaces
\Msg{*************************************************************}
\Msg{*                                                           *}
\Msg{* To finish the installation you have to move the following *}
\Msg{* file into a directory searched by TeX:                    *}
\Msg{*                                                           *}
\Msg{*     qrcode.sty                                            *}
\Msg{*                                                           *}
\Msg{* To produce the documentation run the file qrcode.dtx      *}
\Msg{* through LaTeX.                                            *}
\Msg{*                                                           *}
\Msg{* Happy TeXing!                                             *}
\Msg{*                                                           *}
\Msg{*************************************************************}

\endbatchfile
`````` |
proofpile-shard-0030-41 | {
"provenance": "003.jsonl.gz:42"
} | # Find an equation of the tangent line to the curve at the given point. y =...
###### Question:
Find an equation of the tangent line to the curve at the given point: $y = x^4 + 5x^2 - x$, at the point $(1, 5)$.
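For reference, a sketch of the standard solution (added here; it is not part of the original posting):

$$y' = 4x^3 + 10x - 1, \qquad y'(1) = 4 + 10 - 1 = 13,$$

so the tangent line at $(1, 5)$ is $y - 5 = 13(x - 1)$, i.e. $y = 13x - 8$.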
|
proofpile-shard-0030-42 | {
"provenance": "003.jsonl.gz:43"
} | How to Detect if Motor is not Working?
sorry for my english, I am from Indonesia.
I am trying to make a (car) relay active (switched on) while the motor/fan is working normally, so that when the fan fails (not working or broken) the relay automatically switches off.
But it is not working.
Edited Transistor NPN 2N3055
Motor 12 Vdc / 3 A
Car Relay Hela 12V
• Please explain how you believe your current circuit should work. – Ignacio Vazquez-Abrams Jan 7 '16 at 9:23
• What fault conditions on the motor are you trying to detect? – Icy Jan 7 '16 at 9:25
• As long as the motor works fine, the relay should still on. – Herbrata Moeljo Jan 7 '16 at 9:29
• 'not working' could be seized - high current; open circuit - zero current; disconnected load - low current + possibly lots of other conditions. – Icy Jan 7 '16 at 9:33
• so you require base current of 3.75mA (150/40) - and you have sized 270R resistor to give 10mA at 3A motor current. If this is rated current typical operating current will be much lower perhaps less than 1A - try using 1 or 2 Ohm resistor instead of 270R - the BD139 can take 1.5A of base current - and this will also give a reduced voltage drop on the motor at full torque. – Icy Jan 7 '16 at 10:01
Start by assuming the motor is drawing 3 A and the relay coil is drawing 150 mA, so the transistor collector current is also 150 mA. Because the transistor should be acting as a switch, its base current should be in the range of 1/10 to 1/20 of the collector current, or about 10 mA. Then the voltage across RSENSE should be
$$V_{RSENSE} = 0.7 + I_b \times R_{BASE}.$$
For a 47 ohm RBASE, this works out to
$$V_{RSENSE} = 0.7 + 0.01 \times 47 = 0.7 + 0.47 = 1.17\text{ volts},$$
and the power dissipated in RSENSE is
$$P = I \times V_{RSENSE} = 3 \times 1.17 \approx 3.5\text{ watts},$$
so you'll need RSENSE to be rated for at least 3.5 watts, and a 5-watt resistor would be a reasonable choice. |
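For completeness, a quick numeric check of the figures above (my own sketch; the resistance of RSENSE itself is not stated in the answer, it just follows from Ohm's law):

```python
# Re-deriving the numbers in the answer above.
i_coil = 0.150                     # relay coil / collector current, A
i_base = i_coil / 15               # aim for ~1/10..1/20 of the collector current -> 10 mA
r_base = 47                        # ohms
v_sense = 0.7 + i_base * r_base    # ~1.17 V across RSENSE
i_motor = 3.0                      # motor current, A
p_sense = i_motor * v_sense        # ~3.5 W dissipated in RSENSE
r_sense = v_sense / i_motor        # ~0.39 ohm, the resistance RSENSE would need to be
print(v_sense, p_sense, r_sense)
```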
proofpile-shard-0030-43 | {
"provenance": "003.jsonl.gz:44"
} | # Find out the rules from examples, then solve it
Fill the blank rectangles with numbers 6 to 16
Find out the rules from examples
Hint:
No common divisors except 1
• In examples 2 and 3, do you realize that 9 is not a prime number? – Cordfield Aug 29 '17 at 9:15
• @Cordfield : Yes I realize it. – Jamal Senjaya Aug 29 '17 at 9:31
• Your “Hint” is pretty heavy; you might as well have come out and said explicitly what rule we were missing. You might want to write hints that are lightweight; pointing us in a direction without shoving the answer into our face. – Peregrine Rook Aug 30 '17 at 4:52
First, the board looks like this.
AA 5 BB CC DD
EE FF GG
HH II JJ 17 KK
And I found three rules.
1. If there is a dot between two cells, two numbers should be consecutive. If there is no dot, two numbers should not be consecutive.
2. Greatest common divisor of two adjacent cells should be 1. In other words, two numbers should be coprime.
3. By rule 2, difference of two adjacent cells should be an odd number.
By rule 1,
BB, II, and JJ are obvious.
e1 5 6 o1 e2
o2 o3 o4
e3 15 16 17 e4
Now we have numbers from 7 to 14.
By rule 3,
e1 ~ e4 should be an even number, and o1 ~ o4 should be an odd number.
And the candidates are:
e1 = 8, 10, 12, 14
e2 = 8, 10, 12, 14
e3 = 8, 10, 12, 14
e4 = 8, 10, 12, 14
o1 = 7, 9, 11, 13
o2 = 7, 9, 11, 13
o3 = 7, 9, 11, 13
o4 = 7, 9, 11, 13
By rule 1, some numbers can be removed from the candidates.
e1 = 8, 10, 12, 14
e2 = 8, 10, 12, 14
e3 = 8, 10, 12
e4 = 8, 10, 12, 14
o1 = 9, 11, 13
o2 = 7, 9, 11, 13
o3 = 9, 11, 13
o4 = 7, 9, 11, 13
By rule 2, more numbers can be removed.
e1 cannot be 10, e3 cannot be 10 and 12, o1 cannot be 9, o3 cannot be 9
e1 = 8, 12, 14
e2 = 8, 10, 12, 14
e3 = 8
e4 = 8, 10, 12, 14
o1 = 11, 13
o2 = 7, 9, 11, 13
o3 = 11, 13
o4 = 7, 9, 11, 13
Since e3 has one candidate,
we can fill in e3. Then o2 should be 7 or 9.
e1 5 6 o1 e2
o2 o3 o4
8 15 16 17 e4
e1 = 12, 14
e2 = 10, 12, 14
e4 = 10, 12, 14
o1 = 11, 13
o2 = 7, 9
o3 = 11, 13
o4 = 7, 9, 11, 13
Let's assume:
o2 is 9.
e1 5 6 o1 e2
9 o3 o4
8 15 16 17 e4
Since 7 can only be filled in o4, it should be 7, and e4 should be 8, which is impossible.
Therefore,
o2 should be 7.
e1 5 6 o1 e2
7 o3 o4
8 15 16 17 e4
e1 = 12, 14
e2 = 10, 12, 14
e4 = 10, 12, 14
o1 = 11, 13
o3 = 11, 13
o4 = 9, 11, 13
In order,
e1 should be 12, o4 should be 9, e4 should be 10, e2 should be 14, o1 should be 11, and o3 should be 13.
12 5 6 11 14
7 13 9
8 15 16 17 10
And this is my final answer.
I think my explanation is messy and the formatting is horrible. Feel free to point out errors or improve the formatting of my answer.
• Rule 3 implies rule 2 – boboquack Aug 29 '17 at 9:24
• @boboquack Oh, you are right. For God's sake. – Otami Arimura Aug 29 '17 at 9:27
• @boboquack no, there are lots of coprime pairs with an even difference. – Kruga Aug 29 '17 at 14:42
• @Kruga however no even numbers can be next to each other, meaning that they have to be at the places they are, meaning the odd numbers must be in the remaining places and as a result all the differences are odd – boboquack Aug 29 '17 at 21:30
• @boboquack ah, I see what you mean. But it still depends on the shape of the grid and the preexisting numbers. – Kruga Aug 30 '17 at 7:25
the rule is simple:
a dot between squares means those numbers should be consecutive, and if there is no dot, they cannot be consecutive and the difference has to be an odd number.
and the numbers that you are supposed to fill in are given as
$5$ to $17$
so firstly you can easily see that
5 to 6 and 17 to 16 then 16 to 15.
the rest becomes:
14 5 6 9 12
11 x 13 x 7
10 15 16 17 8
• 14 breaks your configuration – boboquack Aug 28 '17 at 9:12
• @boboquack forgot 14 :)) it is ojay now. – Oray Aug 28 '17 at 9:14
• Nop wrong answer. – Jamal Senjaya Aug 28 '17 at 9:15
• @JamalSenjaya it is valid for all given examples. :/ – Oray Aug 28 '17 at 9:16
• @JamalSenjaya check it again, change the rule after examining the examples a little deeper. – Oray Aug 28 '17 at 9:22
I found the same rules that Oray found, and I found the same solution:
$\array{14&~\mathbf5&~\mathit6&~9&12\\11&\textbf X&13&\textbf X&~7\\10&\mathit{15}&\mathit{16}&\mathbf{17}&~8}$
where $\mathbf{5}$ and $\mathbf{17}$ (bold) are the given numbers, and $\mathit{6}$, $\mathit{15}$, and $\mathit{16}$ (italic) are the trivially forced ones.
Unfortunately, I found seven other solutions following the same rules:
$\array{10&~\mathbf5&~\mathit6&~9&14\\~7&\textbf X&13&\textbf X&11\\~8&\mathit{15}&\mathit{16}&\mathbf{17}&12}$ $\quad\qquad\array{12&~\mathbf5&~\mathit6&11&14\\~7&\textbf X&13&\textbf X&~9\\~8&\mathit{15}&\mathit{16}&\mathbf{17}&10}$ $\quad\qquad\array{12&~\mathbf5&~\mathit6&~9&14\\~7&\textbf X&13&\textbf X&11\\~8&\mathit{15}&\mathit{16}&\mathbf{17}&10}$
$\array{12&~\mathbf5&~\mathit6&11&14\\~9&\textbf X&13&\textbf X&~7\\10&\mathit{15}&\mathit{16}&\mathbf{17}&~8}$ $\quad\qquad\array{14&~\mathbf5&~\mathit6&13&10\\11&\textbf X&~9&\textbf X&~7\\12&\mathit{15}&\mathit{16}&\mathbf{17}&~8}$
$\array{10&~\mathbf5&~\mathit6&~9&14\\13&\textbf X&11&\textbf X&~7\\12&\mathit{15}&\mathit{16}&\mathbf{17}&~8}$ $~~~\text{and}\,~~\array{10&~\mathbf5&~\mathit6&11&14\\13&\textbf X&~9&\textbf X&~7\\12&\mathit{15}&\mathit{16}&\mathbf{17}&~8}$
A bit more discussion: we know from the problem statement that we are required to use the integers 6 through 16.
The fact that a dot implies that the adjoining numbers are consecutive (differ by 1) forces the 6, 15, and 16, as previously stated, leaving 7, 8, 9, 10, 11, 12, 13, and 14 to be filled in. The fact that all pairs of adjacent numbers differ by an odd number (> 1) gives us this template:
$$\array{E_1&~\mathbf5&~\mathit6&O_1&E_2\\O_2&\textbf X&O_3&\textbf X&O_4\\E_3&\mathit{15}&\mathit{16}&\mathbf{17}&E_4}$$ where the $E$s are even numbers and the $O$s are odd numbers.
$O_1$ and $O_3$ can’t be 7, because they are adjacent to 6 (without a dot, so not consecutive), so $O_2$ or $O_4$ must be 7. The number below the 7 must be 8, because $O_2$ and $E_3$ are separated by a dot, and so are $O_4$ and $E_4$, and the number below 7 can’t be 6, because we already know where that is. That narrows it down to these possibilities:
$\array{E_1&~\mathbf5&~\mathit6&O_1&E_2\\~7&\textbf X&O_3&\textbf X&O_4\\~8&\mathit{15}&\mathit{16}&\mathbf{17}&E_4}$ $\quad\text{or}\quad\array{E_1&~\mathbf5&~\mathit6&O_1&E_2\\O_2&\textbf X&O_3&\textbf X&~7\\E_3&\mathit{15}&\mathit{16}&\mathbf{17}&~8}$
For my next step, I tried to solve for the other bottom pair of edge numbers i.e., $O_2~\&~E_3$ or $O_4~\&~E_4$, whichever one wasn’t 7 & 8. I got the following pairs: {9,10}, {11,10}, {11,12}, {13,12}, and {13,14}. Each of those, except for {13,14}, led to at least one solution, as shown above.
So I guess there’s a pattern that we haven’t spotted. :-(
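If you want to double-check that count, here is a small brute-force sketch (my own; the adjacency and dot positions below are inferred from the answers above, so treat them as assumptions rather than a statement of the original picture):

```python
# Grid: row 1 = [E1, 5, 6, O1, E2], row 2 = [O2, X, O3, X, O4], row 3 = [E3, 15, 16, 17, E4],
# where 6, 15, 16 are already forced by the dots next to the given 5 and 17.
from itertools import permutations
from math import gcd

cells = ['E1', 'O1', 'E2', 'O2', 'O3', 'O4', 'E3', 'E4']          # the 8 remaining blanks
edges = [                                                          # (a, b, has_dot)
    ('E1', 5, False), (5, 6, True), (6, 'O1', False), ('O1', 'E2', False),   # row 1
    ('E3', 15, False), (15, 16, True), (16, 17, True), (17, 'E4', False),    # row 3
    ('E1', 'O2', False), ('O2', 'E3', True),                                 # column 1
    (6, 'O3', False), ('O3', 16, False),                                     # column 3
    ('E2', 'O4', False), ('O4', 'E4', True),                                 # column 5
]

def val(c, assignment):
    return assignment[c] if isinstance(c, str) else c

solutions = []
for perm in permutations(range(7, 15)):          # 7..14 fill the 8 blanks
    a = dict(zip(cells, perm))
    if all((abs(val(x, a) - val(y, a)) == 1) if dot
           else (abs(val(x, a) - val(y, a)) != 1 and gcd(val(x, a), val(y, a)) == 1)
           for x, y, dot in edges):
        solutions.append(a)

print(len(solutions))   # with these assumptions this should reproduce the eight grids above
```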
Rules
Rule 1 - Even Odd Even Odd and so on (matrix style)
Rule 2 - Dots mean the two numbers differ by one
Rule 3 - No dots, no common divisor except 1
Solution - Same as @Otami Arimura
12 5 6 11 14
7 x 13 x 9
8 15 16 17 10 |
proofpile-shard-0030-44 | {
"provenance": "003.jsonl.gz:45"
} | Chrysler will pay another $1 million to California as part of a parallel administrative settlement agreement with the California Air Resources Board (CARB), and will provide similar remedies for California-certified vehicles with the catalyst or OBD defects.

The lawsuit is the result of a joint EPA-CARB investigation of Chrysler’s 1996 through 2001 Cherokees, Grand Cherokees, Wranglers, Dakota trucks, and Ram vans, wagons, and pickup trucks. The investigation disclosed that a significant percentage of the vehicles experience excessive deterioration or failure of the catalytic converter.

The deterioration of the catalytic converters in the named models results from a design defect in the original converter installed on each of the vehicles. As a result of this design defect—in some of the identified Chrysler vehicles—the internal components of the converter move around excessively, causing the device’s ceramic core to break up.

### Comments

My 1998 Jeep Grand Cherokee Laredo has a strange rattle in the catalytic converter. My dealer says there is no recall action. How can I find out if my car is part of the proposed Chrysler action? Thanks. Norita J. Halvorsen.

My catalytic converter is really rattling, making a terrible noise. I was at the Daimler-Chrysler office in Hendersonville NC today and asked them about the warranty on my 1997 Jeep Grand Cherokee and they told me they hadn't received anything from the Company about the warranty on this vehicle. When will you start sending out these warranty notices. I want to get mine fixed if it is under warranty. Thanks, Doris Grant

when my cat. converter went bad at a little over 78,000 miles the dealer fixed it.the dealer said that an 02sensor had to be removed to get to the cat.converter.it cost me 200.00 bucks.i think i was taken for a ride.also,my transmission blew up at the same time the converter did.my mechanic said the converter probably killed my transmission.should i seek compensation from the dealer for charging me for the sensor to fix the converter and what should i do about the transmission cost?

I took my Grand Cherokee in for the recall. The converter was already rattling so they replaced the sensor and had to wait 2 weeks for them to get the converter in. Returned to have converter installed when i returned to pick up Jeep was told they had to carry the Jeep to a different place to have the converter cut off and weld the new one back one. The one they replaced it with is round doesn't fit in the same spot so they rerouted the pipes. After repairs were done engine is not running good real rough idel then noticed transmission fluid under the jeep when checked the transfer case is leaking. I checked online didn't find a converter that is round they all are shaped like a muffler with a flat top & bottom. Never received anything from anyone wanting to know how my service went. Is it possible this not a converter they put on my Jeep?

My catalytic converter on my 1997 Grand Cherokee Laredo has been causing the "Check Engine" to light up. I cannot get the local Jeep dealership to check the problem; moreover, they've told me I do not have any claim, but they won't even check the vehicle. What do I do from here?

I now have this issue: Bad catalytic converter which lead to the manifold cracking, found by dealership ($1800.00+ to repair). Took it to Dan Fast Muffler for an estimate. They have three more sitting waiting to be fixed with the same issues! Where is the JD and EPA with getting this settlement and recall issued?
If I get this repaired will I be able to get a refund if the recall does finially appear?
My cat was doing that for a while and I just removed it because I did not have the money to fix it. Will I be able to take my jeep in for the recall anyway?
My '96 Grand CherokeeReceived the letter from mfgr to take car in for cat recall. Made appointment 2 weeks in advance-left car all day-dealer says part is on nation-wide back order & may take weeks or months to get new part. Meanwhile car started stalling at idle & running rough but showing no codes(classic symptoms of clogged up cat) I removed cat & sure enough, all the honeycomb material was dislodged and loose inside. I removed the material and re-installed cat-all is fine now, EXCEPT car is due for bi-annual Emission test soon.. Obviously it wont pass with cat empty,so I'm in Government Limbo 'til dealer gets new part ...
My '96 had same problems. Cat. started rattling so I had it removed. Due to "light wallet" at the time I had a straight pipe installed and ever since engine stalls and and back fires. Sometimes good for3-5 days, other times can't leave my driveway for 15 mins.! Today my muffler blew open from a huge backfire. Getting new one put on tomorrow and wondering now if I am part of this recall and if these "mysto" problems are from the cat. converter. As weel, will i blow another muffler without the converter installed. Mechanics I have spoken to say the pipe has nothing to do with any of this but I'm thinking otherwise.
Who do I contact for warranty?
The comments to this entry are closed. |
proofpile-shard-0030-45 | {
"provenance": "003.jsonl.gz:46"
} | Björn smedman in Probability 30 minutes
# Variational Coin Toss
Variational inference is all the rage these days, with new interesting papers coming out almost daily. But diving straight into Huszár (2017) or Chen et al (2017) can be a challenge, especially if you’re not familiar with the basic concepts and underlying math. Since it’s often easier to approach a new method by first applying it to a known problem I thought I’d walk you through variational inference applied to the classic “unfair coin” problem.
If you like thinking in code you can follow along in this Jupyter notebook.
## The Usual Bayesian Treatment of an Unfair Coin
Okay, so we got the usual problem statement: You’ve found a coin in a magician’s pocket. Since you found it in a magician’s pocket you’re not sure it’s a “fair coin”, i.e. one that has a 50% chance of landing heads up when you toss it; it could be a “magic” coin, one that comes up heads with probability $z$ instead.
So you toss the coin a few times and it comes up tails, heads, tails, tails and tails. What does that tell you about $z$? Let’s bring out the (hopefully) familiar Bayesian toolbox and find out.
### Choosing a Prior
First we need to place a prior probability distribution over $z$. Since we found the coin in a magician’s pocket we think that $z$ may very well be quite far from $0.5$. At the same time; there’s nothing obviously strange about the coin, so it seems improbable that $z$ would be very high or very low. We therefore choose $p(z) = \text{Beta}(\alpha = 3, \beta = 3)$ as our prior.
Figure 1. Prior probability distribution over $z$.
### Classic Bayesian Inference
Now that we have a prior over $z$ and we’ve tossed the coin a few times we’re ready to infer the posterior. Call the outcome of the coin tosses $\vect{x}$. Then according to Bayes theorem the posterior $p(z \given \vect{x})$ is
$$\begin{equation}p(z \given \vect{x}) = \frac{p(\vect{x} \given z) p(z)}{p(\vect{x})},\end{equation}$$
where $p(\vect{x} \given z)$ is usually referred to as the “likelihood of $z$” and $p(\vect{x})$ is called the model “evidence”. $p(z)$ is of course just the prior.
With our choice of prior over $z$, and conditional probability of $\vect{x}$ given $z$, we could quite easily derive a closed form expression for the posterior. But since the subject here is variational inference we’ll instead see if we can approximate the posterior using variational methods.
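Before we do, it is worth noting what that closed form is, since it gives us something to compare the variational approximations against. A minimal sketch (mine, not part of the post's notebook): the beta prior is conjugate to the coin-toss likelihood, so the exact posterior after $n_h$ heads and $n_t$ tails is simply $\text{Beta}(3 + n_h, 3 + n_t)$.

```python
# Exact conjugate posterior for the five tosses: tails, heads, tails, tails, tails.
from scipy.stats import beta

x = [0, 1, 0, 0, 0]
n_h, n_t = sum(x), len(x) - sum(x)

posterior = beta(3 + n_h, 3 + n_t)     # Beta(4, 7)
print(posterior.mean())                # ~0.364
```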
## Variational Approximation of the Posterior
The basic idea behind variational inference is to turn the inference problem into a kind of search problem: find the distribution $q^\ast(z)$ that is the closest approximation of $p(z \given \vect{x})$. To do that we of course need some sort of definition of “closeness”. The classic one is the Kullback-Leibler divergence:
$$\begin{equation}\KL{q(z)}{p(z \given \vect{x})} = \int q(z) \log \frac{q(z)}{p(z \given \vect{x})} dz\end{equation}$$
The Kullback-Leibler divergence is always positive or zero, and it's only zero if $q(z) = p(z \given \vect{x})$ almost everywhere.
Let’s see what happens if we replace $p(z \given \vect{x})$ in Eq. 2 with the expression derived in Eq. 1:
$$\begin{equation}\KL{q(z)}{p(z \given \vect{x})} = \int q(z) \log \frac{q(z) p(\vect{x})} {p(\vect{x} \given z) p(z)} dz\end{equation}$$
Since $\log ab = \log a + \log b$ and $\log \frac{1}{a} = -\log a$ we can rewrite that as
$$\begin{equation}\KL{q(z)}{p(z \given \vect{x})} = \int q(z) \log \frac{q(z)}{p(z)} dz + \\ \int q(z) \log p(\vect{x}) dz - \int q(z) \log p(\vect{x} \given z) dz.\end{equation}$$
The first term on the right hand side is (by definition) the Kullback-Leibler divergence between the prior $p(z)$ and the variational posterior $q(z)$. The second term is just $\log p(\vect{x})$, since $\log p(\vect{x})$ is independent of $z$ and can be brought out of the integral, and because the integral $\int q(z) dz$ is $1$ by definition (of probability density function). The third term can be interpreted as the expectation of $\log p(\vect{x} \given z)$ over $q(z)$. We then get
$$\begin{equation}\KL{q(z)}{p(z \given \vect{x})} = \KL{q(z)}{p(z)} + \\ \log p(\vect{x}) - \Expect{z \sim q(z)}{\log p(\vect{x} \given z)}.\end{equation}$$
Now all we have to do is vary $q(z)$ until $\KL{q(z)}{p(z \given \vect{x})}$ reaches its minimum, and we will have found our best approximation $q^\ast(z)$! Typically that’s done by choosing a parameterized family of probability distributions and then finding the optimal parameters with some sort of numerical optimization algorithm.
### Beta distribution as Variational Posterior
We start by assuming that $q(z)$ is a beta distribution, parameterized by $\alpha_q$ and $\beta_q$.
Figure 2. The beta distribution is a family of continuous probability distributions defined on the interval [0, 1], parametrized by two positive shape parameters $\alpha$ and $\beta$.
We can then derive expressions for each of the terms in Eq. 5. We start with the Kullback-Leibler divergence between the variational posterior $q(z)$ and the prior $p(z)$. Since both are beta distributions there is a closed form expression for their KL divergence
$$\begin{equation}\KL{q(z)}{p(z)} = \log \frac{B(3, 3)}{B(\alpha_q, \beta_q)} + (\alpha_q - 3) \psi(\alpha_q) + \\ (\beta_q - 3) \psi(\beta_q) + (3 - \alpha_q + 3 - \beta_q) \psi(\alpha_q + \beta_q),\end{equation}$$
where $B$ is the beta function and $\psi$ is the digamma function.
Then the expectation of $\log p(\vect{x} \given z)$: First, we formulate $p(\vect{x} \given z)$ assuming that $x_i$ is $1$ if the $i$th coin toss came up heads and $0$ if it came up tails as
$$\begin{equation}p(\vect{x} \given z) = \prod z^{x_i} (1 - z)^{1-x_i},\end{equation}$$
and the logarithm of that is then
$$\begin{equation}\log p(\vect{x} \given z) = \sum ( x_i \log z + (1-x_i) \log (1 - z) ).\end{equation}$$
Let’s call the number of heads in $\vect{x}$ $n_h = \sum x_i$ and the number of tails $n_t = \sum (1 - x_i)$. Now, since $\mathbb{E}$ is a linear operator we can write the third and final term of Eq. 5 as
$$\begin{equation}\Expect{z \sim q(z)}{\log p(\vect{x} \given z)} = n_h \Expect{z \sim q(z)}{\log z} + n_t \Expect{z \sim q(z)}{\log (1 - z)}.\end{equation}$$
Again, we’re lucky because there is a closed form expression for this expectation:
$$\begin{equation}\Expect{z \sim q(z)}{\log p(\vect{x} \given z)} = n_h (\psi(\alpha_q) - \psi(\alpha_q + \beta_q)) + \\ n_t (\psi(\beta_q) - \psi(\alpha_q + \beta_q))\end{equation}$$
Now we’re ready to put it all together as
$$\begin{equation}\KL{q(z)}{p(z \given \vect{x})} = \log \frac{B(3, 3)}{B(\alpha_q, \beta_q)} + (\alpha_q - 3 - n_h) \psi(\alpha_q) + \\ (\beta_q - 3 - n_t) \psi(\beta_q) + (3 - \alpha_q + 3 - \beta_q + n_h + n_t) \psi(\alpha_q + \beta_q) + \\ \log p(\vect{x}),\end{equation}$$
and go look for $\alpha^\ast_q$ and $\beta^\ast_q$ that minimize $\KL{q(z)}{p(z \given \vect{x})}$. In this very simple example there’s probably a closed form solution, but since we’re here to learn about the case when there’s not we’ll go right ahead and minimize the above expression numerically. The video below shows scipy.optimize.minimize() going to work on the problem. Note that $\log p(\vect{x})$ is independent of $\alpha_q$ and $\beta_q$ and can thus be left out of the optimization.
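In code, that search might look something like the following sketch (my own, not necessarily what the accompanying notebook does): implement Eq. 11 without the constant $\log p(\vect{x})$ term and hand it to scipy.

```python
# Minimise Eq. 11 (up to the constant log p(x)) over the variational parameters.
from scipy.optimize import minimize
from scipy.special import betaln, digamma

n_h, n_t = 1, 4          # one head, four tails
a0, b0 = 3, 3            # prior Beta(3, 3)

def kl_up_to_const(params):
    a, b = params
    return ((betaln(a0, b0) - betaln(a, b))
            + (a - a0 - n_h) * digamma(a)
            + (b - b0 - n_t) * digamma(b)
            + (a0 + b0 - a - b + n_h + n_t) * digamma(a + b))

res = minimize(kl_up_to_const, x0=[1.0, 1.0],
               bounds=[(0.01, None), (0.01, None)], method='L-BFGS-B')
print(res.x)             # close to (4, 7), i.e. the exact Beta(4, 7) posterior
```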
Figure 3. Process of finding our variational posterior $q^\ast(z)$ through numerical optimization of the closed form expression in Eq. 11.
### Numerical Approximation of Difficult Integrals
As you can see from Eq. 5 variational inference is about optimization over quite hairy integrals. One thing you’ll hear a lot in this context is “we approximate the integral through Monte Carlo sampling”. What that means is essentially that we make the following approximation:
$$\begin{equation}\int p(x) f(x) dx = \Expect{x \sim p(x)}{f(x)} \approx \frac{1}{K} \sum_{i=0}^{K} \left[ f(x_i) \right]_{x_i \sim p(x)}\end{equation}$$
Expressed in words what we do is to draw $K$ i.i.d. samples $x_i$ from the probability distribution $p(x)$ and compute the value of $f(x_i)$ for each one. We then take the average of that and call it our Monte Carlo approximation. Simple!
In this case we were quite lucky that there was a closed form expression for $\KL{q(z)}{p(z)}$ (see Eq. 6). But let’s for a moment pretend that there wasn’t. If we then go back to Eq. 5 and apply Monte Carlo approximation we get
$$\begin{equation*} \KL{q(z)}{p(z \given \vect{x})} = \KL{q(z)}{p(z)} + \log p(\vect{x}) - \Expect{z \sim q(z)}{\log p(\vect{x} \given z)} = \\ \Expect{z \sim q(z)}{\log \frac{q(z)}{p(z)}} + \log p(\vect{x}) - \Expect{z \sim q(z)}{\log p(\vect{x} \given z)} \approx \\ \frac{1}{K} \sum_{i=0}^{K} \left[ \log \frac{q(z_i)}{p(z_i)} \right]_{z_i \sim q(z)} + \log p(\vect{x}) - \frac{1}{K} \sum_{i=0}^{K} \left[ \log p(\vect{x} \given z_i) \right]_{z_i \sim q(z)} = \\ \frac{1}{K} \sum_{i=0}^{K} \left[ \log q(z_i) - \log p(z_i) - \log p(\vect{x} \given z_i) \right]_{z_i \sim q(z)} + \log p(\vect{x}), \end{equation*}$$
where $q(z)$ is parameterized by $\alpha_q$ and $\beta_q$ (so we could write it as $q(z; \alpha_q, \beta_q)$ if we wanted to be very verbose in our notation). If we approximate $\KL{q(z)}{p(z \given \vect{x})}$ like this and then let scipy.optimize.minimize() go to work on finding optimal $\alpha^\ast_q$ and $\beta^\ast_q$ we get the video below. As you can see the approximation is no longer perfect, but still pretty good.
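A sketch of that Monte Carlo variant (again my own; to keep the derivative-free search well behaved I reuse the same underlying uniforms on every evaluation, a small tweak known as common random numbers):

```python
# Monte Carlo estimate of KL(q || p(z|x)) - log p(x), minimised without gradients.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

n_h, n_t = 1, 4
K = 1000
u = np.random.default_rng(0).uniform(1e-6, 1 - 1e-6, size=K)   # fixed uniforms

def kl_mc(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    z = beta.ppf(u, a, b)                    # K samples from q(z) = Beta(a, b)
    log_q = beta.logpdf(z, a, b)
    log_prior = beta.logpdf(z, 3, 3)
    log_lik = n_h * np.log(z) + n_t * np.log(1 - z)
    return np.mean(log_q - log_prior - log_lik)

res = minimize(kl_mc, x0=[1.0, 1.0], method='Nelder-Mead')
print(res.x)                                 # close to (4, 7), up to Monte Carlo error
```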
Figure 4. Process of finding our variational posterior $q^\ast(z)$ through numerical optimization of a Monte Carlo approximation of $\KL{q(z)}{p(z \given \vect{x})}$.
### Normal Distribution as Variational Posterior
Beta distributed priors and posteriors are a pretty natural choice when we’re talking about coin tosses, but most of the time we’re not. In many cases we have no real reason to choose one family of distributions over another, and then often end up with normal distributions - mostly because they are easy to work with.
There’s however nothing in the variational framework that requires the prior $p(z)$ and the variational posterior $q(z)$ to come from the same family. To drive that point home we’ll switch to a normal distribution for $q(z)$, parameterized with $\mu_q$ and $\sigma_q$, while leaving $p(z)$ as $\text{Beta}(3, 3)$. Eq. 5 still holds and we can still approximate it with Monte Carlo sampling as above:
$$\begin{equation*} \KL{q(z)}{p(z \given \vect{x})} = \KL{q(z)}{p(z)} + \log p(\vect{x}) - \Expect{z \sim q(z)}{\log p(\vect{x} \given z)} \approx \\ \frac{1}{K} \sum_{i=0}^{K} \left[ \log q(z_i) - \log p(z_i) - \log p(\vect{x} \given z_i) \right]_{z_i \sim q(z)} + \log p(\vect{x}) \end{equation*}$$
Here $q(z)$ is really a function of $\mu_q$ and $\sigma_q$ as well (i.e. it could be written $q(z; \mu_q, \sigma_q)$ if we were really verbose in our notation), and we can find their approximately optimal values through (derivative-free) numerical optimization as we have done previously. But let’s step it up a notch and instead bring in numerical optimization workhorse numero uno: stochastic gradient descent.
For SGD to work we need to be able to differentiate $\KL{q(z)}{p(z \given \vect{x})}$ with respect to the unknown parameters $\vect{\theta_q} = (\mu_q, \sigma_q)$:
$$\begin{equation*} \frac{\partial}{\partial \vect{\theta_q}}\KL{q(z)}{p(z \given \vect{x})} = \frac{\partial}{\partial \vect{\theta_q}} \KL{q(z)}{p(z)} + \frac{\partial}{\partial \vect{\theta_q}} \log p(\vect{x}) - \frac{\partial}{\partial \vect{\theta_q}} \Expect{z \sim q(z)}{\log p(\vect{x} \given z)} \end{equation*}$$
Of course, since $\log p(\vect{x})$ does not depend on $\vect{\theta_q}$ its derivative is zero. $\frac{\partial}{\partial \vect{\theta_q}} \KL{q(z)}{p(z)}$ can often be computed analytically, and often even automatically e.g. by theano.tensor.grad(). But in this case where the prior $p(z)$ and the variational posterior $q(z)$ are from different families of distributions we’ll have to resort to approximation, so we use the definition of Kullback-Leibler divergence and merge the first and third term into a single expectation:
$$\begin{equation*} \frac{\partial}{\partial \vect{\theta_q}}\KL{q(z)}{p(z \given \vect{x})} = \frac{\partial}{\partial \vect{\theta_q}} \Expect{z \sim q(z)}{ \log q(z) - \log p(z) - \log p(\vect{x} \given z)} \end{equation*}$$
Now remember that $q(z)$ is a normal distribution with mean $\mu_q$ and standard deviation $\sigma_q$. We can therefore express $z$ as a deterministic function of a gaussian noise variable $\epsilon$: $z = \mu_q + \sigma_q \epsilon$ where $\epsilon \sim \text{N}(0, 1)$. This allows us to take the expectation over $\epsilon \sim \text{N}(0, 1)$ instead:
$$\begin{equation*} \frac{\partial}{\partial \vect{\theta_q}}\KL{q(z)}{p(z \given \vect{x})} = \frac{\partial}{\partial \vect{\theta_q}} \Expect{\epsilon \sim \text{N}(0, 1)}{ \log q(\mu_q + \sigma_q \epsilon) - \\ \log p(\mu_q + \sigma_q \epsilon) - \log p(\vect{x} \given z = \mu_q + \sigma_q \epsilon)} \end{equation*}$$
That perhaps doesn’t look like a step forward, but since we are now taking the expectation over a distribution that does not depend on $\vect{\theta_q}$ we can safely exchange the order of the derivation and the expectation operators. This maneuver is what’s commonly referred to as the “reparametrization trick”.
After reparametrization we can approximate the gradient of the Kullback-Leibler divergence between our variational posterior and the true posterior with respect to the variational parameters $\vect{\theta_q}$ using Monte Carlo sampling:
$$\begin{equation*} \frac{\partial}{\partial \vect{\theta_q}}\KL{q(z)}{p(z \given \vect{x})} = \Expect{\epsilon \sim \text{N}(0, 1)}{ \frac{\partial}{\partial \vect{\theta_q}} \left( \log q(\mu_q + \sigma_q \epsilon) - \\ \log p(\mu_q + \sigma_q \epsilon) - \log p(\vect{x} \given z = \mu_q + \sigma_q \epsilon) \right)} \approx \\ \frac{1}{K} \sum_{i=0}^{K} \left[ \frac{\partial}{\partial \vect{\theta_q}} \left( \log q(\mu_q + \sigma_q \epsilon_i) - \log p(\mu_q + \sigma_q \epsilon_i) - \log p(\vect{x} \given z = \mu_q + \sigma_q \epsilon_i) \right) \right]_{\epsilon_i \sim \text{N}(0, 1)} \end{equation*}$$
The gradient there may look pretty ugly, but computing partial derivatives like that is what frameworks like Theano or TensorFlow do well. You just express your objective function as the average of $K$ samples from $\left[ \log q(\mu_q + \sigma_q \epsilon_i) - \log p(\mu_q + \sigma_q \epsilon_i) - \log p(\vect{x} \given z = \mu_q + \sigma_q \epsilon_i) \right]_{\epsilon_i \sim \text{N}(0, 1)}$ and call theano.tensor.grad() on that. Easy-peasy.
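Since Theano is no longer maintained, here is a rough equivalent written with PyTorch instead (entirely my own sketch; the initial values, the clamp that keeps samples inside $(0, 1)$, and the optimizer settings are arbitrary choices, not something taken from the post):

```python
# Reparametrised Monte Carlo gradients for q(z) = N(mu, sigma), prior Beta(3, 3).
import torch

n_h, n_t = 1, 4
prior = torch.distributions.Beta(3.0, 3.0)

mu = torch.tensor(0.3, requires_grad=True)
log_sigma = torch.tensor(-2.0, requires_grad=True)   # optimise log(sigma) so sigma stays positive
opt = torch.optim.Adam([mu, log_sigma], lr=0.01)
K = 10

for step in range(2000):
    opt.zero_grad()
    sigma = log_sigma.exp()
    eps = torch.randn(K)
    z = mu + sigma * eps                             # the reparametrization trick
    zc = z.clamp(1e-4, 1 - 1e-4)                     # guard against samples outside (0, 1)
    log_q = torch.distributions.Normal(mu, sigma).log_prob(z)
    log_p = prior.log_prob(zc)
    log_lik = n_h * torch.log(zc) + n_t * torch.log(1 - zc)
    loss = (log_q - log_p - log_lik).mean()          # MC estimate of KL(q || p(z|x)) - log p(x)
    loss.backward()
    opt.step()

print(mu.item(), sigma.item())                       # roughly 0.36 and 0.13
```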
With $K = 10$ samples to approximate the gradient the search for $q^\ast(z)$ progresses as in the video below. As you can see there are two kinds of error in our approximation this time: as usual we're not finding the exact optimal parameters $\mu^\ast_q$ and $\sigma^\ast_q$, but also, even if we did, the variational posterior $q^\ast(z) = \text{N}(\mu^\ast_q, \sigma^\ast_q)$ would not perfectly match the true posterior $p(z \given \vect{x})$.
Figure 5. Process of finding our variational posterior $q^\ast(z) = \text{N}(\mu^\ast_q, \sigma^\ast_q)$ through stochastic gradient descent on $\KL{q(z)}{p(z \given \vect{x})}$.
## Evidence Lower Bound (ELBO)
Up until now we’ve been talking about minimizing $\KL{q(z)}{p(z \given \vect{x})}$, mostly because I feel that makes intuitive sense. But in the literature it’s much more common to talk about maximizing something called the evidence lower bound (ELBO).
Thankfully the difference is minimal. We start from Eq. 5 and derive an expression for (the logarithm of) the evidence $p(\vect{x})$:
$$\begin{equation}\log p(\vect{x}) = \Expect{z \sim q(z)}{\log p(\vect{x} \given z)} - \KL{q(z)}{p(z)} + \\ \KL{q(z)}{p(z \given \vect{x})}\end{equation}$$
Now we observe that $\KL{q(z)}{p(z \given \vect{x})}$ must be positive or zero (because a Kullback-Leibler divergence always is). If we remove this term we thus get a lower bound on $\log p(\vect{x})$:
$$\begin{equation}\log p(\vect{x}) \geq \Expect{z \sim q(z)}{\log p(\vect{x} \given z)} - \KL{q(z)}{p(z)} = \mathcal{L}(\vect{\theta_q})\end{equation}$$
This $\mathcal{L}(\vect{\theta_q})$ is what is often referred to as the evidence lower bound (ELBO). Maximizing $\mathcal{L}(\vect{\theta_q})$ will give us the same variational posterior as minimizing $\KL{q(z)}{p(z \given \vect{x})}$.
But phrasing variational inference as maximization of the ELBO admittedly has at least one conceptual advantage: it shows how close we are to maximum likelihood estimation. It seems inference in the Bayesian regime is a balancing act between best explaining the data and “keeping it simple”, by staying close to the prior. If we strike the second requirement our posteriors collapse onto the maximum likelihood estimate (MLE) and we are back in the Frequentist regime.
## Where to Next?
If you feel comfortable with the basics above I highly recommend Kingma and Welling's classic paper "Auto-Encoding Variational Bayes".
Any and all feedback is greatly appreciated. Ping me on Twitter or, if it’s more specific and/or you need more than 130 characters, post an issue on GitHub. |
proofpile-shard-0030-46 | {
"provenance": "003.jsonl.gz:47"
} | # Edward Sang's arithmetic texts
Edward Sang wrote Elementary Arithmetic (1856) and Higher Arithmetic (1857). These texts are particularly interesting given Sang's later work on constructing logarithm tables. We give below the Prefaces to these books.
1. Preface to 'Elementary Arithmetic' (1856)
The following Treatise on Elementary Arithmetic has been designed as the first of a continuous series of Treatises on those sciences which are usually comprehended under the somewhat indistinct name MATHEMATICS.
During a long experience as preceptor in the higher departments of mathematics, the Author of this work has observed that almost all the difficulties which the student encounters are traceable to an imperfect acquaintance with arithmetic. It seems as if this subject were never regarded as having in it anything intellectual. Arithmetic is considered as a kind of legerdemain, a talismanic contrivance, by means of which results are to be obtained in some occult manner, into the nature of which the student is forbidden to inquire; hence many a one, as he advances in life, finds himself compelled to resume the study of the principles of arithmetic, and discovers that all along he has been working in the dark. Now, truly, there is hardly any branch of human knowledge which affords more scope for intellectual effort, or presents a more invigorating field for mental exercise than the science of number. It has therefore been the Author's aim to prepare a text-book which should call mind, not memory, into exercise, and from which all mere dogmatism should be scrupulously excluded.
In arranging it, he has endeavoured to explain the reasons of the various operations by an examination of the nature of the questions which give rise to them, and in this manner has sought to prepare the student for understanding their applications to the higher branches of science. The gradual formation of systematic numeration has been traced, and the operations of palpable arithmetic have been taught in order to reflect a clearer light on the true nature of our figurate processes. The few lines that are devoted to the explanation of the ancient Greek and Arabic Notations may not be unacceptable to those who study the history and phenomena of the human mind. We are accustomed to designate the ordinary numerals as Arabic. The Arabs themselves call them Rakam Hindi in contradistinction to those described at page 23, which they call Rakam Arabi.
It is hoped that, independently of the arrangement of its parts, this treatise contains enough of new and original matter to prevent it from being regarded as an uncalled-for addition to a class of books already sufficiently numerous. The method of computing from the left hand, which has been practised and taught by the author for thirty years, is now published for the first time. Besides the merit of novelty, this method has the higher merit of great usefulness. A very slight acquaintance with it augments one's power over numbers in an unexpected degree, and the continued practice of it renders computation a pastime. In the ordinary mode of computing we have never occasion to add more than nine to nine, or to take the product of more than nine times nine, and hence the limit of rapidity is soon reached. But when we begin to work from the left hand, every operation adds to our previous experience, and we soon become familiar with large numbers, so much so that the rapidity of mental calculation comes far to exceed the swiftness of the pen.
The multiplication of one large number by another by help of a movable slip of paper, though not to be recommended in actual business, is interesting, and the ability to perform it enables us to follow the operations for shortening the multiplication and division of long decimals.
The subject of prime and composite numbers has been introduced as preparatory to the theory of fractions; and the doctrine of proportion has been reached by means of the method of continued fractions invented by Lord Brouncker. The immense power of this method, the almost unlimited range of its applications, as well as the beautiful simplicity of the idea from which it arose, recommend it to the close attention of every calculator. The doctrine of continued fractions is here divested of its technicalities, and presented in such a form as to be intelligible to beginners; the formation of the successive approximating fractions being deduced without the aid of artificial symbols, perhaps more clearly than with that aid.
Among the minor improvements which have been introduced, the mode of obtaining at once the continued product of several numbers, the plan for shortening division by a number of two or three places, and the arrangement of the work for finding the greatest common divisor of two numbers, may be mentioned; Fourier's Division Ordonnée is also new to the English reader.
It was a matter of anxious deliberation whether the answers to the questions should be printed, and ultimately it was resolved to give these answers in a separate key. The Author, however, most earnestly entreats the student to trust to his own thorough comprehension of the matter, and to the care with which he works. Let him carry the feeling with him, that if his result differ from that given in the key, the key is likely to be wrong: above all things, cultivate self-reliance. In very many cases the manner in which the result is worked out is of more importance than the mere obtaining of it; and in some cases, for obvious reasons, the answers ought not to be given at all.
The Author is in hopes of speedily laying before the public, in continuation, a treatise on the higher arithmetic, in which the doctrines of Powers, Roots, and Logarithms, are completely investigated.
Edinburgh, June 1856.
2. Preface to 'Higher Arithmetic' (1857)
In this Second Volume on Arithmetic an account is given of the doctrines of Powers, Roots, and Logarithms, so far as that can be well done without the aid of general symbols. The Treatise is intended not merely as a Text-Book on these subjects, but also as an introduction to Algebra: indeed, if we adopt the original meaning of the Arab words (ylim ul jibr, the science of powers), the present work forms the first, and not the least important chapter of that science.
To those who have only considered the subjects of direct, inverse, and fractional powers, and the cognate subject of Logarithms, in the light which the modern notation throws upon them, it may seem vain to attempt to explain these matters with no aid beyond that of our ordinary numeral notation; but an examination of the following pages may serve to show that the mind does not require the aid of artificial symbols to detect and appreciate even recondite properties of numbers; and the Author flatters himself that he has brought the leading properties of Logarithms completely within the bounds of arithmetic.
This has been accomplished by the help of a new method for extracting all roots, of which the previously well-known processes for extracting the square and cube roots are the two simplest cases. This method was given, by implication, in a small treatise "On the Solution of Algebraic Equations of all Orders, Edinburgh, 1829;" it is here simplified and adapted to ordinary arithmetic. By its means we obtain the root, and all the inferior powers of the root, with great rapidity; the simplicity of the arrangement being the better seen, the higher the order of the root which we extract.
In the actual construction of the first Decimal Logarithmic Tables, Briggs used the repeated extraction of the square root, until the results exceeded unit by fractions so small as to render the excesses sensibly proportional to the exponents. Had he known the method of extracting fifth roots, his labour would have been greatly lessened. The principle used by Briggs is, in essence, identical with that adopted by Dodson in the construction of his Anti-Logarithmic Canon, and with that which is followed at page 119; the only difference is, that the ability to extract fifth roots has given us a much greater command of the subject than either Dodson or Briggs possessed.
The direct computation of the logarithm of a number, that is, in the language of modern algebra, the direct solution of the equation $a^{z} = n$, has not heretofore been obtained; for although the well-known formula
$z=\Large\frac{(n-1)-\frac 1 2(n-1)^2 +\frac 1 3(n-1)^3-\frac 1 4(n-1)^4+ \text{etc.}}{(a-1)-\frac 1 2(a-1)^2 +\frac 1 3(a-1)^3-\frac 1 4(a-1)^4+ \text{etc.}}$
be a symbolical solution, it is only susceptible of direct application when $a$ and $n$ differ from unit by small fractions. In common logarithms $a - 1$ has the value 9, and $z$ has to be computed indirectly through the intervention of other numbers.
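In modern notation the numerator and denominator are the series for the natural logarithms of $n$ and $a$, so a small R sketch (the helper name and the sample values are illustrative only) reproduces the formula and shows the restriction:

log_series <- function(x, terms = 50) {
  k <- 1:terms
  sum((-1)^(k + 1) * (x - 1)^k / k)       # converges only when |x - 1| < 1
}
a <- 1.05; n <- 1.2                        # values differing from unit by small fractions
log_series(n) / log_series(a)              # about 3.74
log(n, base = a)                           # the same value from R's own logarithm

With $a = 10$ the denominator's series has $a - 1 = 9$ and diverges, which is why the common logarithm cannot be attacked directly in this way.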
The student of the Higher Algebra will, therefore, be somewhat surprised to find an exceedingly simple and rapid solution, obtained by a train of reasoning which requires only a clear perception of the nature of powers, and which is altogether independent of notation.
This is another to be added to the rapidly accumulating testimonies of the usefulness of Lord Brouncker's continued fractions; for although the algorithm and definition of these fractions have not been employed, the essential idea has been freely used.
These two new processes, viz. the extraction of all roots, and the direct solution of the exponential equation, have enabled the Author to place the whole subject in a clear light, and to complete the Theory of Practical Arithmetic without calling in the dangerous aids of indefinite symbols and arbitrary notation.
In order to prepare the student for following the reasoning to be afterwards used in algebraic investigations, and also for the purpose of fortifying his knowledge of what has already been gone over, a short notice has been added of various Numeration scales. The study of this part of the work may serve to free the mind from those prejudices which are apt to attend the use of a single system, and may lead it to form just and comprehensive views of arithmetic in general.
Edinburgh, March 1857.
Last Updated January 2021 |
proofpile-shard-0030-47 | {
"provenance": "003.jsonl.gz:48"
} | # Math Help - Trig periods
1. ## Trig periods
How do I figure out which period the function is in?
I'm doing the problem $3\tan^2{x} - 1 = 0$.
I got $\tan{x} = \frac{1}{\sqrt{3}}$, and I have to find a solution for $x$. How do I know $\tan$ is on the interval $(0, \pi)$ and not $(0, 2\pi)$?
2. ## Re: Trig periods
The period of a periodic function $f(x)$ is the smallest positive real number $\alpha$ such that $f(x+\alpha)=f(x)$ for all $x$ in the domain of the function. In case of $\tan,$ $\tan(x+\pi)=\tan x$ for all real $x$ and no positive real number $\alpha$ smaller than $\pi$ satisfies $\tan(x+\alpha)=\tan x$ for all $x.$ (It's true that $\tan(x+2\pi)=\tan x$ for all $x$ but $2\pi$ is not the smallest such positive real number.)
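A quick numerical illustration of this, in R at an arbitrary point:

all.equal(tan(1.234 + pi), tan(1.234))    # TRUE: a shift by pi leaves tan unchanged
all.equal(tan(1.234 + pi/2), tan(1.234))  # not TRUE: pi/2 is not a period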
3. ## Re: Trig periods
Originally Posted by noork85
How do I figure out which period the function is in?
I'm doing the problem $3\tan^2{x} - 1 = 0$.
I got $\tan{x} = \frac{1}{\sqrt{3}}$, and I have to find a solution for $x$. How do I know $\tan$ is on the interval $(0, \pi)$ and not $(0, 2\pi)$?
$3\tan^2{x} - 1 = 0$
$\tan{x} = \pm \frac{1}{\sqrt{3}}$
since the solution interval is not given, all possible solutions are ...
$x = \frac{\pi}{6} + k\pi ; k \in \mathbb{Z}$
$x = -\frac{\pi}{6} + k\pi ; k \in \mathbb{Z}$ |
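A quick numerical spot-check in R (arbitrary members of each family) confirms that these satisfy the original equation:

x <- c(pi/6, pi/6 + pi, -pi/6 + 2*pi)
3 * tan(x)^2 - 1    # all zero, up to floating-point error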
proofpile-shard-0030-48 | {
"provenance": "003.jsonl.gz:49"
} | # Natural Resource Depletion and Depreciation of Related Plant Assets E10A. Mertz Company purchased...
###### Natural Resource Depletion and Depreciation of Related Plant Assets
E10A. Mertz Company purchased land containing an estimated 5 million tons of ore for a cost of $8,800,000. The land without the ore is estimated to be worth $500,000. During its first year of operation, the company mined and sold 750,000 tons of ore. Compute the depletion charge per ton. Compute the depletion expense that Mertz should record for the year.
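A sketch of the standard computation (a solution outline, not the textbook's printed answer): the depletable cost is the purchase price less the residual value of the land.

Depletion charge per ton = ($8,800,000 - $500,000) / 5,000,000 tons = $1.66 per ton
Depletion expense for the year = 750,000 tons × $1.66 per ton = $1,245,000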
proofpile-shard-0030-49 | {
"provenance": "003.jsonl.gz:50"
} | # American Institute of Mathematical Sciences
January 1997, 3(1): 91-106. doi: 10.3934/dcds.1997.3.91
## On the global solvability of symmetric hyperbolic systems of Kirchhoff type
1 Dipartimento di Costruzioni, Istituto Universitario di Architettura, Tolentini, S. Croce 191 - 30135 Venezia, Italy
Received April 1996 Revised July 1996 Published October 1996
We shall prove here the global solvability for small initial data for symmetric hyperbolic systems with integro-differential coefficients. In this way, we will extend some results obtained in [5], [6], [8], [11] for the classic Kirchhoff equation and in [3] for regularly hyperbolic systems.
Citation: Renato Manfrin. On the global solvability of symmetric hyperbolic systems of Kirchhoff type. Discrete & Continuous Dynamical Systems, 1997, 3 (1) : 91-106. doi: 10.3934/dcds.1997.3.91
proofpile-shard-0030-50 | {
"provenance": "003.jsonl.gz:51"
} | # How do you differentiate f(x)= (xe^x+4)^3 using the chain rule.?
May 27, 2018
= $3 {e}^{x} \left(x + 1\right) {\left(x {e}^{x} + 4\right)}^{2}$ or some version of that
#### Explanation:
This is the chain rule.
${\left(x {e}^{x} + 4\right)}^{3}$
$\frac{dy}{dx} = 3 {\left(x {e}^{x} + 4\right)}^{2} \cdot \left(\text{derivative of the inside}\right)$
$= 3 {\left(x {e}^{x} + 4\right)}^{2} \cdot \left(x \cdot {e}^{x} + {e}^{x}\right)$
$= 3 {\left(x {e}^{x} + 4\right)}^{2} \cdot \left(x {e}^{x} + {e}^{x}\right)$
= $3 {e}^{x} \left(x + 1\right) {\left(x {e}^{x} + 4\right)}^{2}$ |
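A quick way to verify the closed form is to compare it with a central-difference approximation at an arbitrary point, for example in R (the test point 1.3 is arbitrary):

f  <- function(x) (x * exp(x) + 4)^3
fp <- function(x) 3 * exp(x) * (x + 1) * (x * exp(x) + 4)^2
x0 <- 1.3
(f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # numerical derivative
fp(x0)                                  # closed form; the two agree to several digits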
proofpile-shard-0030-51 | {
"provenance": "003.jsonl.gz:52"
} | # The Large Time-Frequency Analysis Toolbox
- All your frame are belong to us -
You can have a look at the Sourceforge download page to see all available versions, or just download the latest one by clicking on the button.
## Installation
To install, simply unpack the package. The toolbox is contained in the 'ltfat' directory and in all the subdirectories. To use the toolbox, start Octave/Matlab, change to the 'ltfat' directory and run the
ltfatstart
command. This will set up the necessary paths and perform the necessary initializations.
If you have downloaded a binary package for Windows 7 or MacOS, compiled mex files for 64 bit Matlab are included in the package. Otherwise, you can compile the mex interfaces yourself.
To compile the mex/oct interfaces for faster execution of the toolbox, type the command:
ltfatmex
Further installation instructions can be found in the files INSTALL-Matlab and INSTALL-Octave. The currently supported platforms are:
• Octave 3.4 and higher on Linux. Tested on 32/64 bit Intel/AMD platforms.
• Octave 3.4 and higher on Windows XP, Vista, Windows 7 (Windows 8 has not been tested).
• Octave 3.4 and higher on Mac OS X. Has not been tested, but should work out of the box.
• Matlab 2009b and later on 32/64 bit Windows (mexext=mexw32 and mexext=mexw64).
• Matlab 2009b and later on 32/64 bit Linux (mexext=mexglx and mexext=mexa64).
• Matlab 2009b and later on Mac OS X (mexext=mexmaci and mexext=mexmaci64). |
proofpile-shard-0030-52 | {
"provenance": "003.jsonl.gz:53"
} | # Do supermassive black holes at galactic centers and the galaxis containing them spin in the same axis?
If the galactic mass is rotating around a central supermassive black hole, should their spin axis not be the same, just as we would obtain for the rotation of a star and its planets ?
• One difference is, in a planet-ring system or a star-planets system, the mass and gravity of the central body completely dominates the system (99%+ of the mass). For a black hole-galaxy this is not the case, even with the baryonic matter, but even more so if we include the dark matter. May 27 at 2:37
• Eg, our SMBH, Sagittarius A*, only contains roughly 4 millionths of the total Milky Way mass. May 27 at 7:23
• Just because they may have started out with the same spin alignment does not mean they still have the same alignment billions of years later. Specifically, the supermassive black hole has presumably been capturing and absorbing other stars, which can alter its spin alignment. May 27 at 12:03
The nearest supermassive black hole has a spin that isn't aligned with the angular momentum vector of its galaxy. So the answer must be no.
The black hole at the centre of our Galaxy has a spin axis that is probably inclined by less than 50 degrees to our line of sight (Akiyama et al. 2022), as revealed by the recent Event Horizon Telescope results. Given that the Sun is close to the Galactic plane but 25,000 light years from the Galactic centre, this means the black hole spin is nowhere near parallel to the angular momentum of the disk of the Milky Way.
Whether the spin is parallel or not to that of its host galaxy would depend very much on how "old" the black hole is, whether it formed from the merger of two or more black holes and the angular momentum of any material that is being fed to it.
For example, in our Galaxy, the black hole only dominates the dynamics of material within a few parsecs of the centre. It is likely being fed material (and angular momentum) from the winds of a cluster of massive stars that orbit it. These stars are not arranged in a disk and the structures around the central black hole have a variety of orientations, none of which seem to align closely with the Galactic plane (Murchikova et al. 2019).
In other galaxies with supermassive black holes, it has often been observed that the relative orientations of the galaxy disk, the accretion disk around the black hole and the jets emerging from the central regions are essentially random (Schmitt & Kinney 2002). On the other hand, if most of the black hole mass is built up by accretion of gas fed to it from gas circulating in a similar way to the galactic plane, then alignment would be expected.
The observational evidence isn't very decisive. The orientations of the spins of supermassive black holes in other galaxies have not been measured. Indirect evidence comes from the measurements of spin magnitudes via X-ray observations of accreting gas. Reynolds (2021) reviews this evidence and concludes that the low spin rates seen in many of the more massive black holes ($$>3\times 10^7$$ solar masses) argues in favour of multiple mergers and incoherent accretion. These would favour a fairly random level of alignment between galaxy and black hole spins.
There is no particular reason they need to. A planet does not necessarily have its axis aligned with the solar system or the galaxy. A star does not necessarily have its axis aligned with its stellar system or the galaxy. Our own star's axis is about 7 degrees out of alignment with the plane of the ecliptic.
If a black hole were aligned with the galaxy, and a large mass (say a star) impacted the BH at some weird angle, the result would not still be aligned. There is no particular reason that accretion has to proceed symmetrically. So the evolution of the BH could pass through a phase where it is aligned, but is unlikely to stay there.
Probably it won't be massively far off, because the average of accretion is probably going to be round-about aligned with the galaxy. But it is unlikely to be perfectly aligned. |
proofpile-shard-0030-53 | {
"provenance": "003.jsonl.gz:54"
} | Section 2-2 : Linear Equations. A linear equation is a first-degree algebraic expression in one, two or more variables equated to a constant; y = 3x + 2 and y - 4x = 9 are examples. Each term is either a constant or the product of a constant and a single variable: the variables cannot carry exponents (no x squared), cannot multiply each other and cannot divide each other. Every linear function can be written in slope-intercept form y = mx + b, where m is the slope and b is the y-intercept (y = 2x + 5, y = -3x + 2 and y = 4x - 1 are typical examples), and its graph is a straight line. Equivalently, a function is linear when the dependent and independent variables change at a constant rate; for example, the relation between feet and inches is always 12 inches per foot.
Linear relationships model many applied situations. In economics the demand function relates the price per unit of an item to the number of units that consumers will buy at that price, a simple cost function has the form C(x) = fixed cost + variable cost, and an objective function in linear programming is a linear function in two or more variables that is to be maximized or minimized subject to linear constraint equations.
Real-world situations involving two or more linear relationships lead to systems of linear equations, which can be solved by substitution, by elimination or graphically. Graphically, a solution is a point the lines have in common, and three types of answers are possible: exactly one solution, no solution (parallel lines) or infinitely many solutions (the same line written twice); the zero found by solving a linear function graphically must match the zero found by solving the same function algebraically. Graphs of linear functions can also be obtained by transforming the identity function f(x) = x with shifts up, down, left or right, stretches, compressions or reflections.
proofpile-shard-0030-54 | {
"provenance": "003.jsonl.gz:55"
} | # How do velocity and acceleration differ?
Mar 30, 2018
See below:
#### Explanation:
Common calculus problems involve displacement-time functions, $d \left(t\right)$. For the sake of argument, let's use a quadratic to describe our displacement function.
$d \left(t\right) = {t}^{2} - 10 t + 25$
Velocity is the rate of change of displacement- the derivative of a $d \left(t\right)$ function yields a velocity function.
$d ' \left(t\right) = v \left(t\right) = 2 t - 10$
Acceleration is the rate of change of velocity- the derivative of a $v \left(t\right)$ function or the second derivative of the $d \left(t\right)$ function yields an acceleration function.
$d ' ' \left(t\right) = v ' \left(t\right) = a \left(t\right) = 2$
Hopefully, that makes their distinction clearer. |
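For readers who want to check this in software, base R can carry out the same differentiation symbolically with D() (the object names below are arbitrary):

d <- expression(t^2 - 10*t + 25)
v <- D(d, "t")          # 2 * t - 10, the velocity function
a <- D(v, "t")          # 2, the (constant) acceleration
eval(v, list(t = 5))    # velocity at t = 5 is 0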
proofpile-shard-0030-55 | {
"provenance": "003.jsonl.gz:56"
} | SE of fit versus SE of prediction
I would like to get the standard error on a prediction. Using R glm, I can get the SE of the fit for a specific prediction:
mod <- glm(y~wa_WSI, data=mydata, family=gaussian(link="identity"))
predict.glm(mod,newdata=newdata, type="response", se.fit=T)
But when I compare the predictions with the actual values, this number seems way too small. I found a formula for "standard error of the estimate" which is $\sqrt{s/(n-p)}$ where $s$ is the sum of the squared residuals, $n$ is the number of data points, and $p$ is the number of terms in the regression. This gives me a much larger result, but is not for a single prediction.
My question is, is the SE formula above the formula I should use and is there some way to get it from the value R gives me for se.fit so that it is specific for a particular prediction?
• The se.fit that predict.glm produces is a standard error for the mean prediction. For some GLMs it's meaningful to talk about a prediction interval (e.g. for the normal and Gamma), and a standard error for a future observation, but even in the cases where it makes sense, the problem - while easy for the normal - is difficult in the general case. You can do (for example) an asymptotic simulation -- simulate from $\hat\eta-\eta$ and then from $(Y|\eta-\hat\eta)$. – Glen_b -Reinstate Monica Feb 14 '14 at 23:04
It is hard to answer without knowing more about what mod is. That is why we suggest a reproducible example.
If mod is a glm fit with a 'gaussian' family (the default) then it is just a linear model and you can use predict.lm instead which has the interval argument that can be set to "prediction" to compute prediction intervals.
If mod is a glm fit with a non-Gaussian family then the concept of a standard error of prediction may not even make sense (what is the prediction interval when the predictions are all TRUE/FALSE?).
If you can give more detail (a reproducible example and a clear statement of what you want) then we will have a better chance of giving a useful answer.
• I am so happy you pointed this simple thing out to me!! Yes my model was actually linear so I changed it to use lm and I have the CIs. I am still a little confused about what the se.fit actually is.. – John Feb 14 '14 at 19:59
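To make that concrete for the Gaussian/identity case, a minimal sketch using the object names from the question (mydata, wa_WSI, newdata): for a linear model the standard error of a new observation is sqrt(se.fit^2 + sigma^2), which is what the prediction interval is built from.

mod <- lm(y ~ wa_WSI, data = mydata)
predict(mod, newdata = newdata, interval = "prediction", level = 0.95)  # fit, lwr, upr
pr <- predict(mod, newdata = newdata, se.fit = TRUE)
sqrt(pr$se.fit^2 + summary(mod)$sigma^2)   # standard error for a single new observation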
I would like to throw in a comment for non-normal distributions and non-identity link functions. se.fit=T yields standard errors of the fitted (mean) prediction, i.e. a measure of uncertainty for the predicted value. This prediction, by the central limit theorem, can be assumed to be normally distributed at the link scale, and hence its standard error can be given as the standard deviation of a normal distribution.
When using type="response", the prediction is back-transformed with the anti-link function (e.g. plogis for the logit-link). Using type="response"and se.fit=T yields non-sensical values, as it only returns one set of standard errors at the response scale. As the link-function is non-linear, the symmetric errors at the link scale must be asymmetric at the response scale. Thus, we can choose type="response" or se.fit=T, but not both when using non-identity link functions. (I don't understand why predict.glm has not been programmed to throw an error in this case.) |
proofpile-shard-0030-56 | {
"provenance": "003.jsonl.gz:57"
} | A body is moving towards north with an initial velocity of 13 m/s. It is subjected to a retardation of 2 m/s² towards south. What is the distance travelled in the 7th second?
Joshi sir comment
displacement in any particular second = u + at - (a/2), where t is the number of that second
here the direction of motion is opposite to the direction of the acceleration, so (taking magnitudes) the formula becomes u - at + (a/2)
displacement in the 7th second = 13 - 2(7) + 1 = 0
now we know that the direction of motion becomes south after 6.5 s (when the velocity reaches zero), so the distance covered in the last half second = ut + (1/2)at² = 0 + (1/2)(2)(0.5)² = 1/4 m
the same is the distance for the first half of the 7th second, so the total distance in the 7th second = 1/4 + 1/4 = 1/2 m
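A quick numerical check, taking north as positive so that the position in metres is s(t) = 13t - t²:

s <- function(t) 13 * t - t^2
s(7) - s(6)                              # net displacement in the 7th second: 0
abs(s(6.5) - s(6)) + abs(s(7) - s(6.5))  # distance actually covered: 0.5 m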
proofpile-shard-0030-57 | {
"provenance": "003.jsonl.gz:58"
} |
The apply family of functions (apply, lapply, sapply, vapply, mapply, rapply and tapply) lets R operate on whole matrices, lists and vectors with very few lines of code, and forms the basis of more complex combinations. Iterative control structures (loops like for, while and repeat) allow the same repetition of instructions, but at large scale such loops consume more time, and the apply calls are usually clearer. apply() itself applies a function over the margins (rows or columns) of an array or matrix; if the data live in a vector or list rather than a matrix, lapply, sapply or vapply are used instead. lapply always returns a list of the same length as its input, while sapply is a user-friendly wrapper that tries to simplify the result to a vector, matrix or array where possible; both treat every element of the vector or list as something the supplied function can be applied to. mapply is the multivariate version, applying a function to the first elements of each of several list or vector arguments, then the second elements, and so on, recycling arguments if necessary, and tapply applies a function to subsets of a vector defined by a grouping factor.
vapply() is a variant of sapply() that allows you to describe, through its FUN.VALUE template, exactly what the output should be; there are no corresponding variants for tapply(), apply() or Map(). It is safe because sapply() guesses the shape of its result and can return a list, a vector or a matrix, which makes it difficult to program with in non-interactive settings and can hide silent errors, whereas vapply() checks every return value against the declared template. Converting sapply() expressions in your own R scripts to vapply() expressions is therefore good practice, and vapply() may also perform faster than sapply() for large datasets. Finally, the future.apply package (1.0.0 on CRAN) provides futurized counterparts of the base apply functions: a long-running apply(), lapply() or mapply() call can be parallelised simply by prepending future_ to the call.
proofpile-shard-0030-58 | {
"provenance": "003.jsonl.gz:59"
} | SERVING THE QUANTITATIVE FINANCE COMMUNITY
Posts: 23951
Joined: September 20th, 2002, 8:30 pm
### Re: How long does it take to solve a jigsaw?
Our town has a puzzle maker that specializes in puzzles with whimsy pieces such as these:
Paul
Topic Author
Posts: 10093
Joined: July 20th, 2001, 3:28 pm
### Re: How long does it take to solve a jigsaw?
Liberty Puzzles? I always think that calling things "Liberty This" and "Freedom That" as Americans are wont to do is virtually communist. It's the sort of thing the Russians and Chinese do.
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm
### Re: How long does it take to solve a jigsaw?
Cool, does he have Escher tilings?
Posts: 23951
Joined: September 20th, 2002, 8:30 pm
### Re: How long does it take to solve a jigsaw?
There are no Eschers that I know of but there is a Pieter Bruegel (one of my favorite artists) puzzle of "Children's Games" in which many of the whimsey pieces are children playing which makes it hard to spot the painted bits of children playing.
Posts: 23951
Joined: September 20th, 2002, 8:30 pm
### Re: How long does it take to solve a jigsaw?
Liberty Puzzles? I always think that calling things "Liberty This" and "Freedom That" as Americans are wont to do is virtually communist. It's the sort of thing the Russians and Chinese do.
Yes, {Liberty, Freedom, Patriot} are magic incantations that if repeated often enough become true, or at least seem that way.
But aren't all countries forged from a combination of iron will and ironic slogans?
Paul
Topic Author
Posts: 10093
Joined: July 20th, 2001, 3:28 pm
### Re: How long does it take to solve a jigsaw?
And we used to use Imperial a lot! The default should be to name things after people, ideally me...something I practice!
Paul
Topic Author
Posts: 10093
Joined: July 20th, 2001, 3:28 pm
### Re: How long does it take to solve a jigsaw?
There are no Eschers that I know of but there is a Pieter Bruegel (one of my favorite artists) puzzle of "Children's Games" in which many of the whimsey pieces are children playing which makes it hard to spot the painted bits of children playing.
At those prices you aren't going to give up too soon!
Posts: 23951
Joined: September 20th, 2002, 8:30 pm
### Re: How long does it take to solve a jigsaw?
There are no Eschers that I know of but there is a Pieter Bruegel (one of my favorite artists) puzzle of "Children's Games" in which many of the whimsey pieces are children playing which makes it hard to spot the painted bits of children playing.
At those prices you aren't going to give up too soon!
Indeed! Yet they are really nice puzzles made of laser-cut 1/4" wood.
And they are far cheaper on a $/hour basis than theatre tickets or Michelin-starred restaurants.
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm
### Re: How long does it take to solve a jigsaw?
$125 laser cutter, your break even point is at 3/4 puzzles!
http://m.ebay.com/itm/281938936220
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm
### Re: How long does it take to solve a jigsaw?
Here is a thought:
One possible puzzle is little rectangular tiles of various sizes that only fit in the square in one way (and its mirrors). This is probably equivalent to "bin packing" which is an NP-complete problem. Another classic NP-complete problem is 3SAT which is about finding boolean values for variables (x1=true, x2=true, x3=false) such that a sequence of clauses involving those variables are all true. This can likely be translated into puzzle shapes that have to match.
All these NP-complete problems are equivalently hard, with no known polynomial-time algorithm (only exponential ones). There will always be special cases that require exponential time, and if you can solve any NP-complete problem in polynomial time then you can solve *all* of them in polynomial time.
So IMO jigsaw is NP-complete and you won't be able to find a polynomial time algorithm.
Posts: 23951
Joined: September 20th, 2002, 8:30 pm
### Re: How long does it take to solve a jigsaw?
Standard jigsaws only become NP-complete if the solver cannot reject a prospective solution with bounded M<<N pieces. That is, the solver faces a non-zero chance of placing N-1 pieces in what seems like a valid configuration only to find that the last piece fails to fit.
For most jigsaw puzzles, however, the piece shape and color patterns across the boundary are so unique that it's virtually impossible to incorrectly put even two pieces together.
That said, I have seen puzzles with identically-shaped innie & outie tabs (and large expanses of uniform color) where one can get trapped and have to undo a set of pieces that seemed to go together.
Last edited by Traden4Alpha on January 11th, 2017, 10:33 pm, edited 1 time in total.
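A minimal sketch of the brute-force picture discussed in the last few posts (an illustrative Python toy, not from the thread): treat a one-dimensional "jigsaw" as pieces with left/right edge labels. With unique edges a partial assembly can be rejected immediately and the puzzle is easy; with repeated edges the fallback is a search over orderings, which is where the exponential blow-up lives.

```python
from itertools import permutations

# Toy 1-D jigsaw: a piece is (left_edge, right_edge); a solution is an ordering
# in which each piece's right edge equals the next piece's left edge.
def fits(order):
    return all(order[i][1] == order[i + 1][0] for i in range(len(order) - 1))

def brute_force(pieces):
    # Worst case: n! orderings to check.
    return next((p for p in permutations(pieces) if fits(p)), None)

pieces = [(1, 2), (0, 1), (2, 3)]   # unique edges: only one way to assemble
print(brute_force(pieces))          # ((0, 1), (1, 2), (2, 3))
```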
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm
### Re: How long does it take to solve a jigsaw?
Yes, good insight, the common puzzles are incrementally solvable and way too easy — there is an opportunity here!
this fractal puzzle looks difficult, very confusing.
Posts: 23951
Joined: September 20th, 2002, 8:30 pm
### Re: How long does it take to solve a jigsaw?
Yes, good insight, the common puzzles are incrementally solvable and way too easy — there is an opportunity here!
this fractal puzzle looks difficult, very confusing.
Love that fractal puzzle!
One of the ancient versions of the Macintosh OS (maybe System 8. something?) came with a simple digital jigsaw puzzle that let one pick N. These days, with multi-finger touchscreen UIs and data logging systems, one could create an even better digital jigsaw puzzle system and gather empirical data on solver performance as a function of N. |
proofpile-shard-0030-59 | {
"provenance": "003.jsonl.gz:60"
} | # zbMATH — the first resource for mathematics
Multipliers of Hankel transformable generalized functions. (English) Zbl 0801.46047
Summary: Let $${\mathcal H}_ \mu$$ be the Zemanian space of Hankel transformable functions, and let $${\mathcal H}_ \mu'$$ be its dual space. In this paper $${\mathcal H}_ \mu$$ is shown to be nuclear, hence Schwartz, Montel and reflexive. The space $${\mathcal O}$$, also introduced by Zemanian, is completely characterized as the set of multipliers of $${\mathcal H}_ \mu$$ and of $${\mathcal H}_ \mu'$$. Certain topologies are considered on $${\mathcal O}$$, and continuity properties of the multiplication operation with respect to those topologies are discussed.
##### MSC:
46F12 Integral transforms in distribution spaces
46F10 Operations with distributions and generalized functions
Full Text: |
proofpile-shard-0030-60 | {
"provenance": "003.jsonl.gz:61"
} | # Proving two gcd's equal
I'm having problems with an exercise from Apostol's Introduction to Analytic Number Theory.
Given $x$ and $y$, let $m=ax+by$, $n=cx+dy$, where $ad-bc= \pm 1$. Prove that $(m,n)=(x,y)$.
I've tried to give a proof, but I suspect it's wrong (or at least not very good). I would be very thankful for any hints/help/advice!
My proof:
We observe that since $ad-bc= \pm 1$, $ad=bc \pm 1$, and $(ad,bc)=1$. Now, $(a,b) \mid m$ and $(c,d) \mid n$, but
$$(a,b) = ((d,1)a,(c,1)b)=(ad,a,bc,b)=(ad,bc,a,b)=((ad,bc),a,b)=(1,a,b)=1.$$
Similarly, we determine $(c,d)=1$. So, $1=(a,b) \mid m$ and $1=(c,d) \mid n$. But $(x,y)$ also divides $m$ and $n$. Since $(x,y) \geq (a,b)=(c,d)=1$, this implies that $(x,y)=m,n$. Hence $(m,n)=((x,y),(x,y))=(x,y)$.
-
You seem to work only with gcd:s of $a,b,c,d$. The claim was about $x,y,m,n$! I'm afraid I cannot follow your thinking in the last two lines. Try first a special case like $a=2,d=b=c=1$ in which case you are to prove that $(2x+y,x+y)=(x,y)$ holds for all integers $x,y$. Alternatively you can try to directly follow the hint of my answer, but I realized after posting that you may have problems at a different spot. – Jyrki Lahtonen Jul 19 '11 at 10:51
I second Jyrki's comment: the last two sentences of your proof don't make any sense to me. – anon Jul 19 '11 at 10:55
Here is a proof. Call $z=(x,y)$ and $p=(m,n)$. The expressions of $m$ and $n$ as integer linear combinations of $x$ and $y$ show that $z$ divides $m$ and $n$ hence $z$ divides $p$ by definition of the gcd. On the other hand, $\pm x=dm-bn$ and $\pm y=cm-an$ hence the same argument used "backwards" shows that $p$ divides $\pm x$ and $\pm y$, which implies that $p$ divides $z$, end of proof.
-
Thanks, I got it now! – Carolus Jul 19 '11 at 11:45
Hint: Any common factor of $x$ and $y$ clearly divides both $m$ and $n$, because if $f\mid x$ and $f\mid y$, then also $f\mid ax+by$ et cetera.
The task at hand is to prove the reverse fact: any common factor of $m$ and $n$ also divides $x$ and $y$. One way of seeing this follows from the matrix equation $$\left(\begin{array}{c}m\\n\end{array}\right)= \left(\begin{array}{cc}a&b\\c&d\end{array}\right)\left(\begin{array}{c}x\\y\end{array}\right).$$
Does anything about the given condition on $ad-bc$ help you with the inverse of the above $2\times2$ matrix?
-
First note that $(x,y)$ divides both $m$ and $n$, hence $(x,y)\mid(m,n)$. So it suffices to show $(m,n)\mid(x,y)$.
Consider the matrix $T=\left(\begin{array}{cc}a & b \\ c & d\end{array}\right)$. By assumption $\det T=ad-bc=\pm 1$. Hence its inverse satisfies
$T^{-1}=\pm\left(\begin{array}{cc}d & -b \\ -c & a\end{array}\right)$.
Now by definition of $m$ and $n$ we have
$T\left(\begin{array}{c}x\\ y \end{array}\right)=\left(\begin{array}{c}m\\ n \end{array}\right)$.
Hence
$\left(\begin{array}{c}x\\ y \end{array}\right)=T^{-1}\left(\begin{array}{c}m\\ n \end{array}\right)=\left(\begin{array}{c}dm-bn\\ -cm+an \end{array}\right),$
showing that $x$ and $y$ are integer combinations of $m$ and $n$, so every common divisor of $m$ and $n$ divides both $x$ and $y$, i.e. $(m,n)\mid(x,y)$.
-
HINT $\$ (excerpted from my answer to a similar question a few months ago)
Generally, inverting a linear map by Cramer's Rule (multiplying by the adjugate) yields
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} X \\ Y \end{pmatrix}\ \ \Rightarrow\ \ \begin{array}{l} \Delta\, x = d\,X - b\,Y \\ \Delta\, y = -c\,X + a\,Y \end{array},\qquad \Delta = ad-bc$$
Therefore $\rm\ n\ |\ X,Y\ \Rightarrow\ n\ |\ \Delta\:x,\:\Delta\:y\ \Rightarrow\ n\ |\ gcd(\Delta\:x,\Delta\:y)\ =\ \Delta\ gcd(x,y)\:.$
-
HINT $\$ Reduce to the case $\:(x,y) = 1\:$ by cancelling any gcd. Now Bezout's GCD identity implies that $\ (x,y) = 1\$ iff it is a column in a matrix of determinant $1\:.\:$ Therefore
$$(x,y) = 1 \ \Rightarrow\ 1\ = \ \begin{vmatrix} a & b \\ c & d\end{vmatrix}\ \begin{vmatrix} x & u \\ y & v \end{vmatrix}\ =\ \begin{vmatrix} a\:x+b\:y & s \\ c\:x+d\:y & t \end{vmatrix} \ \Rightarrow\ (a\:x+b\:y,\ c\:x+d\:y)\ =\ 1$$
The converse follows the same way using the inverse transformation (or by easy arithmetic).
- |
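A quick numerical sanity check of the identity discussed above (an illustrative sketch, not part of the original thread): draw random integer matrices with $ad-bc=\pm 1$ and compare the two gcds.

```python
import math
import random

for _ in range(10_000):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    if a * d - b * c not in (1, -1):
        continue                      # keep only the unimodular cases
    x, y = random.randint(1, 999), random.randint(1, 999)
    m, n = a * x + b * y, c * x + d * y
    assert math.gcd(abs(m), abs(n)) == math.gcd(x, y)
print("no counterexamples found")
```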
proofpile-shard-0030-61 | {
"provenance": "003.jsonl.gz:62"
} | # Exterior Algebra Notes #4: The Interior Product
[January 27, 2019]
Vector spaces are assumed to be finite-dimensional and over $\bb{R}$. The grade of a multivector $\alpha$ will be written $\| \alpha \|$, while its magnitude will be written $\vert \alpha \vert$. Bold letters like $\b{u}$ will refer to (grade-1) vectors, while Greek letters like $\alpha$ refer to arbitrary multivectors with grade $\| \alpha \|$.
More notes on exterior algebra. This time, the interior product $\alpha \cdot \beta$, with a lot more concrete intuition than you’ll see anywhere else.
I am not the only person who has had trouble figuring out what the interior product is for. This is what I have so far…
## 1. The Interior Product
The last main tool of exterior algebra is the interior product, written $\alpha \cdot \beta$ or $\iota_{\alpha} \beta$. It subtracts grades ($\| \alpha \cdot \beta \| = \| \beta \| - \| \alpha \|$) and, conceptually, does something akin to ‘dividing $\alpha$ out of $\beta$’. It’s also called the ‘contraction’ or ‘insertion’ operator. We use the same symbol as the inner product because we think of it as a generalization of the inner product: when $\| \alpha \| = \| \beta \|$, it reduces to the inner product $\langle \alpha, \beta \rangle$.
Its abstract definition is that it is adjoint to the wedge product with respect to the inner product:
In practice this means that it sort of ‘undoes’ wedge products, as we will see.
When we looked at the inner product we had a procedure for computing $\langle \alpha, \beta \rangle$. We switched from the $\^$ inner product to the $\o$ inner product, by writing both sides as tensor products, with the right side antisymmetrized using $\text{Alt}$.1
Interior products directly generalize inner products to cases where the left side has a lower grade2, (which is why we use $\cdot$ for both), and can be computed with the exact same procedure:
A general formula for the interior product of a vector with a multivector, which can be deduced from the above, is
The intuitive meaning of the interior product is related to projection. We can construct the projection and rejection operators of a vector onto a multivector with:
To understand this, recall that the classic formula for projecting a vector $\b{b}$ onto a unit vector $\b{a}$ is $(\b{a} \cdot \b{b})\, \b{a}$:
That is, we find the scalar coordinate along $\b{a}$, then multiply by $\b{a}$ once again. With multivectors, $\b{a} \cdot \beta$ is not a scalar, so we can’t just use scalar multiplication – so it makes some sense that it would be replaced with $\^$.3
The classic vector rejection formula is
Using the interior product we can write this as
The multivector version $\b{a} \^ \beta$ is only non-zero if $\b{\beta}$ has a component which does not contain $\b{a}$ – all $\b{a}$-ness is removed by the wedge product, leaving something like $\b{a} \^ \beta_{\perp \b{a}}$. Then $\b{a} \cdot \b{a} \^ \beta_{\perp \b{a}} = \beta_{\perp \b{a}}$.
The correct interpretation of $\b{a} \cdot \beta$, then, is a lot like what it means when $\beta = \b{b}$: it’s finding the ‘$\b{a}$-component’ of $\beta$. It’s just that, when $\beta$ is a multivector, the ‘$\b{a}$-coordinate’ is no longer a scalar.
For example this is the ‘$\b{x}$‘-component of a bivector $\b{b \^ c}$:
Note that the result doesn’t have any $\b{x}$ factors in it.
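As a concrete numerical illustration (my own sketch in ordinary $\bb{R}^3$ coordinates, not part of the original notes): the interior product of a vector with a simple bivector can be computed from the identity $\b{a} \cdot (\b{b \^ c}) = (\b{a} \cdot \b{b})\,\b{c} - (\b{a} \cdot \b{c})\,\b{b}$, and for a unit vector it reproduces the projection/rejection split described above.

```python
import numpy as np

# Interior product of a vector with a simple bivector b ^ c in R^3,
# via the identity  a . (b ^ c) = (a . b) c - (a . c) b.
def interior(a, b, c):
    return np.dot(a, b) * c - np.dot(a, c) * b

x = np.array([1.0, 0.0, 0.0])
b = np.array([2.0, 1.0, 0.0])
c = np.array([3.0, 0.0, 4.0])
print(interior(x, b, c))            # the 'x-component' of b ^ c; its x entry is 0

# Projection / rejection of a vector v with respect to a unit vector a:
a = np.array([0.0, 1.0, 0.0])
v = np.array([5.0, 2.0, 7.0])
rejection  = interior(a, a, v)      # a . (a ^ v) = v - (a . v) a
projection = np.dot(a, v) * a       # a ^ (a . v) = (a . v) a
print(np.allclose(projection + rejection, v))   # True
```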
What about $\alpha \cdot \beta$, where $\alpha$ is a multivector? It’s still true that $\alpha \cdot \beta$ gives the ‘$\alpha$-coordinate’ of $\beta$, if there is one. But the rejection formula doesn’t work – we can only use $\beta_{\perp \alpha} = \beta - \frac{1}{\Vert \alpha \Vert^2} \alpha \^ (\alpha \cdot \beta)$. The problem is that there are cases where both $\alpha \^ \beta = \alpha \cdot \beta = 0$, such as for $\b{x \^ y}$ and $\b{y \^ z}$.4
If we consider our projection/rejection operations as operators, writing $L_{\b{a}} \beta = \b{a} \^ \beta$ and $\iota_{\b{a}} \beta = \b{a} \cdot \beta$, then:
Since $\iota^2 = L^2 = 0$, this could also be written as
And in fact this works (although the interpretation is trickier) with different vectors for each term:
There is a lot of interesting structure here which is worth diving into in the future. It turns out to be related to a lot of other mathematics. The short version is that $\iota$ is, technically, a “graded derivation” on the exterior algebra, and the property that $\iota L + L \iota = I$ is the exterior-algebra equivalent of the fact that $\p_x x - x \p_x = 1$ on derivatives (in the sense that $(xf)' - x f' = f$).
If we are keeping track of vector space duality, the left side of an interior product $\alpha \cdot \beta$ should transform like a dual multivector. (It certainly seems like it should, because the left side of an inner product $\langle \alpha, \beta \rangle$ should.) More on that later.
The discussion about projection above seems to me to strongly suggest that we define $\frac{\iota_{\b{a}}}{\vert \b{a} \vert^2} = \b{a}^{-1}$ as a sort of ‘multiplicative inverse’ of $\b{a}$. It’s not a complete inverse, because $\b{a} \^ \b{a}^{-1} \^ \beta = \beta_{\b{a}}$. Instead of being invertible, dividing and then multiplying pulls out the projection on $\b{a}$. There is a certain elegance to it.
In fact there is an argument to be made that interior products $\iota_{\alpha}$, and dual vectors in general, should be considered as negative-grade multivectors, so $\iota_{\alpha} \in \^^{- \| \alpha \|} V$. Then we could write that $\alpha \cdot \beta \in \^^{\| \beta \| - \| \alpha \|} V$ even if $\alpha$ has the higher grade. This is also compelling because it explains why dual vectors transform according to the inverse of a transformation: if $\alpha \ra A^{\^k}(\alpha)$, of course $\iota_{\alpha} \ra A^{-\^ k} (\iota_{\alpha})$. Something to think about! I hope to look into it in a later article.
## 2. More identities
We can use $\iota$ to prove a few more vector identities.
Here’s the vector triple product:
The Jacobi Identity:
The Jacobi Identity can also be rearranged into the following intriguing form, which we will have to figure out someday (it has some relationship to Lie algebras).
The second line is our previous expansion of $\b{a} \cdot (\b{b \^ c})$. How could these be equal?
One case where the interior product is already being used in mathematics is when multiplying by an antisymmetric matrix. A bivector $\b{b \^ c}$ can be represented as a tensor product $\b{b \o c - c \o b}$, which can be treated as an antisymmetric matrix. The interior product $\b{a} \cdot (\b{b \^ c})$ is then equivalent to matrix multiplication:
For instance this is one way of writing a rotation operator which rotates vectors by $\frac{\pi}{2}$ in the $\b{bc}$ plane (if $\b{b}, \b{c}$ are unit vectors):
The Hodge Star can be written as an interior product with the pseudoscalar. In $\bb{R}^3$:
This is probably the better definition. One reason is that it suggests that $\star$ is not so special, and, for instance, we might allow ourselves to take a $\star$ in a subspace. For instance while working in $\bb{R}^3$ here is another way to write the rotation operator $R_{\b{xy}}$:
Other articles related to Exterior Algebra:
1. Recall that we basically elect to antisymmetrize one side because if we did both we would need an extra factor of $1/n!$ for the same result. It might be that there are abstractions of this where you do need to do both sides (for instance if $a \cdot b \neq b \cdot a$?)
2. It is probably possible to generalize to either side having the lower grade, but it’s not normally done that way. I want to investigate it sometime.
3. the other candidate would be $\o$, but we’d like the result to also be a multivector so it makes sense to only consider $\^$
4. I think there’s a way to make it work. It looks something like: for each basis multivector of lower grade, remove it from both sides, like $(\b{x} \cdot \alpha) \cdot (\b{x} \cdot \beta)$. But that’s complicated and will have to be saved for the future. |
proofpile-shard-0030-62 | {
"provenance": "003.jsonl.gz:63"
} | ## Precalculus (6th Edition) Blitzer
The number of miles is $5.5\text{ miles}$.
As per the question, $P(x)=14.7e^{-0.21x}$. Substituting the given value of $P(x)$ and simplifying:
$4.6=14.7e^{-0.21x}$
Divide both sides of the equation by $14.7$:
\begin{align} \frac{4.6}{14.7}&=\frac{14.7}{14.7}e^{-0.21x} \\ 0.31&=e^{-0.21x} \end{align}
Write this exponential form in logarithmic form using: if $b^{y}=x$, then $\log_{b}x=y$. So $0.31=e^{-0.21x}$ is equivalent to $\log_{e}0.31=-0.21x$. Solving:
\begin{align} \log_{e}0.31&=-0.21x \\ -1.17&=-0.21x \end{align}
Divide both sides by $-0.21$:
\begin{align} \left(\frac{-0.21}{-0.21}\right)x&=\frac{-1.17}{-0.21} \\ x&=5.5 \end{align}
Thus, the peak of Mt. Everest is $5.5\text{ miles}$ above sea level.
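A quick numerical check of the arithmetic above (a sketch, not part of the textbook solution):

```python
import math

# Solve 4.6 = 14.7 * exp(-0.21 * x) for x.
x = math.log(4.6 / 14.7) / (-0.21)
print(round(x, 1))   # 5.5
```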
proofpile-shard-0030-63 | {
"provenance": "003.jsonl.gz:64"
} | A "right moving" solution to the wave equation is: $$f_R(z,t) = A \cos(kz – \omega t + \delta)$$ Which of these do you prefer for a "left moving" soln? 1. $f_L(z,t) = A \cos(kz + \omega t + \delta)$ 2. $f_L(z,t) = A \cos(kz + \omega t - \delta)$ 3. $f_L(z,t) = A \cos(-kz – \omega t + \delta)$ 4. $f_L(z,t) = A \cos(-kz – \omega t - \delta)$ 5. more than one of these! (Assume $k, \omega, \delta$ are positive quantities) Note: * All of them could be because cos(x) = cos(-x)
Two different functions $f_1(x,t)$ and $f_2(x,t)$ are solutions of the wave equation. $$\dfrac{\partial^2 f}{\partial x^2} = \dfrac{1}{c^2}\dfrac{\partial^2 f}{\partial t^2}$$ Is $(A f_1 + B f_2 )$ also a solution of the wave equation? 1. Yes, always 2. No, never 3. Yes, sometimes depending on $f_1$ and $f_2$ Note: * Correct answer: A
Two traveling waves 1 and 2 are described by the equations: $$y_1(x,t) = 2 \sin(2x – t)$$ $$y_2(x,t) = 4 \sin(x – 0.8 t)$$ All the numbers are in the appropriate SI (mks) units. Which wave has the higher speed? 1. 1 2. 2 3. Both have the same speed Note: * Correct Answer: B
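For reference (a worked line, not part of the original clicker slide): for $y = A\sin(kx - \omega t)$ the phase speed is $v = \omega / k$, so $v_1 = 1/2 = 0.5\ \text{m/s}$ and $v_2 = 0.8/1 = 0.8\ \text{m/s}$; wave 2 travels faster, consistent with answer B.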
Two impulse waves are approaching each other, as shown. Which picture correctly shows the total wave when the two waves are passing through each other? <img src="./images/two_waves.png" align="center" style="width: 400px";/> Note: * Correct Answer: D
A solution to the wave equation is: $$f(z,t) = A \cos(kz – \omega t + \delta)$$ * What is the speed of this wave? * Which way is it moving? * If $\delta$ is small (and >0), is this wave "delayed" or "advanced"? * What is the frequency? * The angular frequency? * The wavelength? * The wave number?
A solution to the wave equation is: $$f(z,t) = Re\left[A e^{i(kz – \omega t + \delta)}\right]$$ * What is the speed of this wave? * Which way is it moving? * If $\delta$ is small (and >0), is this wave "delayed" or "advanced"? * What is the frequency? * The angular frequency? * The wavelength? * The wave number?
A complex solution to the wave equation in 3D is: $$\widetilde{f}(\mathbf{r},t) = \widetilde{A}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}$$ * What is the speed of this wave? * Which way is it moving? * Why is there no $\delta$? * What is the frequency? * The angular frequency? * The wavelength? * The wave number? |
proofpile-shard-0030-64 | {
"provenance": "003.jsonl.gz:65"
} | # Él es un hombre. rewrite into plural form in Spanish
###### Question:
Él es un hombre. rewrite into plural form in Spanish
### I need help with this math problem
### Evaluate the limit, if it exists. Show work. $\lim_{x\to 5}\frac{x^2-3x-10}{2x-10}$
### What is a Risk Factor? Give one example of a controllable risk factor and one example of an uncontrollable risk factor
### What was the main concern of the portion of Roosevelt audience that supported the united states?
### HELP......HELP........HELP............FASTTTTT
### Hurry, being timed. ,,,,,,,,,...................!!!!!!!!!!!
### Clare has a recipe for yellow cake. She uses 913 cups of flour to make 4 cakes. Noah will follow the same recipe. He will make c cakes using f cups of flour. Which of these equations represent the relationship between c and f?
### Problemas de razonamiento división de números decimales. Ayer Susana se fue de viaje a visitar a unos familiares. Recorrió 135,75 km en total, sin hacer ninguna parada en el camino, y tardó en llegar a su destino justo 1,5 horas. ¿A qué velocidad media condujo?
### Dentro de america latina, los paises pueden agruparse de acuerdo con el idioma que predomina. Entonces, ¿cómo se divide?
### Explain Aristotle's three forms of the friendship in terms of are the all created equal? If Aristotle see's one of them as best, why does he consider it best?
### A 0.1 kg arrow with an initial velocity of 30 m/s hits a 4.0 kg melon initially at rest on a friction-less surface. The arrow emerges out the other side of the melon with a speed of 20 m/s. What is the speed of the melon? Why would we normally not expect to see the melon move with this speed after being hit by the arrow?
### When playing golf, people want to get as low a score as possible. Scores less than 0 are very good. If Jeannie has a golf score less than –6, it will be her all-time best score. Which inequality shows the scores that would give her an all-time best?
### How does the title of the poem “After Hours in Kindergarten” contribute to the development of the poem’s theme?
### Match the three states of matter
### Does 1/3 & 7/21 form a proportion
### The result of the Spanish-American War was which of the following? 1) Spain acquired Puerto Rico 2) Cuba gained commonwealth status 3) The United States gained Cuba as a colony 4) Cuba became independent of Spain
### What is value of x? Enter your answer in the box. x =
### Write the name of any two agriculture machine!
proofpile-shard-0030-65 | {
"provenance": "003.jsonl.gz:66"
} | Math Help - How do I find a random point on the hypotenuse?
1. How do I find a random point on the hypotenuse?
I'm trudging through 5-6 years worth of forgotten math lessons and can't remember how to do this. I have a right triangle, the two sides both are equal to 30 so the angles are a clean 45 degrees a piece. I need to know how to determine the x,y coords of a random point on the hypotenuse. I have a diagram but forgive it's crudeness, it's just an example:
Those red dots are what I need to figure out. Ignore their position, since it's just an example. If I pointed to a random spot on that line, what math do I need to figure out the coords?
2. Hello, worldspawn!
The points are on the line: . $x + y \:=\:30\quad\Rightarrow\quad y \:=\:30- x$
So given an $x$-value, say, $x = 7$, then: . $y \:=\:30 - 7 \:=\:23$
Therefore, the point is: . $(7,\,23)$
3. Originally Posted by worldspawn
I'm trudging through 5-6 years worth of forgotten math lessons and can't remember how to do this. I have a right triangle, the two sides both are equal to 30 so the angles are a clean 45 degrees a piece. I need to know how to determine the x,y coords of a random point on the hypotenuse. I have a diagram but forgive it's crudeness, it's just an example:
Those red dots are what I need to figure out. Ignore their position, since it's just an example. If I pointed to a random spot on that line, what math do I need to figure out the coords?
You need to use Deep Blue.
4. Selecting any point on that line segment is a uniform distribution. We can model it this way: select a number uniformly in the interval [0,30]. Call it a, then the corresponding point on the line segment is (a,30-a). |
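A direct translation of post #4 into code (a minimal sketch; the variable names are mine):

```python
import random

a = random.uniform(0, 30)    # uniform coordinate along one leg
point = (a, 30 - a)          # lies on the hypotenuse x + y = 30
print(point)
```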
proofpile-shard-0030-66 | {
"provenance": "003.jsonl.gz:67"
} | Rs Aggrawal 2020 2021 Solutions for Class 6 Maths Chapter 1 Number System are provided here with simple step-by-step explanations. All questions and answers from the Rs Aggrawal 2020 2021 Book of Class 6 Maths Chapter 1 are provided here for free.
#### Page No 5:
(i) Nine thousand eighteen = 9018
(ii) Fifty-four thousand seventy-three = 54073
(iii) Three lakh two thousand five hundred six = 302506
(iv) Twenty lakh ten thousand eight = 2010008
(v) Six crore five lakh fifty-seven = 60500057
(vi) Two crore two lakh two thousand two hundred two = 20202202
(vii) Twelve crore twelve lakh twelve thousand twelve = 121212012
(viii) Fifteen crore fifty lakh twenty thousand sixty-eight = 155020068
#### Page No 5:
(i) 63,005 = Sixty-three thousand five
(ii) 7,07,075 = Seven lakh seven thousand seventy-five
(iii) 34,20,019 = Thirty-four lakh twenty thousand nineteen
(iv) 3,05,09,012 = Three crore five lakh nine thousand twelve
(v) 5,10,03,604 = Five crore ten lakh three thousand six hundred four
(vi) 6,18,05,008 = Six crore eighteen lakh five thousand eight
(vii) 19,09,09,900 = Nineteen crore nine lakh nine thousand nine hundred
(viii) 6,15,30,807 = Six crore fifteen lakh thirty thousand eight hundred seven
(ix) 6,60,60,060 = Six crore sixty lakh sixty thousand sixty
#### Page No 5:
(i) 15,768 = (1 x 10000) + (5 x 1000) + (7 x 100) + (6 x 10) + (8 x 1)
(ii) 3,08,927 = (3 x 100000) + (8 x 1000) + (9 x 100) + (2 x 10) + (7 x 1)
(iii) 24,05,609 = (2 x 1000000) + (4 x 100000) + (5 x 1000) + (6 x 100) + (9 x 1)
(iv) 5,36,18,493 = (5 x 10000000) + (3 x 1000000) + (6 x 100000) + (1 x 10000) + (8 x 1000) + (4 x 100) + (9 x 10) + (3 x 1)
(v) 6,06,06,006 = (6 x 10000000) + (6 x 100000) + (6 x 1000) + (6 x 1)
(vi) 9,10,10,510 = (9 x 10000000) + (1 x 1000000) + (1 x 10000) + (5 x 100) + (1 x 10)
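An illustrative helper (not from the textbook) that produces the same expanded notation by pairing each non-zero digit with its place value:

```python
def expanded_form(n):
    s = str(n)
    return [(int(d), 10 ** (len(s) - i - 1)) for i, d in enumerate(s) if d != "0"]

print(expanded_form(15768))
# [(1, 10000), (5, 1000), (7, 100), (6, 10), (8, 1)]
```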
#### Page No 6:
(i) 6 × 10000 + 2 × 1000 + 5 × 100 + 8 × 10 + 4 x 1 = 62,584
(ii) 5 × 100000 + 8 × 10000 + 1 × 1000 + 6 × 100 + 2 × 10 + 3 × 1 = 5,81,623
(iii) 2 × 10000000 + 5 × 100000 + 7 × 1000 + 9 × 100 + 5 × 1 = 2,05,07,905
(iv) 3 × 1000000 + 4 × 100000 + 6 × 1000 + 5 × 100 + 7 × 1 = 34,06,507
#### Page No 6:
The place value of 9 at ten lakhs place = 90 lakhs = 9000000
The place value of 9 at hundreds place = 9 hundreds = 900
$\therefore$ Required difference = (9000000 ‒ 900) = 8999100
#### Page No 6:
The place value of 7 in 27650934 = 70 lakhs = 70,00,000
The face value of 7 in 27650934 = 7
$\therefore$ Required difference = (7000000 ‒ 7) = 69,99,993
#### Page No 6:
The largest 6-digit number = 999999
The smallest 6-digit number = 100000
$\therefore$ Total number of 6-digit numbers = (999999 ‒ 100000) + 1
= 899999 + 1
= 900000
= 9 lakhs
#### Page No 6:
The largest 7-digit number = 9999999
The smallest 7-digit number = 1000000
∴ Total number of 7-digit numbers = (9999999 - 1000000) + 1
= 8999999 + 1
= 9000000
= Ninety lakhs
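As a general check (not part of the textbook solution): the number of $k$-digit numbers is $9 \times 10^{k-1}$, so for $k = 7$ this gives $9 \times 10^{6} = 90,00,000$, i.e. ninety lakh, in agreement with the count above.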
#### Page No 6:
One lakh (1,00,000) is equal to one hundred thousand (100 $×$ 1000).
Thus, one hundred thousands make a lakh.
#### Page No 6:
One crore (1,00,00,000) is equal to ten thousand thousands (10,000 $×$ 1,000).
Thus, 10,000 thousands make a crore.
#### Page No 6:
The given number is 738.
On reversing the digits of this number, we get 837.
∴ Required difference = 837 ‒ 738 = 99
#### Page No 6:
The number just after 9547999 is 9547999 + 1 = 9548000.
#### Page No 6:
The number just before 9900000 is 9900000 ‒ 1 = 9899999.
#### Page No 6:
The number just before 10000000 is 10000000 ‒ 1 = 9999999.
#### Page No 6:
The 3-digit numbers formed by 2, 3 and 4 by taking each digit only once are 234, 324, 243, 342, 423 and 432.
#### Page No 6:
The smallest number formed by using each of the given digits (i.e, 3,1,0,5 and 7) only once is 10357.
#### Page No 6:
The largest number formed by using each of the given digits only once is 964320.
#### Page No 6:
Representation of the numbers on the international place-value chart:
| Number | HM | TM | M | HTh | TTh | Th | H | T | O |
|---|---|---|---|---|---|---|---|---|---|
| (i) 735,821 | | | | 7 | 3 | 5 | 8 | 2 | 1 |
| (ii) 6,057,894 | | | 6 | 0 | 5 | 7 | 8 | 9 | 4 |
| (iii) 56,943,821 | | 5 | 6 | 9 | 4 | 3 | 8 | 2 | 1 |
| (iv) 37,502,093 | | 3 | 7 | 5 | 0 | 2 | 0 | 9 | 3 |
| (v) 89,350,064 | | 8 | 9 | 3 | 5 | 0 | 0 | 6 | 4 |
| (vi) 90,703,006 | | 9 | 0 | 7 | 0 | 3 | 0 | 0 | 6 |
(HM = hundred millions, TM = ten millions, M = millions, HTh = hundred thousands, TTh = ten thousands, Th = thousands, H = hundreds, T = tens, O = ones)
The number names of the given numbers in the international system:
(i) 735,821 = Seven hundred thirty-five thousand eight hundred twenty-one
(ii) 6,057,894 = Six million fifty-seven thousand eight hundred ninety-four
(iii) 56,943,821 = Fifty-six million nine hundred forty-three thousand eight hundred twenty-one
(iv) 37,502,093 = Thirty-seven million five hundred two thousand ninety-three
(v) 89,350,064 = Eighty-nine millions three hundred fifty thousand sixty-four
(vi) 90,703,006 = Ninety million seven hundred three thousand and six
#### Page No 6:
The numbers placed on the international place-value chart:
| Number | HM | TM | M | HTh | TTh | Th | H | T | O |
|---|---|---|---|---|---|---|---|---|---|
| (i) 30,105,063 | | 3 | 0 | 1 | 0 | 5 | 0 | 6 | 3 |
| (ii) 52,205,006 | | 5 | 2 | 2 | 0 | 5 | 0 | 0 | 6 |
| (iii) 5,005,005 | | | 5 | 0 | 0 | 5 | 0 | 0 | 5 |
(HM = hundred millions, TM = ten millions, M = millions, HTh = hundred thousands, TTh = ten thousands, Th = thousands, H = hundreds, T = tens, O = ones)
#### Page No 8:
1003467 $>$ 987965
We know that a 7-digit number is always greater than a 6-digit number. Since 1003467 is a 7-digit number and 987965 is a 6-digit number, 1003467 is greater than 987965.
#### Page No 8:
3572014 $<$ 10235401
We know that a 7-digit number is always less than an 8-digit number. Since 3572014 is a 7-digit number and 10235401 is an 8-digit number, 3572014 is less than 10235401.
#### Page No 8:
Both the numbers have the digit 3 at the ten lakhs places.
Also, both the numbers have the digit 2 at the lakhs places.
However, the digits at the ten thousands place in 3254790 and 3260152 are 5 and 6, respectively.
Clearly, 5 < 6
∴ 3254790 < 3260152
#### Page No 8:
Both have the digit 1 at the crores places.
However, the digits at the ten lakhs places in 10357690 and 11243567 are 0 and 1, respectively.
Clearly, 0 < 1
∴ 10357690 < 11243567
#### Page No 8:
27596381 > 7965412
We know that an 8-digit number is always greater than a 7-digit number. Since 7965412 is a 7-digit number and 27596381 is an 8-digit number, 27596381 is greater than 7965412.
#### Page No 8:
Both the numbers have the same digits, namely 4, 7, 8 and 9, at the crores, ten lakhs, lakhs and ten thousands places, respectively.
However, the digits at the thousands place in 47893501 and 47894021 are 3 and 4, respectively.
Clearly, 3 < 4
∴ 47893501 < 47894021
#### Page No 8:
102345680 is a 9-digit number.
63521047 and 63514759 are both 8-digit numbers.
Both the numbers have the same digits, namely 6, 3 and 5, at the crores, ten lakhs and lakhs places, respectively.
However, the digits at the ten thousands place in 63521047 and 63514759 are 2 and 1, respectively.
Clearly, 2 > 1
∴ 63521047 > 63514759
7355014 and 7354206 are both 7-digit numbers.
Both the numbers have the same digits, namely 7, 3 and 5, at the ten lakhs, lakhs and ten thousands places, respectively.
However, the digits at the thousands place in 7355014 and 7354206 are 5 and 4, respectively.
Clearly, 5> 4
∴ 7355014 > 7354206
The given numbers in descending order are:
102345680 > 63521047 > 63514759 > 7355014 > 7354206
#### Page No 8:
23794206 and 23756819 are both 8-digit numbers.
Both the numbers have the same digits, namely 2, 3 and 7 at the crores, ten lakhs and lakhs places, respectively.
However, the digits at the ten thousands place in
23794206 and 23756819 are 9 and 5, respectively.
Clearly, 9 > 5
∴ 23794206 > 23756819
5032790 and 5032786 are both 7-digit numbers.
Both the numbers have the same digits, namely 5, 0, 3, 2 and 7, at the ten lakhs, lakhs, ten thousands, thousands and hundreds places, respectively.
However,
the digits at the tens place in
5032790 and 5032786 are 9 and 8, respectively.
Clearly, 9 > 8
5032790 > 5032786
987876 is a 6-digit number.
The given numbers in descending order are:
23794206 > 23756819 > 5032790 > 5032786 > 987876
#### Page No 8:
16060666 and 16007777 are both 8-digit numbers.
Both the numbers have the same digits, namely 1, 6 and 0, at the crores, ten lakhs and lakhs places, respectively.
However, the digits at the ten thousands place in 16060666 and 16007777 are 6 and 0, respectively.
Clearly, 6 > 0
∴ 16060666 > 16007777
1808090 and 1808088 are both 7-digit numbers.
Both the numbers have the same digits , namely 1, 8, 0, 8 and 0, at the ten lakhs, lakhs, ten thousands, thousands and hundreds places, respectively.
However, the digits at the tens place in 1808090 and 1808088 are 9 and 8, respectively.
Clearly, 9 > 8
∴ 1808090 > 1808088
190909 and 181888 are both 6-digit numbers.
Both the numbers have the same digit, 1, at the lakhs place.
However, the digits at the ten thousands place in 190909 and 181888 are 9 and 8, respectively.
Clearly, 9 > 8
∴ 190909 > 181888
Thus, the given numbers in descending order are:
16060666 > 16007777 > 1808090 > 1808088 >190909 > 181888
#### Page No 8:
1712040, 1704382 and 1702497 are all 7-digit numbers.
The three numbers have the same digits, namely 1 and 7, at the ten lakhs and lakhs places, respectively.
However, the digits at the ten thousands place in
1712040, 1704382 and 1702497 are 1, 0 and 0.
∴ 1712040 is the largest.
Of the other two numbers, the respective digits at the thousands place are 4 and 2.
Clearly, 4 > 2
∴ 1704382 > 1702497
201200, 200175 and 199988 are all 6-digit numbers.
At the lakhs place, we have 2 > 1.
So, 199988 is the smallest of the three numbers.
The other two numbers have the same digits, namely 2 and 0, at the lakhs and ten thousands places, respectively.
However, the digits at the thousands place in
201200 and 200175 are 1 and 0, respectively.
Clearly, 1 > 0
∴ 201200 > 200175
The given numbers in descending order are:
1712040 > 1704382 > 1702497 > 201200 > 200175 > 199988
#### Page No 8:
990357 is 6 digit number.
9873426 and 9874012 are both 7-digit numbers.
Both the numbers have the same digits, namely 9, 8 and 7, at the ten lakhs, lakhs and ten thousands places, respectively.
However, the digits at the thousands place in 9873426 and 9874012
are 3 and 4, respectively.
Clearly, 3 < 4
∴ 9873426 < 9874012
24615019 and 24620010 are both 8-digit numbers.
Both the numbers have the same digits, namely 2, 4 and 6, at the crores, ten lakhs and lakhs places, respectively.
However, the digits at the ten thousands place in 24615019 and 24620010
are 1 and 2, respectively.
Clearly, 1 < 2
∴ 24615019 < 24620010
The given numbers in ascending order are:
990357 < 9873426 < 9874012 < 24615019 < 24620010
#### Page No 8:
5694437 and 5695440 are both 7-digit numbers.
Both have the same digit, i.e., 5 at the ten lakhs place.
Both have the same digit, i.e., 6 at the lakhs place.
Both have the same digit, i.e., 9
at the ten thousands place.
However, the digits at the thousand place in 5694437 and 5695440 are 4 and 5, respectively.
Clearly, 4 < 5
∴ 5694437 < 5695440
56943201, 56943300 and 56944000 are all 8-digit numbers.
They have the same digit, i.e., 5 at the crores place.
They have the same digit, i.e., 6 at the ten lakhs place.
They have the same digit, i.e., 9 at the lakhs place.
They have the same digit, i.e., 4
at the ten thousands place.
However, at the thousands place, one number has 4 while the others have 3 .
∴ 56944000 is the largest.
The other two numbers have 3 and 2 at their hundreds places.
Clearly, 2 <3
∴ 56943201 < 56943300
The given numbers in ascending order are:
5694437 < 5695440 < 56943201 < 56943300 < 56944000
#### Page No 8:
700087 is 6-digit number.
8014257, 8014306 and 8015032 are all 7-digit numbers.
They have the same digits, namely 8, 0 and 1, at the ten lakhs, lakhs and ten thousands places, respectively.
But, at the thousands place, one number has 5 while the other two numbers have 4.
Here, 8015032 is the largest.
The other two numbers have 2 and 3 at their hundreds places.
Clearly, 2 < 3
∴ 8014257 < 8014306
10012458 is an 8-digit number.
The given numbers in ascending order are:
700087 < 8014257 < 8014306 < 8015032 < 10012458
#### Page No 8:
893245, 893425 and 980134 are all 6-digit numbers.
Among the three, 980134 is the largest.
The other two numbers have the same digits, namely 8, 9 and 3, at the lakhs, ten thousands and thousands places, respectively.
However, the digits at
the hundreds place in 893245 and 893425 are 2 and 4, respectively.
Clearly, 2 < 4
∴ 893245 < 893425
1020216, 1020304 and 1021403 are all 7-digit numbers.
They have the same digits, namely 1, 0 and 2, at the ten lakhs, lakhs and ten thousands places, respectively.
At the thousands place,
1021403 has 1, while the other two have 0, so 1021403 is the largest of the three.
The other two numbers have the digits 2 and 3 at their hundreds places.
Clearly, 2 < 3
∴ 1020216 < 1020304
The given numbers in ascending order are:
893245 < 893425 < 980134 < 1020216 < 1020304 < 1021403
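The orderings worked out on this page can also be checked mechanically; a one-line illustration (hypothetical, not part of the book):

```python
nums = [893245, 893425, 980134, 1020216, 1020304, 1021403]
print(sorted(nums))                  # ascending order
print(sorted(nums, reverse=True))    # descending order
```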
#### Page No 11:
Number of persons who visited the holy shrine in the first year = 13789509
Number of persons who visited the holy shrine in the second year = 12976498
∴ Number of persons who visited the holy shrine during these two years = 13789509 + 12976498 = 26766007
#### Page No 11:
Bags of sugar produced by the first factory in last year = 24809565
Bags of sugar produced by the second factory in last year = 18738576
Bags of sugar produced by the third sugar factory in last year = 9564568
∴ Total number of bags of sugar were produced by the three factories during last year = 24809565 + 18738576 + 9564568
= 53112709
#### Page No 11:
New number = Sum of 37684955 and 3615045
= 37684955 + 3615045
= 41300000
#### Page No 11:
Total number of votes received by the three candidates = 687905 + 495086 + 93756 = 1276747
Number of invalid votes = 13849
Number of persons who did not vote = 25467
∴ Total number of registered voters = 1276747 + 13849 + 25467
= 1316063
#### Page No 11:
People who had only primary education = 1623546
People who had secondary education = 9768678
People who had higher education = 6837954
Illiterate people in the state = 2684536
Children below the age of school admission = 698781
∴ Total population of the state = 1623546 + 9768678 + 6837954 + 2684536 + 698781
= 21613495
#### Page No 11:
Bicycles produced by the company in the first year = 8765435
Bicycles produced by the company in the second year = 8765435 + 1378689
= 10144124
∴ Total number of bicycles produced during these two years = 8765435 + 10144124
= 18909559
#### Page No 11:
Sale receipts of a company during the first year = Rs 20956480
Sale receipts of the company during the second year = Rs 20956480 + Rs 6709570
= Rs 27666050
∴ Total number of sale receipts of the company during these two years = Rs 20956480 + Rs 27666050
= Rs 48622530
#### Page No 11:
Total population of the city = 28756304
Number of males in the city = 16987059
∴ Number of females in the city = 28756304 ‒ 16987059
= 11769245
#### Page No 12:
Required number = 13246510 ‒ 4658642 = 8587868
∴ 13246510 is larger than 4658642 by 8587868.
#### Page No 12:
Required number = 1 crore ‒ 5643879
= 10000000 ‒ 5643879
= 4356121
∴ 5643879 is smaller than one crore by 4356121.
#### Page No 12:
11010101 ‒ required number = 2635967
Thus, required number = 11010101 ‒ 2635967
= 8374134
∴ The number 8374134 must be subtracted from 11010101 to get 2635967.
#### Page No 12:
Sum of the two numbers = 10750308
One of the number = 8967519
∴ The other number = 10750308 ‒ 8967519
= 1782789
#### Page No 12:
Initial amount with the man = Rs 20000000
Amount spent on buying a school building = Rs 13607085
∴ Amount left with the man = Rs 20000000 ‒ Rs 13607085
= Rs 6392915
#### Page No 12:
Money need by the society to buy the property = Rs 18536000
Amount collected as membership fee = Rs 7253840
Amount taken on loan from the bank = Rs 5675450
Amount collected as donation = Rs 2937680
∴ Amount of money short = Rs 18536000 ‒ (Rs 7253840 + Rs 5675450 + Rs 2937680)
= Rs 18536000 ‒ Rs 15866970
= Rs 2669030
#### Page No 12:
Initial amount with the man = Rs 10672540
Amount given to his wife = Rs 4836980
Amount given to his son = Rs 3964790
∴ Amount received by his daughter = Rs 10672540 ‒ (Rs 4836980 + Rs 3964790)
= Rs 10672540 ‒ Rs 8801770
= Rs 1870770
#### Page No 12:
Cost of one chair = Rs 1485
Cost of 469 chairs = Rs 1485 $×$ 469
= Rs 696465
∴ Cost of 469 chairs is Rs 696465.
#### Page No 12:
Contribution from one student for the charity program = Rs 625
Contribution from 1786 students = Rs 625 x 1786 = Rs 1116250
∴ Rs 1116250 was collected from 1786 students for the charity program.
#### Page No 12:
Number of screws produced by the factory in one day = 6985
Number of screws produced in 358 days = 6985 x 358
= 2500630
∴ The factory will produce 2500630 screws in 358 days.
#### Page No 12:
We know that
1 year = 12 months
13 years = 13 x 12 = 156 months
Now, we have:
Amount saved by Mr Bhaskar in one month = Rs 8756
Amount saved in 156 months = Rs 8756 $×$ 156 = Rs 1365936
∴ Mr Bhaskar will save Rs 1365936 in 13 years.
#### Page No 12:
Cost of one scooter = Rs 36725
Cost of 487 scooter = Rs 36725 $×$ 487
= Rs 17885075
∴ The cost of 487 scooters will be Rs 17885075.
#### Page No 12:
Distance covered by the aeroplane in one hour = 1485 km
Distance covered in 72 hours = 1485 km $×$ 72 = 106920 km
∴ The distance covered by the aeroplane in 72 hours will be 106920 km.
#### Page No 12:
Product of two numbers = 13421408
One of the number = 364
∴ The other number = 13421408 ÷ 364
= 36872
#### Page No 12:
Cost of 36 flats = Rs 68251500
Cost of one flat = Rs 68251500 ÷ 36
= Rs 1895875
∴ Each flat costs Rs 1895875.
#### Page No 12:
We know that 1 kg = 1000 g
Now, mass of the gas-filled cylinder = 30 kg 250 g = 30.25 kg
Mass of an empty cylinder = 14 kg 480 g = 14.48 kg
∴ Mass of the gas contained in the cylinder = 30.25 kg ‒ 14.48 kg
= 15.77 kg = 15 kg 770 g
#### Page No 12:
We know that 1 m = 100 cm
Length of the cloth = 5 m
Length of the piece cut off from the cloth = 2 m 85 cm
∴ Length of the remaining piece of cloth = 5 m ‒ 2.85 m
= 2.15 m = 2 m 15 cm
#### Page No 12:
We know that 1 m = 100 cm
Now, length of the cloth required to make one shirt = 2 m 75 cm
Length of the cloth required to make 16 such shirts = 2 m 75 cm $×$ 16
= 2.75 m $×$ 16
= 44 m
∴ The length of the cloth required to make 16 shirts will be 44 m.
#### Page No 12:
We know that 1 m = 100 cm
Cloth needed for making 8 trousers = 14 m 80 cm
Cloth needed for making 1 trousers = 14 m 80 cm ÷ 8
= 14 .8 m ÷ 8
= 1.85 m = 1 m 85 cm
∴ 1 m 85 cm of cloth will be required to make one pair of trousers.
#### Page No 12:
We know that 1 kg = 1000 g
Now, mass of one brick = 2 kg 750 g
∴ Mass of 14 such bricks = 2 kg 750 g $×$ 14
= 2.75 kg $×$ 14
= 38.5 kg = 38 kg 500 g
#### Page No 12:
We know that 1 kg = 1000 g
Now, total mass of 8 packets of the same size = 10 kg 600 g
∴ Mass of one such packet = 10 kg 600 g ÷ 8
= 10.6 kg ÷ 8
= 1.325 kg = 1 kg 325 g
#### Page No 12:
Length of the rope divided into 8 equal pieces = 10 m
Length of one piece = 10 m ÷ 8
= 1.25 m = 1 m 25 cm [∵ 1 m = 100 cm]
#### Page No 14:
(i) In 36, the ones digit is 6 > 5.
∴ The required rounded number = 40
(ii) In 173, the ones digit is 3 < 5.
∴ The required rounded number = 170
(iii) In 3869, the ones digit is 9 > 5.
∴ The required rounded number = 3870
(iv) In 16378, the ones digit is 8 > 5.
∴ The required rounded number = 16380
#### Page No 14:
(i) In 814, the tens digit is 1 < 5.
∴ The required rounded number = 800
(ii) In 1254, the tens digit is 5 = 5
∴ The required rounded number = 1300
(iii) In 43126, the tens digit is 2 < 5
∴ The required rounded number = 43100
(iv) In 98165, the tens digit is 6 > 5
∴ The required rounded number = 98200
#### Page No 14:
(i) In 793, the hundreds digit is 7 > 5
∴ The required rounded number = 1000
(ii) In 4826, the hundreds digit is 8 > 5
∴ The required rounded number = 5000
(iii) In 16719, the hundreds digit is 7 > 5
∴ The required rounded number = 17000
(iv) In 28394, the hundreds digit is 3 < 5
∴ The required rounded number = 28000
#### Page No 14:
(i) In 17514, the thousands digit is 7 > 5
∴ The required rounded number = 20000
(ii) In 26340, the thousands digit is 6 > 5
∴ The required rounded number = 30000
(iii) In 34890, the thousands digit is 4 < 5
∴ The required rounded number = 30000
(iv) In 272685, the thousands digit is 2 < 5
∴ The required rounded number = 270000
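An illustrative helper (not from the textbook) that applies the same rule — look at the digit one place below and round up when it is 5 or more — for non-negative integers:

```python
def round_to(n, base):
    return ((n + base // 2) // base) * base

print(round_to(36, 10), round_to(173, 10), round_to(3869, 10))   # 40 170 3870
print(round_to(1254, 100), round_to(272685, 10000))              # 1300 270000
```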
#### Page No 14:
57 estimated to the nearest ten = 60
34 estimated to the nearest ten = 30
∴ The required estimation = (60 + 30) = 90
#### Page No 14:
43 estimated to the nearest ten = 40
78 estimated to the nearest ten = 80
∴ The required estimation = (40 + 80) = 120
#### Page No 14:
14 estimated to the nearest ten = 10
69 estimated to the nearest ten = 70
∴ The required estimation = (10 + 70) = 80
#### Page No 14:
86 estimated to the nearest ten = 90
19 estimated to the nearest ten = 20
∴ The required estimation = (90 + 20) = 110
#### Page No 14:
95 estimated to the nearest ten = 100
58 estimated to the nearest ten = 60
∴ The required estimation = (100 + 60) = 160
#### Page No 14:
77 estimated to the nearest ten = 80
63 estimated to the nearest ten = 60
∴ The required estimation = (80 + 60) = 140
#### Page No 14:
356 estimated to the nearest ten = 360
275 estimated to the nearest ten = 280
∴ The required estimation = (360 + 280) = 640
#### Page No 14:
463 estimated to the nearest ten = 460
182 estimated to the nearest ten = 180
∴ The required estimation = (460 + 180) = 640
#### Page No 14:
538 estimated to the nearest ten = 540
276 estimated to the nearest ten = 280
∴ The required estimation = (540 + 280) = 820
#### Page No 14:
236 estimated to the nearest hundred = 200
689 estimated to the nearest hundred = 700
∴ The required estimation = (200 + 700) = 900
#### Page No 14:
458 estimated to the nearest hundred = 500
324 estimated to the nearest hundred = 300
∴ The required estimation = (500 + 300) = 800
#### Page No 14:
170 estimated to the nearest hundred = 200
395 estimated to the nearest hundred = 400
∴ The required estimation = (200 + 400) = 600
#### Page No 15:
3280 estimated to the nearest hundred = 3300
4395 estimated to the nearest hundred = 4400
∴ The required estimation = (3300 + 4400) = 7700
#### Page No 15:
5130 estimated to the nearest hundred = 5100
1410 estimated to the nearest hundred = 1400
∴ The required estimation = (5100 + 1400) = 6500
#### Page No 15:
10083 estimated to the nearest hundred = 10100
29380 estimated to the nearest hundred = 29400
∴ The required estimation = (10100 + 29400) = 39500
#### Page No 15:
32836 estimated to the nearest thousand = 33000
16466 estimated to the nearest thousand = 16000
∴ The required estimation = (33000 + 16000) = 49000
#### Page No 15:
46703 estimated to the nearest thousand = 47000
11375 estimated to the nearest thousand = 11000
∴ The required estimation = (47000 + 11000) = 58000
#### Page No 15:
Number of balls in box A = 54
Number of balls in box B = 79
Estimated number of balls in box A = 50
Estimated number of balls in box B = 80
∴ Total estimated number of balls in both the boxes = (50 + 80) = 130
#### Page No 15:
We have,
53 estimated to the nearest ten = 50
18 estimated to the nearest ten = 20
∴ The required estimation = (50 ‒ 20) = 30
#### Page No 15:
100 estimated to the nearest ten = 100
38 estimated to the nearest ten = 40
∴ The required estimation = (100 ‒ 40) = 60
#### Page No 15:
409 estimated to the nearest ten = 410
148 estimated to the nearest ten = 150
∴ The required estimation = (410 ‒ 150) = 260
#### Page No 15:
678 estimated to the nearest hundred = 700
215 estimated to the nearest hundred = 200
∴ The required estimation = (700 ‒ 200) = 500
#### Page No 15:
957 estimated to the nearest hundred = 1000
578 estimated to the nearest hundred = 600
∴ The required estimation = (1000 ‒ 600) = 400
#### Page No 15:
7258 estimated to the nearest hundred = 7300
2429 estimated to the nearest hundred = 2400
∴ The required estimation = (7300 ‒ 2400) = 4900
#### Page No 15:
5612 estimated to the nearest hundred = 5600
3095 estimated to the nearest hundred = 3100
∴ The required estimation = (5600 ‒ 3100) = 2500
#### Page No 15:
35863 estimated to the nearest thousand = 36000
27677 estimated to the nearest thousand = 28000
∴ The required estimation = (36000 ‒ 28000) = 8000
#### Page No 15:
47005 estimated to the nearest thousand = 47000
39488 estimated to the nearest thousand = 39000
∴ The required estimation = (47000 ‒ 39000) = 8000
#### Page No 15:
38 estimated to the nearest ten = 40
63 estimated to the nearest ten = 60
∴ The required estimation = (40 $×$ 60) = 2400
#### Page No 15:
54 estimated to the nearest ten = 50
47 estimated to the nearest ten = 50
∴ The required estimation = (50 $×$ 50) = 2500
#### Page No 15:
28 estimated to the nearest ten = 30
63 estimated to the nearest ten = 60
∴ The required estimation = (30 $×$ 60) = 1800
#### Page No 15:
42 estimated to the nearest ten = 40
75 estimated to the nearest ten = 80
∴ The required estimation = (40 $×$ 80) = 3200
#### Page No 15:
64 estimated to the nearest ten = 60
58 estimated to the nearest ten = 60
∴ The required estimation = (60 $×$ 60) = 3600
#### Page No 15:
15 estimated to the nearest ten = 20
34 estimated to the nearest ten = 30
∴ The required estimation = (20 $×$ 30) = 600
#### Page No 16:
376 estimated to the nearest hundred = 400
123 estimated to the nearest hundred = 100
∴ The required estimation = (400 $×$ 100) = 40000
#### Page No 16:
264 estimated to the nearest hundred = 300
147 estimated to the nearest hundred = 100
∴ The required estimation = (300 $×$ 100) = 30000
#### Page No 16:
423 estimated to the nearest hundred = 400
158 estimated to the nearest hundred = 200
∴ The required estimation = (400 $×$ 200) = 80000
#### Page No 16:
509 estimated to the nearest hundred = 500
179 estimated to the nearest hundred = 200
∴ The required estimation = (500 $×$ 200) = 100000
#### Page No 16:
392 estimated to the nearest hundred = 400
138 estimated to the nearest hundred = 100
∴ The required estimation = (400 $×$ 100) = 40000
#### Page No 16:
271 estimated to the nearest hundred = 300
339 estimated to the nearest hundred = 300
∴ The required estimation = (300 $×$ 300) = 90000
#### Page No 16:
183 estimated upwards = 200
154 estimated downwards = 100
∴ The required product = (200 $×$ 100) = 20000
#### Page No 16:
267 estimated upwards = 300
146 estimated downwards = 100
∴ The required product = (300 $×$ 100) = 30000
#### Page No 16:
359 estimated upwards = 400
76 estimated downwards = 70
∴ The required product = (400 $×$ 70) =28000
#### Page No 16:
472 estimated upwards = 500
158 estimated downwards = 100
∴ The required product = (500 $×$ 100) = 50000
#### Page No 16:
680 estimated upwards = 700
164 estimated downwards = 100
∴ The required product = (700 $×$ 100) = 70000
#### Page No 16:
255 estimated upwards = 300
350 estimated downwards = 300
∴ The required product = (300 $×$ 300) = 90000
#### Page No 16:
356 estimated downwards = 300
278 estimated upwards = 300
∴ The required product = (300 $×$ 300) = 90000
#### Page No 16:
472 estimated downwards = 400
76 estimated upwards = 80
∴ The required product = (400 $×$ 80) = 32000
#### Page No 16:
578 estimated downwards = 500
369 estimated upwards = 400
∴ The required product = (500 $×$ 400) = 200000
#### Page No 16:
87 ÷ 28 is approximately equal to 90 ÷ 30 = 3.
#### Page No 16:
The estimated quotient for 83 ÷ 17 is approximately equal to 80 ÷ 20 = 8 ÷ 2 = 4.
#### Page No 16:
The estimated quotient of 75 ÷ 23 is approximately equal to 80 ÷ 20 = 8 ÷ 2 = 4.
#### Page No 16:
The estimated quotient of 193 ÷ 24 is approximately equal to 200 ÷ 20 = 20 ÷ 2 = 10.
#### Page No 16:
The estimated quotient of 725 ÷ 23 is approximately equal to 700 ÷ 20 = 70 ÷ 2 = 35.
#### Page No 16:
The estimated quotient of 275 ÷ 25 is approximately equal to 300 ÷ 30 = 30 ÷ 3 = 10.
#### Page No 16:
The estimated quotient of 633 ÷ 33 is approximately equal to 600 ÷ 30 = 60 ÷ 3 = 20.
#### Page No 16:
729 ÷ 29 is approximately equal to 700 ÷ 30 or 70 ÷ 3, which is approximately equal to 23.
#### Page No 16:
858 ÷ 39 is approximately equal to 900 ÷ 40 or 90 ÷ 4, which is approximately equal to 23.
#### Page No 16:
868 ÷ 38 is approximately equal to 900 ÷ 40 or 90 ÷ 4, which is approximately equal to 23.
#### Page No 19:
We may write these numbers as given below:
(i) 2 = II
(ii) 8 = (5 + 3) = VIII
(iii) 14 = (10 + 4) = XIV
(iv) 29 = ( 10 + 10 + 9 ) = XXIX
(v) 36 = (10 + 10 + 10 + 6) = XXXVI
(vi) 43 = (50 - 10) + 3 = XLIII
(vii) 54 = (50 + 4) = LIV
(viii) 61= (50 + 10 + 1) = LXI
(ix) 73 = ( 50 + 10 + 10 + 3) = LXXIII
(x) 81 = (50 + 10 + 10 + 10 + 1) = LXXXI
(xi) 91 =(100 - 10) + 1 = XCI
(xii) 95 = (100 - 10) + 5 = XCV
(xiii) 99 = (100 - 10) + 9 = XCIX
(xiv) 105 = (100 + 5) = CV
(xv) 114 = (100 + 10) + 4 = CXIV
#### Page No 19:
We may write these numbers in Roman numerals as follows:
(i) 164 = (100 + 50 + 10 + 4) = CLXIV
(ii) 195 = 100 + (100 - 10) + 5 = CXCV
(iii) 226 = (100 + 100 + 10 + 10 + 6) = CCXXVI
(iv) 341= 100 + 100+ 100 + (50 -10) + 1 = CCCXLI
(v) 475 = (500 - 100) + 50 + 10 + 10 + 5 = CDLXXV
(vi) 596 = 500 + (100 - 10) + 6 = DXCVI
(vii) 611= 500 + 100 + 11 = DCXI
(viii) 759 = 500 + 100 + 100 + 50 + 9 = DCCLIX
#### Page No 19:
We can write the given Roman numerals in Hindu-Arabic numerals as follows:
(i) XXVII = 10 + 10 + 7 = 27
(ii) XXXIV = 10 + 10 + 10 + 4 = 34
(iii) XLV = (50 − 10 ) + 5 = 45
(iv) LIV = 50 + 4 = 54
(v) LXXIV = 50 + 10 + 10 + 4 = 74
(vi) XCI = (100 − 10) + 1 = 91
(vii) XCVI = (100 − 10) + 6 = 96
(viii) CXI = 100 + 10 + 1= 111
(ix) CLIV = 100 + 50 + 4 = 154
(x) CCXXIV = 100 + 100 + 10 + 10 + 4 = 224
(xi) CCCLXV = 100 + 100 + 100 + 50 + 10 + 5 = 365
(xii) CDXIV = (500 − 100) + 10 + 4 = 414
(xiii) CDLXIV = (500 − 100) + 50 + 10 + 4 = 464
(xiv) DVI = 500 + 6= 506
(xv) DCCLXVI = 500 + 100 + 100 + 50 + 10 + 6 = 766
#### Page No 19:
(i) VC is wrong because V, L and D are never subtracted.
(ii) IL is wrong because I can be subtracted from V and X only.
(iii) VVII is wrong because V, L and D are never repeated.
(iv) IXX is wrong because X (ten) must be placed before IX (nine).
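An illustrative sketch (not from the textbook) of the conversion used in these exercises: greedy expansion of a Hindu-Arabic number into Roman symbols, including the subtractive pairs such as XL and CM:

```python
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    out = []
    for value, symbol in PAIRS:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(43), to_roman(99), to_roman(475), to_roman(759))
# XLIII XCIX CDLXXV DCCLIX
```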
#### Page No 20:
Option c is correct.
Place value of 6 = 6 lakhs = (6 $×$ 100000) = 600000
#### Page No 20:
Option a is correct.
The face value of a digit remains as it is irrespective of the place it occupies in the place value chart.
Thus, the face value of 4 is always 4 irrespective of where it may be.
#### Page No 20:
Option c is correct.
Place value of 5 = 5 $×$ 10000 = 50000
Face value of 5 = 5
∴ Required difference = 50000 − 5 = 49995
#### Page No 20:
Option b is correct.
The smallest counting number is 1.
#### Page No 20:
Option b is correct.
The largest four-digit number = 9999
The smallest four-digit number = 1000
Total number of all four-digit numbers = (9999 − 1000) + 1
= 8999 + 1
= 9000
#### Page No 20:
Option b is correct.
The largest seven-digit number = 9999999
The smallest seven-digit number = 1000000
Total number of seven-digit numbers = (9999999 − 1000000) + 1
= 8999999 + 1
= 9000000
#### Page No 20:
Option c is correct.
The largest eight-digit number = 99999999
The smallest eight-digit number = 10000000
Total number of eight-digit numbers = (99999999 − 10000000) + 1
= 89999999 + 1
= 90000000
#### Page No 20:
Option b is correct.
The number just before 1000000 is 999999 (i.e., 1000000 − 1).
#### Page No 20:
Option a is correct.
V, L and D are never subtracted. Thus, VX is wrong.
#### Page No 20:
Option c is correct.
I can be subtracted from V and X only. Thus, IC is wrong.
#### Page No 20:
Option b is correct.
V, L and D are never repeated. Thus, XVV is meaningless.
#### Page No 21:
(i) Sixteen crore six lakh twenty-three thousand seven hundred eight
(ii) Fourteen crore twenty-three lakh eight thousand nine hundred fifteen
#### Page No 21:
(i) Eighty million sixty thousand four hundred nine
(ii) Two hundred thirty-four million one hundred fifty thousand three hundred nineteen
#### Page No 21:
We have,
864572 is a 6-digit number.
3903216 and 6940513 are seven-digit numbers.
At the ten lakhs place, one number has 3, while the second number has 6.
Clearly, 3 < 6
∴ 3903216 < 6940513
16531079 and 19430124 are eight-digit numbers.
At the crores place, both the numbers have the same digit, namely 1.
At the ten lakhs place, one number has 6, while the second number has 9.
Clearly, 6 < 9
∴ 16531079 < 19430124
The given numbers in ascending order are:
864572 < 3903216 < 6940513 < 16531079 < 19430124
#### Page No 21:
63240613 and 54796203 are both eight-digit numbers.
At the crores place, one number has 6, while the second number has 5.
Clearly, 5 < 6
∴ 63240613 > 54796203
5125648 and 4675238 are both seven-digit numbers.
However, at the ten lakhs place, one number has 5, while the second number has 4.
Clearly, 4 < 5
∴ 5125648 > 4675238
589623 is a six-digit number.
The given numbers in descending order are:
63240613 > 54796203 > 5125648 > 4675238 > 589623
#### Page No 21:
The largest seven-digit number = 9999999
The smallest seven-digit number = 1000000
Number of all seven-digit numbers = (9999999 − 1000000) + 1
= 8999999 + 1
= 9000000
Hence, there is a total of ninety lakh 7-digit numbers.
#### Page No 21:
The largest number using each of the digits: 1, 4, 6, 8 and 0, is 86410.
The smallest number using each of the digits: 1, 4, 6, 8 and 0, is 10468.
∴ Required difference = 86410 − 10468
= 75942
#### Page No 21:
(i) CCXLII = 100 + 100 + (50 − 10) + 2 = 242
(ii) CDLXV = (500 − 100) + 50 + 10 + 5 = 465
(iii) LXXVI = 50 + 10 + 10 + 6 = 76
(iv) DCCXLI = 500 + 100 + 100 + ( 50 − 10) + 1 = 741
(v) XCIV = (100 − 10) + 4 = 94
(vi) CXCIX = 100 + (100 − 10) + 9 = 199
#### Page No 21:
(i) 84 = 50 + 30 + 4 = LXXXIV
(ii) 99 = 90 + 9 = XCIX
(iii) 145 = 100 + (50 − 10) + 5 = CXLV
(iv) 406 = 400 + 6 = CDVI
(v) 519 = 500 +10 + 9 = DXIX
#### Page No 21:
Successor of 999999 = 999999 + 1 = 1000000
Predecessor of 999999 = 999999 − 1 = 999998
∴ Required difference = 1000000 − 999998
= 2
#### Page No 21:
(i) The number is 1046. Its digit at the hundreds place is 0 < 5.
So, the given number is rounded off to the nearest thousand as 1000.
(ii) The number is 973. Its digit at the hundreds place is 9 > 5.
So, the given number is rounded off to the nearest thousand as 1000.
(iii) The number is 5624. Its digit at the hundreds place is 6 > 5.
So, the given number is rounded off to the nearest thousand as 6000.
(iv) The number is 4368. Its digit at the hundreds place is 3 < 5.
So, the given number is rounded off to the nearest thousand as 4000.
#### Page No 21:
Option (a) is correct.
X can be subtracted from L and C only.
i.e., XC = ( 100 − 10 ) = 90
#### Page No 21:
Option (b) is correct.
One lakh (100000) is equal to one hundred thousand (100,000).
#### Page No 21:
Option (b) is correct.
No Roman numeral can be repeated more than three times.
#### Page No 21:
Option (d) is correct.
Between 1 and 100, the digit 9 occurs in 9, 19, 29, 39, 49, 59, 69, 79, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98 and 99.
∴ The digit occurs 20 times between 1 and 100.
#### Page No 21:
Option (a) is correct.
7268 will be rounded off to the nearest hundred as 7300.
2427 will be rounded of to the nearest hundred as 2400.
∴ 7300 − 2400 = 4900
#### Page No 21:
Option (b) is correct.
1 million (1,000,000) = 10 lakh (10 $×$ 1,00,000)
#### Page No 21:
Option (b) is correct.
The number is 1512. Its digit at the tens place is 1 < 5.
So, the given number is rounded off to the nearest hundred as 1500.
#### Page No 21:
Option (c) is correct.
In Roman numerals, V, L and D are never repeated and never subtracted.
#### Page No 21:
| Periods | Crores | Lakhs | Thousands | Hundreds | Tens | Ones |
| --- | --- | --- | --- | --- | --- | --- |
| Digits | 8 | 63 | 24 | 8 | 0 | 5 |
Using commas, we write the given number as 8,63,24,805.
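The Indian system groups the last three digits together and then pairs of digits after that. A short Python sketch of that grouping is shown below; `indian_commas` is just an illustrative name.

```python
def indian_commas(n):
    # Last three digits form one group; the remaining digits are split into pairs.
    s = str(n)
    head, tail = s[:-3], s[-3:]
    parts = []
    while len(head) > 2:
        parts.insert(0, head[-2:])
        head = head[:-2]
    if head:
        parts.insert(0, head)
    return ",".join(parts + [tail])

print(indian_commas(86324805))  # 8,63,24,805
```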
#### Page No 21:
(i) 1 crore = 100 lakh
(ii) 1 crore = 10 million
(iii) 564 when estimated to the nearest hundred is 600.
(iv) The smallest 4-digit number with four different digits is 1023.
#### Page No 22:
F
Place value of 5 in 85419 = 5000
Face value of 5 in 85419 = 5
∴ Their difference = 5000 − 5 = 4995
#### Page No 22:
T
In Roman numerals, V, L and D are never repeated and never subtracted.
#### Page No 22:
T
Greatest five-digit number = 99999
Successor of 99999 = 99999 + 1 = 100000
#### Page No 22:
T
The number is 46,530. Its digit at the tens place is 3 < 5.
So, the number 46,530 is rounded off to the nearest hundred as 46,500. |
proofpile-shard-0030-67 | {
"provenance": "003.jsonl.gz:68"
} | “But if the watchman see the sword come, and blow not the trumpet, and the people be not warned;
if the sword come, and take any person from among them, he is taken away in his iniquity;
but his blood will I require at the watchman's hand."
Ezekiel 33:6
"A righteous man falling down before the wicked is as a troubled fountain, and a corrupt spring."
Proverbs 25:26
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
### Still don't think Obama is a Muslim?
For all the naysayers, it's hard to argue with the evidence!
Why did Obama smile as if he knew the Arabic, BEFORE the translation was given by Bergdahl's father?
(click below to view the video) |
proofpile-shard-0030-68 | {
"provenance": "003.jsonl.gz:69"
} | # How many 3 digit even numbers are there(No Repetition)?
First find numbers ending with 0
So, 1's place-1 10's place-9 100's place-7 (2 digits are already consumed and 0 can't be used)
So 7*9*1.Im i doing the right thing?
• Why do you say "2 digits are already consumed"? Does the problem state that the digits must be distinct? – ShreevatsaR Feb 16 '14 at 10:55
• @ShreevatsaR Yes,Edited question title – techno Feb 16 '14 at 10:56
• (1) You need to consider other possible last digits too, not just 0. (2) You're saying that for the 100s place you can't have the other 2 digits, nor can you have 0. This is correct, but that's not necessarily 3 digits ruled out: it depends on whether 0 is one of the last 2 digits. – ShreevatsaR Feb 16 '14 at 11:02
First find the even numbers ending in zero: with 0 fixed at the one's place (1 choice), there are 9 choices for the hundreds place and 8 for the tens place. Now count the even numbers not ending in zero, i.e. ending in 2, 4, 6 or 8: there are 4 choices at the one's place, then 8 choices at the hundreds place (0 cannot be used and one digit is already taken), and 8 digits are left for the tens place. So the total number of digit sequences is 9*8*1 + 8*8*4 = 72 + 256 = 328. When zero is used at the one's place there is no problem filling the tens or hundreds place, but when zero is not used, after putting one of 2, 4, 6, 8 at the one's place we have to make sure that 0 does not end up in the most significant place. To avoid that we fill the hundreds place next, which ensures this.
• +1 :) this the only answer that matches with the solution in my book – techno Feb 16 '14 at 11:08
• Why do you go for the 100's place after filling the Unit's place?Rather than going to the ten's place? – techno Feb 16 '14 at 11:17
Units digit can be among 0,2,4,6,8.
CASE A: If 0 is at units place, No. of terms possible is 1x9x8 = 72
CASE B: If 0 is not at units place but at hundred's place, No. of terms possible is 4x1(0)x8 = 32
CASE C: If 0 is not there at all, No. of terms possible is: 4x8x7 = 224
total no's = 224+32+72 = 328
• There cannot be a zero in the hundreds place since you would then not have a three-digit number. It looks like you meant a zero in the tens place. – N. F. Taussig Nov 6 '15 at 11:32
Case 1: all even 3-digit strings with no repeated digit, a leading zero being allowed: $5 \times 9 \times 8 = 360$.
Case 2: those among them with "0" at the hundreds place: $4 \times 8 \times 1 = 32$.
The number of 3-digit even numbers is Case 1 minus Case 2, i.e. $360 - 32 = 328$.
Total three digits numers are 900
We have total 10 digits which are 0,1,2,3,4,5,6,7,8,9
But a leading zero does not have any value so we cant have zero at 100th place
So for 100th place we have 9 digits, for tenth place we have 10 digits and for units place we have 10 digits
So total 3 digits number are 9*10*10 =900
If we talk about even numbers then we have only 5 digits fot units place 0,2,4,6,8 So total 3 digits even numbers would be 9*10*5 =450
Here we consider numbers of the form xyz, where each of x, y, z represents a digit under the given restrictions. Since xyz has to be even, z has to be 0, 2, 4, 6, or 8.
If z is 0, then x has 9 choices.
If z is 2, 4, 6 or 8 (4 choices) then x has 8 choices. (Note that x cannot be zero)
Therefore, z and x can be chosen in (1 × 9) + (4 × 8) = 41 ways. For each of these ways, y can be chosen in 8 ways.
Hence, the desired number is 41 × 8 = 328; that is, 328 three-digit even numbers exist with no repetitions.
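For readers who want to double-check the count of 328, a brute-force pass over all three-digit numbers (assuming "no repetition" means the three digits are pairwise distinct) is quick:

```python
count = 0
for n in range(100, 1000):
    digits = str(n)
    # even, and all three digits distinct
    if n % 2 == 0 and len(set(digits)) == 3:
        count += 1
print(count)  # 328
```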
On the number line the 3 digit numbers are : 100 - 999 So if I would start at 1 - 999 then I have 999 numbers in total. from those I will take away the one digit numbers : 9 and the two digit numbers: 90 I.e: 999-99 = 900 3 digit numbers.
Now to count the even numbers: we start at 100 and skip count by 2. Or perhaps we can divide by 2: 900/2 = 450 even numbers.
Please let me know if you agree.
The total 3 digit numbers are 999 including preceding zeros and there are 999/2 even numbers. so total three digit even numbers are 499.
• $100$ itself is a 3 digit number so i believe that there are $1000$ 3 digit numbers, not $999$ – Yiyuan Lee Feb 16 '14 at 11:08 |
proofpile-shard-0030-69 | {
"provenance": "003.jsonl.gz:70"
} | # NCERT Solutions Class 10 Science Chapter 3
## NCERT Solutions for Class 10 Science Chapter 3
NCERT Class 10 Science Chapter 3 teaches students everything about metal and non-metals. The occurrence of every metal is explained to students, and they also learn about the distinguishing properties of metals and non-metals. To improve their understanding of Chapter 3, students should not skip attempting questions given at the end of the chapter. They can also refer to NCERT Solutions for Class 10 Science Chapter 3 to get help in solving the questions accurately.
NCERT Solutions for Class 10 Science Chapter 3 are prepared by subject matter experts, so students do not have to worry about the authenticity of the study material. Answers to every question given in the textbook are included in NCERT Solutions for Class 10 Science Chapter 3
NCERT Solutions for Class 10 Science Chapter 3 can be your practice guide or last-minute revision material. Students can access these solutions on the website or the Extramarks app.
## NCERT Solutions for Class 10 Science Chapter 3 - Metals and Non-metals
Every element can be broadly classified into two types – Metals & Non-Metals. Other than mercury, all other metals are solid at room temperature. Being solid is one of the most vital properties to qualify as a metal. Other properties that make an element a metal are:
• Metals are malleable & ductile
• Metals are lustrous
• Metals are also good conductors of heat & electricity
• Metals are sonorous
• Metals lose their electrons to form positively charged ions
• Metals chemically react with oxygen to transform into basic oxides
On the other hand, elements that are classified as non-metals are usually not malleable or ductile. They are also not lustrous, except for iodine. Other than graphite, all the non-metals are bad conductors of electricity as well as heat. Non-metals also gain electrons to form negatively charged ions. Just as metals react with oxygen to form basic oxides, non-metals react with oxygen to form acidic oxides.
Students might find Chapter 3 overwhelming because of the large number of new concepts that it covers. Thus, practising NCERT textbook questions is important to ensure that they understand the topics in-depth.
Referring to NCERT Solutions for Class 10 Science Chapter 3 can help students solve textbook questions with accuracy. The use of relevant examples in the answers make it easier for students to understand the applied logic. As the solutions are prepared by subject matter experts at Extramarks, students can be assured of 100 percent accuracy of the answers.
What makes NCERT Solutions stand out is the fact that they are drafted in a very simple and easy to understand language. Irrespective of the student’s level, they will be able to understand every concept and answer any question easily.
Even if you ace Class 10 Chapter 3 Science with NCERT Solutions for Class 10 science Chapter 3, you still have a long way to go. With Extramarks’ NCERT Solutions for Class 10 Science, you will be able to practise textbook exercises of every chapter. This means, no more nervousness and stress during exams.
In addition, Extramarks also offers NCERT Solutions for Class 10 Science for all the chapters. Check out chapter-wise NCERT Solutions for Class 10 Science below:
Chapter 1 - Chemical Reactions and Equations
Chapter 2 - Acids, Bases And Salts
Chapter 3 - Metals and Non-metals
Chapter 4 - Carbon and Its Compounds
Chapter 5 - Periodic Classification of Elements
Chapter 6 - Life Processes
Chapter 7 - Control and Coordination
Chapter 8 - How do Organisms Reproduce?
Chapter 9 - Heredity and Evolution
Chapter 10 - Light Reflection and Refraction
Chapter 11 - Human Eye and Colourful World
Chapter 12 - Electricity
Chapter 13 - Magnetic Effects of Electric Current
Chapter 14 - Sources of Energy
Chapter 15 - Our Environment
Chapter 16 - Sustainable Management of Natural Resources
## Benefits of NCERT Solutions for Class 10 Science
NCERT Solutions for Class 10 Science by Extramarks have one clear goal which is to get you those ‘extra marks’ that you would otherwise miss out on. Here are a few other benefits of Extramarks’ NCERT Solutions for Class 10 Science Chapter 3:
• The solutions are prepared by subject experts who have years of experience in teaching.
• All the answers are stated stepwise so that students can retain them in their mind for a long time.
• Every answer of every chapter in NCERT Solutions for Class 10 Science is written as per the CBSE guidelines.
• As the explanations are comprehensive, the fundamentals of the students get better.
## How Will Extramarks Study Materials Benefit Students?
Most students aspire to score high marks in the Class 10 Board Examination. But only a few put in the hard work for it. And from the ones who do, a lot of them still fail to secure marks as high as they expected. Let’s look at the ways how study materials by Extramarks can help students:
• Whether you are looking for mock tests, past years’ question papers or sample papers, you will find it all on Extramarks.
• A team of subject matter experts has prepared the study material while ensuring that it is highly accurate and easy to understand.
• All the study materials available on Extramarks are as per the latest guidelines by CBSE.
## Related Question
1. Give an example of a metal which
• Is a liquid at room temperature
• Can be easily cut with a knife
• Is the best conductor of heat
• Is a poor conductor of heat
Solution:
• Mercury is a metal that is liquid at room temperature
• Sodium and potassium can be easily cut with a knife
• Silver is the best conductor of heat
• Mercury is a poor conductor of heat
Q.1 Which of the following pairs will give displacement reactions?
(a) NaCl solution and copper metal
(b) MgCl2 solution and aluminium metal
(c) FeSO4 solution and silver metal
(d) AgNO3 solution and copper metal.
Ans-
(d) AgNO3 solution and copper metal
Q.2 Which of the following methods is suitable for preventing an iron frying pan from rusting?
(a) Applying grease
(b) Applying paint
(c) Applying a coating of zinc
(d) All of the above.
Ans-
(c) Applying a coating of zinc
Q.3 An element reacts with oxygen to give a compound with a high melting point. This compound is also soluble in water. The element is likely to be:
(a) calcium
(b) carbon
(c) silicon
(d) iron.
Ans-
(a) Calcium
Q.4 Food cans are coated with tin and not with zinc because
(a) Zinc is costlier than tin.
(b) Zinc has a higher melting point than tin.
(c) Zinc is more reactive than tin.
(d) Zinc is less reactive than tin.
Ans-
(c) Zinc is more reactive than tin and may react with the food items, making them unfit for consumption.
Q.5 You are given a hammer, a battery, a bulb, wires and a switch.
(a) How could you use them to distinguish between samples of metals and non-metals?
(b) Assess the usefulness of these tests in distinguishing between metals and non-metals.
Ans-
1. If a substance can be beaten into thin sheets with the help of a hammer, then it is a metal, whereas if it breaks into pieces, then it is a non-metal. We can use the battery, bulb, wires and switch to set up a circuit with the sample. If the sample conducts electricity and the bulb starts to glow, then it is a metal; otherwise it is a non-metal.
2. When a substance fulfils both the criteria, it can be confirmed as a metal. We know that there are some exceptions; for example, sodium is a metal which is not malleable and is in fact brittle. Graphite, a non-metal (an allotrope of carbon), is a good conductor of electricity. Hence, neither test on its own can confirm whether a sample is a metal or a non-metal.
Q.6 What are amphoteric oxides? Give two examples of amphoteric oxides.
Ans-
Amphoteric oxides are the oxides, which react with both acids and bases to form salt and water. Examples: Zinc oxide (ZnO) and Aluminium oxide (Al2O3)
Q.7 Name two metals which will displace hydrogen from dilute acids, and two metals which will not.
Ans-
Metals that are more reactive than hydrogen displace it from dilute acids. For example, sodium and potassium displace hydrogen from dilute acids. On the other hand less reactive metals like copper, silver do not displace hydrogen from dilute acids.
Q.8 In the electrolytic refining of a metal M, what would you take as the anode, the cathode and the electrolyte?
Ans-
In the electrolytic refining of a metal M:
Anode is impure, thick block of metal M
Cathode is thin strip or wire of pure metal M
Electrolyte is salt solution of metal M to be refined
Q.9 Pratyush took sulphur powder on a spatula and heated it. He collected the gas evolved by inverting a test tube over it, as shown in figure below.
(a) What will be the action of gas on:
(i) dry litmus paper?
(ii) moist litmus paper?
(b) Write a balanced chemical equation for the reaction taking place.
Ans-
(a) When sulphur is burnt in air then sulphur dioxide gas is formed.
(i) Sulphur dioxide gas has no action on dry litmus paper.
(ii) Sulphur dioxide gas turns moist blue litmus paper red because sulphur dioxide reacts with moisture to form sulphurous acid.
(b) S(s) + O₂(g) → SO₂(g), i.e., sulphur dioxide
Q.10 State two ways to prevent the rusting of iron.
Ans-
The two ways by which rusting of iron can be prevented are:
1. By oiling, greasing or painting the surface becomes waterproof and the moisture and oxygen present in the air cannot come into direct contact with iron. Hence, rusting is prevented.
2. By Galvanization: In this method an iron article is coated with a layer of zinc metal, which prevents the iron from coming in contact with oxygen and moisture. Hence, rusting is prevented.
Q.11 What type of oxides are formed when non-metals combine with oxygen?
Ans-
Non-metals combine with oxygen to form acidic oxides or neutral oxides. Examples of acidic oxides are SO2, CO2 etc. and examples of neutral oxides are NO, CO etc.
Q.12 Give reasons:
(a) Platinum, gold and silver are used to make jewellery.
(b) Sodium, potassium and lithium are stored under oil.
(c) Aluminium is a highly reactive metal, yet it is used to make utensils for cooking.
(d) Carbonate and sulphide ores are usually converted into oxides during the process of extraction.
Ans-
1. Platinum, gold and silver are used to make jewellery because they are very unreactive metals. Also, they are lustrous and do not corrode easily.
2. Sodium, potassium, and lithium are very reactive metals. They react vigorously with air as well as water; therefore, they are kept immersed in kerosene oil in order to prevent their contact with air and moisture.
3. Aluminium is a highly reactive metal and is resistant to corrosion. This is because aluminium reacts with oxygen present in air to form a thin layer of aluminium oxide. This oxide layer is very stable and prevents further reaction of aluminium with oxygen. Also, it is light in weight and a good conductor of heat. Hence, it is used to make cooking utensils.
4. Carbonate and sulphide ores are usually converted into oxides during the process of extraction because it is easier to obtain metals from their oxides as compared to their carbonates and sulphides.
Q.13 You must have seen tarnished copper vessels being cleaned with lemon or tamarind juice. Explain why these sour substances are effective in cleaning the vessels.
Ans-
Copper reacts with moist carbon dioxide in the air to form copper carbonate and, as a result, the copper vessel loses its shiny brown surface, forming a green layer of copper carbonate. Sour substances like lemon or tamarind contain citric acid, which neutralises the basic copper carbonate and dissolves the layer. That is why tarnished copper vessels are cleaned with lemon or tamarind juice to give the surface of the copper vessel its characteristic lustre.
Q.14 Differentiate between metal and non-metal on the basis of their chemical properties.
Ans-
| Metals | Non-metals |
| --- | --- |
| Metals are electropositive. They lose electrons readily to form cations. | Non-metals are electronegative. They gain electrons readily to form anions. |
| Metals are lustrous. | Non-metals are non-lustrous, except graphite. |
| Metals are good conductors of heat and electricity. | Non-metals are non-conductors of heat and electricity, except graphite. |
| Metals react with oxygen to form basic oxides, e.g. 4Na + O₂ → 2Na₂O. These oxides have ionic bonds. | Non-metals react with oxygen to form acidic or neutral oxides, e.g. 2C + O₂ → 2CO (neutral oxide) and C + O₂ → CO₂ (acidic oxide). These oxides have covalent bonds. |
| Metals react with water to form oxides and hydroxides. Some metals react with cold water, some with hot water and some with steam, e.g. 2Na + 2H₂O → 2NaOH + H₂↑. | Non-metals do not react with water. |
| Metals react with dilute acids to form a salt and evolve hydrogen gas (however, Cu, Ag, Au, Pt and Hg do not react), e.g. 2Na + 2HCl → 2NaCl + H₂↑. | Non-metals do not react with dilute acids; they are not capable of replacing hydrogen. |
| Metals act as reducing agents as they can easily lose electrons, e.g. Na → Na⁺ + e⁻. | Non-metals act as oxidising agents as they can gain electrons, e.g. Cl₂ + 2e⁻ → 2Cl⁻. |
Q.15 A man went door to door posing as a goldsmith. He promised to bring back the glitter of old and dull gold ornaments. An unsuspecting lady gave a set of gold bangles to him which he dipped in a particular solution. The bangles sparkled like new but their weight was reduced drastically. The lady was upset but after a futile argument the man beat a hasty retreat. Can you play the detective to find out the nature of the solution he had used?
Ans-
The solution he had used was Aqua regia which is the mixture of concentrated hydrochloric acid and concentrated nitric acid in the ratio of 3:1. It is a fuming, highly corrosive liquid that is capable of dissolving metals like Gold and Platinum. Since the outer layer of the gold bangles is dissolved in aqua regia so their weight was reduced drastically.
Q.16 Give reasons why copper is used to make hot water tanks and not steel (an alloy of iron).
Ans-
Copper is used to make hot water tanks and not steel because copper does not react with cold water, hot water or steam. However, iron reacts with steam. If the hot water tanks are made of steel (an alloy of iron), then iron would react vigorously with the steam formed from hot water.
3Fe + 4H₂O → Fe₃O₄ + 4H₂
That is why copper is used to make hot water tanks, and not steel.
##### FAQs (Frequently Asked Questions)
1. What are some things to remember while learning from NCERT Solutions for Class 10 Science Chapter 3?
NCERT Solutions for Class 10 Science Chapter 3 is a guide developed by subject matter experts to make textbook exercises easy to solve, learn and retain and finally to achieve good academic results. . If you want to make the most out of Class 10 science Chapter 3 NCERT Solutions, here are a few things to remember:
• Do not start studying from it at the last minute. Keep your precious last hours for revision only.
• Do not panic if you fail to understand a certain concept. Our experts are always there to help you via our website and app.
• Despite the easy language, it is normal to still find a certain concept or equation solving difficult. It is okay, understand that practise makes a student perfect!
2. Will I learn anything useful from NCERT Solutions for Class 10 Science Chapter 3?
NCERT Solutions for Class 10 Science Chapter 3 is more than just study material for your CBSE Class 10 Board Examination. From this, you will actually learn a lot of things that will be useful in your practical life as well as in your career. The answers in NCERT solutions are explained in detail, which give students an idea of how to attempt a question in the board exam in the right manner. By closely studying from NCERT Solutions, students can increase their chances of scoring higher marks in board exams.
3. Is Class 10 Science Chapter 3 an important chapter in CBSE Class 10 Science Board Examination?
Yes, Class 10 Science Chapter 3 is an important chapter in CBSE Class 10 Science Board Examination. To ensure that you answer every question from it correctly, refer to NCERT Solutions for Class 10 Science Chapter 3 by Extramarks’ on its website or app. |
proofpile-shard-0030-70 | {
"provenance": "003.jsonl.gz:71"
} | # OpenStax-CNX
# Review
Module by: Kathy Chu, Ph.D.
The next three problems refer to the following situation: Suppose that a sample of 15 randomly chosen people were put on a special weight loss diet. The amount of weight lost, in pounds, follows an unknown distribution with mean equal to 12 pounds and standard deviation equal to 3 pounds.
## Exercise 1
To find the probability that the 15 people lose an average of no more than 14 pounds, the random variable should be:
• A. The number of people who lost weight on the special weight loss diet
• B. The number of people who were on the diet
• C. The average amount of weight lost by 15 people on the special weight loss diet
• D. The total amount of weight lost by 15 people on the special weight loss diet
C
## Exercise 2
Find the probability asked for in the previous problem.
0.9951
## Exercise 3
Find the 90th percentile for the average amount of weight lost by 15 people.
### Solution
12.99
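A quick numerical check of the answers to Exercises 2 and 3, assuming (as the exercises intend) that the sampling distribution of the mean is approximately normal with mean 12 and standard error 3/√15; this sketch uses SciPy's normal distribution helpers.

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 12, 3, 15
se = sigma / sqrt(n)                               # standard error of the sample mean

print(round(norm.cdf(14, loc=mu, scale=se), 4))    # P(sample mean <= 14) ~ 0.9951
print(round(norm.ppf(0.90, loc=mu, scale=se), 2))  # 90th percentile ~ 12.99
```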
The next five problems refer to the following study: Twenty percent of the students at a local community college live within five miles of the campus. Thirty percent of the students at the same community college receive some kind of financial aid. Of those who live within five miles of the campus, 75% receive some kind of financial aid.
## Exercise 4
Find the probability that a randomly chosen student at the local community college does not live within five miles of the campus.
• A. 80%
• B. 20%
• C. 30%
• D. Cannot be determined
A
## Exercise 5
Find the probability that a randomly chosen student at the local community college lives within five miles of the campus or receives some kind of financial aid.
• A. 50%
• B. 35%
• C. 27.5%
• D. 75%
B (since 0.20 + 0.30 − 0.75 × 0.20 = 0.35, i.e. 35%)
## Exercise 6
Based upon the above information, are living in student housing within five miles of the campus and receiving some kind of financial aid mutually exclusive?
• A. Yes
• B. No
• C. Cannot be determined
B
## Exercise 7
The interest rate charged on the financial aid is _______ data.
• A. quantitative discrete
• B. quantitative continuous
• C. qualitative discrete
• D. qualitative
B
## Exercise 8
What follows is information about the students who receive financial aid at the local community college.
• 1st quartile = $250
• 2nd quartile = $700
• 3rd quartile = $1200
(These amounts are for the school year.) If a sample of 200 students is taken, how many are expected to receive $250 or more?
• A. 50
• B. 250
• C. 150
• D. Cannot be determined
### Solution
• C. 150
The next two problems refer to the following information: P(A) = 0.2, P(B) = 0.3; A and B are independent events.
• A. 1
• B. 0
• C. 0.40
• D. 0.0375
B
|
proofpile-shard-0030-71 | {
"provenance": "003.jsonl.gz:72"
} | # 3. Debug Mode
(Want to just log errors and stack traces? See Application Errors)
The flask script is nice to start a local development server, but you would have to restart it manually after each change to your code. That is not very nice and Flask can do better. If you enable debug support the server will reload itself on code changes, and it will also provide you with a helpful debugger if things go wrong.
To enable debug mode you can export the FLASK_DEBUG environment variable before running the server:
```
$ export FLASK_DEBUG=1
$ flask run
```
(On Windows you need to use set instead of export).
This does the following things:
1. it activates the debugger
2. it activates the automatic reloader
3. it enables the debug mode on the Flask application.
There are more parameters that are explained in the Development Server docs.
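For completeness, the flask run command above needs an application to serve. A minimal sketch is shown below; the module name hello.py and the route are placeholders for this example, and the FLASK_APP environment variable tells the CLI where to find it.

```python
# hello.py -- a minimal application for trying out debug mode.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Raise an exception in here on purpose to see the interactive debugger page.
    return 'Hello, debug mode!'
```

```
$ export FLASK_APP=hello.py
$ export FLASK_DEBUG=1
$ flask run
```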
Attention
Even though the interactive debugger does not work in forking environments (which makes it nearly impossible to use on production servers), it still allows the execution of arbitrary code. This makes it a major security risk and therefore it must never be used on production machines.
[Screenshot of the debugger in action]
Have another debugger in mind? See Working with Debuggers. |
proofpile-shard-0030-72 | {
"provenance": "003.jsonl.gz:73"
} | # How to close the (local) Debug window/palette
I have this local debug window that is so annoying
How can I hide it? What is the purpose of it?
Yes, you can. As a general rule, when dealing with frontend questions don't forget to specify your version and platform – belisarius Jun 14 '12 at 13:32
You can turn it on/off with the menu entry Evaluation > Debugger – Heike Jun 14 '12 at 13:34
@BrettChampion I wasn't paying attention and answered without realising that we're on meta. – Heike Jun 14 '12 at 13:41
Easier to mistype than to misprintscreen – Rojo Jun 14 '12 at 15:22
(Or uncheck Debugger to turn off the entire apparatus.) |
proofpile-shard-0030-73 | {
"provenance": "003.jsonl.gz:74"
} | # Name each of the following complexes. Make certain your answer identifies the correct structural or geometric isomer (do not worry about optical isomerism for this question)?
$A .$ $\text{Pentaamminebromorhodium(III)}$, for ${\left[R h {\left(N {H}_{3}\right)}_{5} B r\right]}^{2 +}$
$B .$ $\text{Tetraamminezinc(II) nitrate}$ |
proofpile-shard-0030-74 | {
"provenance": "003.jsonl.gz:75"
} | # Astrophysics/Cosmology question - redshift
1. Oct 24, 2011
### tourjete
1. The problem statement, all variables and given/known data
Our universe is observed to be flat, with density parameters $\Omega_{m,0} = 0.3$ in non-relativistic matter and $\Omega_{\Lambda,0} = 0.7$ in dark energy at the present time. Neglect the contribution from relativistic matter.
At what redshift did the expansion of the universe start to change from deceleration to acceleration?
2. Relevant equations
a(t=change) = 0
a(t=now) = 1 (? is this convention?)
$\left(\frac{da}{dt}\cdot\frac{1}{a}\right)^2 = H_0^2\,E^2(z)$, where $E^2(z) = \Omega_{\Lambda,0} + (1-\Omega_0)(1+z)^2 + \Omega_{m,0}(1+z)^3$
$z = a_0/a(t) - 1$, where $a_0$ denotes the scale factor at the present time
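(For what it's worth, here is a quick way to tabulate the $E^2(z)$ expression above for the given density parameters. This only evaluates the stated formula as written and does not by itself answer the deceleration/acceleration question.)

```python
om, ol = 0.3, 0.7          # Omega_m,0 and Omega_Lambda,0 from the problem

def E2(z):
    # E^2(z) exactly as quoted above; the curvature term (1 - Omega_0) vanishes for a flat universe.
    return ol + (1.0 - om - ol) * (1.0 + z) ** 2 + om * (1.0 + z) ** 3

for z in (0.0, 0.5, 1.0, 2.0):
    print(z, E2(z))
```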
3. The attempt at a solution
First I thought about setting (da/dt)*(1/a) equal to zero and solving the equation for z. However, this gives a redshift of -2.32 which doesn't really make sense as the negative value implies that there is a blueshift (right?)
Upon further thought, I realized that a = 0 means that the lefthand side of the equation should be undefined / go to infinity... so could z be infinity? This doesn't really make sense to me either but it's all I've got.
I can manipulate the other equation I found to also give z=infinity by setting a(t) equal to zero.
I'm sure I'm forgetting some important concept but that's what I've got so far. |
proofpile-shard-0030-75 | {
"provenance": "003.jsonl.gz:76"
} | # Determining the number of maximum digits with coincidence restriction using Pigeonhole Principle
Given set $$S$$ that contains all numbers of base 3 that share the following characteristics:
• every element in $$S$$ share the same number of digits, $$x$$
• every element in $$S$$ can have leading zeros
We aim to solve the optimization problem
$$\max\{x\geq 2 \mid \exists \{n_1,\ldots,n_6\}\subset S: i\neq j \implies n_i, n_j \text{ have at most one coincidence}\}$$
(The coincidence of two numbers is the number of positions at which they have the same digit. 2010 and 1220, for instance, have a coincidence of one.)
I manually listed out the elements while increasing $$x$$ by 1, but soon realized that this was getting too tedious. I proved by trial and error that when $$x=4$$, the condition holds, as $$S=\{0120, 1200, 2010, 0001, 1111, 2221\}$$ works. I also suspect that when $$x=5$$, the condition does not hold (also based on trial and error), but I'd like to know if there is a mathematical way to confirm that.
I originally planned on using the Pigeonhole Principle, but am quite unsure as to how to utilize it for the proof. Any insight is appreciated.
In $$5$$ digits, you can find a set of size 6 with the coincidence property:
$$\{00011, 01102, 10120, 12212, 21221, 22000\}.$$
In $$6$$ digits the best you can do is size 4, e.g.,
$$\{000111,011022,102202,220000\}.$$
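These sets (and the 4-digit one from the question) are easy to verify mechanically; a small Python sketch, assuming the numbers are written as equal-length strings with leading zeros kept:

```python
from itertools import combinations

def coincidence(a, b):
    # number of positions at which the two strings carry the same digit
    return sum(x == y for x, y in zip(a, b))

S4 = ["0120", "1200", "2010", "0001", "1111", "2221"]
S5 = ["00011", "01102", "10120", "12212", "21221", "22000"]
S6 = ["000111", "011022", "102202", "220000"]

for S in (S4, S5, S6):
    print(all(coincidence(a, b) <= 1 for a, b in combinations(S, 2)))  # True for each set
```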
## Claim: $$x = 5$$
We'll show that a set of size four is the best you can do with six digits.
One can appeal to symmetry to simplify the situation: The coincidence between two numbers is not affected if you
• reorder the digits (of both numbers in the same way), or
• apply some bijection of $$\{0,1,2\}$$ to one of the digit positions (applied to both numbers simultaneously).
(E.g., the coincidence of 112 and 110 is the same as 211 and 011, and is the same as 111 and 211.)
Let $$S$$ denote a set of 6-digit numbers who pairwise share at most one digit.
Without loss of generality, applying symmetries to each of the numbers of $$S$$ you can assume that one of your numbers in $$S$$ is $$0^6 := 000000$$, and the other is
• $$01^5 := 011111$$, or
• $$\phantom0 1^6:=111111$$,
and we have two cases depending on this second value.
## Case 1: $$1^6$$
If $$m,n \in S$$ share at most one digit with $$0^6$$ and $$1^6$$, then $$m,n$$ can have at most one $$0$$-digit and one $$1$$-digit, and hence each have at least four $$2$$-digits. But then $$m$$ and $$n$$ share at least two $$2$$-digits (pigeonhole!), so must be equal.
I.e., if $$S$$ contains two numbers which have coincidence $$0$$, $$S$$ has cardinality at most 3.
## Case 2: $$01^5$$
(This is the first rigorous treatment of this case I could find. Brute force is probably easier from this point!)
In this harder case, if $$m, n, p \in S$$ share at most $$1$$ digit with $$0^6$$ and $$01^5$$, they each contain at most one $$0$$-digit and two $$1$$-digits, i.e., at least three $$2$$-digits.
Moreover, to avoid the result of the last case, at least two of these numbers, $$m$$ and $$n$$ say, must have exactly three $$2$$-digits, which implies that their first digit must be a $$0$$ or a $$1$$, and the three $$2$$-digits lie in the other five places.
Some pigeonhole principle style thinking says: these $$2$$-digits for $$m$$ and $$n$$ must collectively occupy all five of these spaces, or else they would agree in two or more places.
If they do occupy all five slots, then whether $$p$$ has three $$2$$-digits or more, we find that at least three of these have to be allocated to the same five spaces, and there is no way to avoid agreeing with $$m$$ or $$n$$ in at least two digits (pigeonhole principle on two groups of slots: those occupied by $$m$$, and those occupied by $$n$$ alone or by both).
That is to say, at least two of $$m,n,p$$ coincide in at least two places and are therefore equal. That is, $$S$$ cannot contain more than four elements. |
proofpile-shard-0030-76 | {
"provenance": "003.jsonl.gz:77"
} | October 30, 2006
Old hacker t-shirts.
[Image: packetstorm-front.jpg]
Years of being associated with the hacker community has led me to accumulate a number of t-shirts. Elias Levy suggested that I should take pictures of them before they end up being turned into an art project.
Even more old hacker shirts.
Elias is also contributing pictures of his shirts to the OldHackerShirts tag. We must preserve these invaluable pieces of our shared heritage, even if they are sometimes horribly sweat-stained*.
* I can only attest to the state of my shirts.
November 16, 2006
Punditry, PatchGuard, and Diversity
I, like many other members of the security community, have been thinking about the PatchGuard architecture that will be implemented in Vista for the past few weeks. I resisted blogging (erg) about it because I don't want to sound like a pompous ass, but might as well get my thoughts down on the subject rather than have them rattle around.
PatchGuard is essentially Microsoft's method for handling the volume of malware in the wild. Hooking kernel calls will become far more difficult, device drivers will have to be signed, and software that traditionally requires access to non-userland features, like firewalls and AV tools, will have to go through APIs standardized out of Redmond.
Obviously, this move raised the hackles of the traditional consumer AV organizations. Any technological edge that one had over the other that involved interfacing with the kernel, and possibly preventing more malicious software, has been eliminated. If one of the third party vendors requires an avenue into the kernel that is not provided, they have to make a formal request to Microsoft for the API feature and wait for a subsequent Service Pack to provide it.
Normalizing access to the kernel is a "good thing" from an architecture standpoint. Microsoft can't hope to manage security threats in Windows unless it reduces the attack surface, or the number of possible entry points that can be used by an attacker. Third party vendors, however, face compression of their margins as Microsoft enters the space and technological innovation in this critical area is standardized across the industry.
At face value, this leaves us with the consumer-grade security products industry on the ropes and a vastly more secure operating system, all because of interface standardization. An opposing view comes forth when we consider the issue of "software diversity". This discipline, which I spent a fair bit of time studying, asserts that populations of systems are more secure when they are "different", or do not share common faults. In non-infosec terms, this is equivalent to diversifying a financial portfolio to reduce the risk of loss associated with correlated securities. By standardizing all security software to essentially the same kernel interface, a new common fault, and a new target, is introduced. We won't know until Vista is widely deployed if the drop in diversity incurred in the standardization of security will offset the gains made by the changes made by PatchGuard.
November 24, 2006
On Testing, Part 1.
Testing a security product can sometimes be a very hard job. I don't mean internal QA-type testing, where you look for logic or syntax flaws in a system; I am talking about validating that a technology is effective against a difficult-to-enumerate threat. If you are testing a flaw finder, you can create code with a specific number of flaws and then refine the code until all the flaws are detected. Likewise, if you are writing a vulnerability scanner to search for known holes (ala Nessus), you can construct a pool of example systems where the flaw can be found.
There are many situations where a known set of test vectors cannot be created, making the validation of a security technology somewhat hairy. What happens when the product you are testing is designed to catch threats that are rapidly evolving? Building corpora of known threats for testing against a live system is somewhat futile if the time between when the corpora is constructed and used for testing is long enough for the threat to evolve significantly. Either the live system has to be tested against live data, or the system, which most likely has been fetching updates, has to be "rolled back" to the state it was in at the time each element in the test corpus was first detected.
Consider anti-spam systems, for example. Performing an accurate test of our software was one of the difficulties my organization, Cloudmark, has had with potential customers. One thing we stress, over and over, is that the test environment has to be as close to the production environment as possible, especially from a temporal standpoint. Just as users don't expect their mail to be routinely delayed by 6 hours before being delivered, evaluators shouldn't run a 6 hour-old spam corpus through the system to evaluate its efficacy as a filter. As the time between when a message is received at a mail gateway and when it is scanned by the filter increases, the accuracy of the evaluation should approach perfection, thus invalidating the test.
The "accuracy drift" of the anti-spam system over the course of 6 hours would be insignificant if it wasn't for the fact that spam evolves so damned fast. If spam didn't evolve, then Bayesian filters and sender blacklists would have been the end-all be-all solution for the problem. The past year has seen more and more customers realize, sometimes before we even talk to them at length about testing, that a live data stream is essential for evaluating a new anti-spam solution. I suspect this is because they found their previous methodology lacking when it showed that the competitor's product they bought performed more poorly than predicted by the test.
I started out this post by saying I would discuss security products, and so far I have only mentioned anti-spam systems. The issues with testing became apparent in this area first because of the number of eyes watching the performance of anti-spam systems, i.e. every e-mail user on the planet. In a later post, I will discuss why this also matters for both other specific security systems and the security space in general. For now, I am heading off to see a good friend before he leaves on an extended trip to the .eu.
November 26, 2006
On Testing, Part 2.
I had begun discussing the topic of testing security products in a previous post, where I began discussing the difficulty seen in evaluating security products. Essentially, the issues revolve around generating test vectors that is representative of the current threat state. If the system is attempting to counter a rapidly evolving security threat, then the time between when the test vector is generated and the time when the test is performed becomes critical for the fidelity of the test. For anti-spam systems, the length of time between when a test vector and when a test is conducted becomes critical in trying to quantify the accuracy of the solution once it is in production; spam evolves so fast that in a matter of minutes test vectors are no longer representative of the current spam state.
What about other filtration methods? In the past, Anti-Virus systems had to contend with several hundred new viruses a year. A set of viruses could easily be created that would be fairly representative of what a typical user would face for many days or weeks, as long as both the rates of emergence and propagation of new viruses was "low enough". This assumption, which no longer holds, worked very well when viruses were created by amateurs without a motive other than fame. Contemporary viruses are not written by kids screwing around, but by individuals attempting to build large networks of compromised home machines with the intention of leasing them out to for profit. This profit motive drives a far higher rate of virus and malware production than previously seen, as exemplified by the volume of Stration/Warezov variants, which have been causing many AV companies fits in their attempts to prevent this program from propagating all over the place. By testing against even a slightly-stale corpus, AV filter designers don't test against new variants, allowing them to claim far higher accuracy numbers than their products actually provide.
What's the big deal if people don't correctly perform testing? Well, engineers typically design and build systems to meet a specification, and they place their system under test to verify that the spec is being met. If their testing methodology is flawed, then their design is flawed. Eventually, these flaws will come to light in the public eye, as consumers start to realize that a product which claims 100% accuracy has been allowing an awfully high number of viruses to get through.
I am by no means the first person to discuss testing of security products. AV accuracy received quite a bit of attention when Consumer Reports attempted to test AV systems by using newly created viruses rather than the standard corpus. While their attempt at devising a new testing methodology was commendable, it is still not representative of how threats appear on the Internet. Using new, non-propagating viruses to test an AV system begs comparisons to the proverbial tree that falls in a forest with no one around to hear it. Additionally, it isn't the incremental changes in viruses that are difficult to catch; it is the radical evolutions in viruses, as well as the time required for the AV vendors to react, that we have to be concerned about. These are things that can't be modeled via corpus testing, but only via extended testing on live traffic.
We should be asking why people don't test more frequently on live data as opposed to corpus testing. I suspect it is because of two reasons: labor and repeatability. With corpus testing, you hand verify each element in the corpus as either being a virus once, and that cost is amortized over every test you conduct using the corpus. This isn't exactly an option with live testing, as every message that is either blocked or passed by the filter has to be hand-examined. There is also the issue of testing repeatability, where re-verification of previous results becomes difficult as the live feed evolves. Just because something is hard doesn't mean it shouldn't be done, however.
While systems are under live testing, the content they are filtering is being actively mutated to evade the system under test, essentially creating a multi-player noncooperative game with a limited number of participants. I will continue this discussion by examining the ramifications caused by this game in my next post.
November 30, 2006
On Testing, Part 3.
I have been commenting on the testing of security software, specifically anti-spam and anti-virus products. The main point I made in both of those posts was that testing has to be on live data feeds, regardless of how difficult the task, because the threats evolve at such a high rate that corpus-based testing quickly becomes stale and does not represent the true state of incoming traffic.
In situations where there are a limited number of security vendors and adversaries, even live testing becomes extremely difficult. Let's consider an extreme case, where there is only one security vendor and multiple adversaries. Every single system is identical, running up to date anti-virus packages. (Yes, I fully realize this is a completely unrealistic example, but bear with me.) From the standpoint of the testing and user community, the accuracy of the system is perfect; no viruses are seen by the system, as they don't even have an opportunity to propagate. At the same time, virus writers realize there is a huge, untapped market of machines just waiting to be compromised if they could only gain a foothold. These guys sit around and hack code until a vulnerability is found in the AV system, and upon finding it, will release a virus that exploits this in the wild.
Before the virus is released, the accuracy of the system is:
1. 100%: it catches all known viruses.
2. 0-100%: there is no way to test it.
After the virus is released, havoc breaks out, aircraft fall out of the sky, and dogs and cats start living together. 5% of all computers worldwide are infected before the vendor releases a patch. If the vendor was able to move faster, the number of compromised systems would have been only 1%, but left to its own devices, the virus would have compromised every system connected to the net. In this situation, the accuracy of the system is:
1. (1 - 1/(# of old viruses))*100%: only one virus couldn't be stopped.
2. 0%: no viruses were in circulation at the time except for the one that caused mass havoc.
3. (1 - (# of compromised systems)/(total # of systems))*100%: based on the expected number of compromised systems at the end of the virus' run.
The third of these three accuracy measures seems the most appropriate, and the most flexible given a variety of network and economic conditions adversary styles. The measure, which is effectively the expectation of exploitation for a given host, is what is used today by anti-spam system evaluators. It is a slightly more sophisticated way of saying "what is the probability that a piece of spam will get through."
From a general security standpoint, however, it covers a difficult and often ignored parameter critical to the accuracy of a security product: response time. If the window of vulnerability between when the virus first appears and when signatures are issued is shrunk, the accuracy expressed by this metric improves. In fact, the Zero-Hour Anti-Virus industry is an emergent cottage industry in the security space. Ferris covered it back in 2004, and I talked about it at Virus Bulletin 2006.
Many of these zero-hour technologies are being used primarily in the message stream, but this probably won't last for long. I suspect the technology popped up here first because of the sheer volume of e-mail based viruses as well as the ease of which service providers, who ultimately end up spending money on these technologies, can quantify its cost. They store all mail then forward it along, unlike web-based trojans which just fly through on port 80, and have an opportunity to actually examine the number of viruses. As industry gains experience with automated means of identifying and distributing fingerprints or signatures for newly identified malware, we will see it spring up in other places as well.
December 7, 2006
Maneuver Warfare and Infosec Products
The modern practice of network security is essentially an exercise in information warfare. The two competing parties, namely the network operators and the botnet managers, are continually evolving to combat the other's tactics, each driven by economic motives. The attackers are attempting to create a distributed services platform out of the defender's systems for delivering... rich media content in the form of image spam, phishing landing pages, and DDoS packets, while the defenders are trying to keep their employer's underlying infrastructure in one piece. This is a very old analogy, one exploited heavily by individuals looking to grab funding earmarked for national defense or attempting to scaremonger groups into the potential threat of an "Electronic Pearl Harbor". The use of these analogies by demagogues does not make them any less apropos; there are many interesting conclusions that can be drawn from the application of modern military theory to the information security space.
Let's consider the somewhat popular work of John Boyd and the tenets of maneuver warfare. Maneuver warfare emphasizes rapid movement, distributed decision making, and dynamism of tactical objectives rather than the costly brute strength of an attrition campaign. This method of warfare has likely been around since the dawn of interstate combat, with Hannibal's tactics at Cannae serving as a brilliant example. In a briefing entitled Patterns of Conflict, Boyd formalized these ideas into what is now referred to as the OODA Loop. This is an embarrassingly brief description, but Boyd viewed warfare as a continuous cycle of Observation, Orientation, Decision, and Action, and held that those who succeed in warfare are those who can correctly execute the loop in the shortest period of time. Another way of viewing it is that whoever can predict their opponent's next move and act/react before the other party can assess their situation will win the conflict. This can only be achieved by employing a fast operational tempo, rapidly altering tactics, obscuring your decision state from the enemy, and reducing infrastructure-based friction, such as communication cost.
The most effective infosec schemes on the market today rely upon principles that can be viewed as derivative from these lessons. The effective DDoS and Anti-Virus systems available today and under development seem to work by employing:
• Large sensor networks to reduce observation time.
• Automated analysis schemes with either zero in-loop human interaction or a slice of massive amounts of distributed human interaction to minimize feedback time.
• Rapid decision deployment to clients.
• Massive monitoring to detect and correct poor decisions.
• A large variety of detection and response tactics.
• Ability to quickly roll out new tactics in light of effective evasion methods.
As the tempo of financially-driven security events, i.e. spyware and its ilk, increases, any security system that is solely dependent on human-scale timelines to make decisions will be labeled ineffective. Solutions dependent upon individual decision makers will have to complement their scheme with rapid reaction schemes or face continually decreasing accuracy figures.
December 19, 2006
Everyone point and laugh... (and why I should be faster with this site)
... at Checkpoint for buying NFR. Matasano and Tom beat me to the laugh, however. This is a lousy consolation prize for Sourcefire, which they attempted to buy last year. Does anyone even run NFR anymore?
December 22, 2006
NFR's Market Penetration.
I don't have any decent figures on NFR's market penetration, but I do know that the Sourcefire/Checkpoint deal was nixed because of security concerns. While the deal was canned shortly after the whole Dubai ports debacle, it was likely not due to xenophobia. The specter of a foreign government having ownership of a network monitoring technology with wide penetration in the defense sector was clearly unacceptable. I guess the implicit message here is that not many people use NFR anymore.
December 25, 2006
wavebubble
As a Christmas gift to the world, Lady Ada has posted the design for a microprocessor-controlled RF jammer called WaveBubble. It covers most of the important consumer bands pretty effectively, including WaveLan and GPS. She did an excellent job with the design, especially given the relative lack of equipment available in her lab.
I may have provided some assistance with the specs and layout for the RF chain, which is something I haven't spent much time looking at since I worked here.
January 12, 2007
New Blog: Matt Blaze's Exhaustive Search
UPenn Professor Matt Blaze has launched a blog on the first of the year. His cross-disciplinary writings on human-scale security are always worth reading, and it is probably worthwhile throwing his site into your RSS list.
Making money on stock spam.
Spam Stock Symbols
There have been many blog posts that basically say making money on stock spam is impossible, but I have to disagree. Sure, if you were to go long on the securities, you will lose a fortune. Over the short term, however, the spammers appear to be making a mint. I wrote a short article for an upcoming issue of IEEE Security and Privacy that essentially says that there is so much money being made on thinly traded equities by spammers that it is driving innovation in spam generation. I'll throw up a post once the magazine hits the presses.
January 13, 2007
More discussion on IronPort acquisition.
Quality of Life
This is a followup to a post I made earlier. Multiple analysts have chimed in on the IronPort acquisition, basically saying that all the old guard security companies are trying to grab a piece of the anti-spam pie.
January 19, 2007
90 years young.
Ztel1b
Today is the 90th anniversary of the transmission of the Zimmermann note.
January 31, 2007
Software diversity discussion over at nCircle
windows of our minds
An interesting write-up on software diversity popped up on the nCircle blog. In the past, this sort of crazy talk in industry caused authors to lose their jobs. Leveraging diversity to increase the attack tolerance of a network received attention in places relatively insulated from industry politics; I did some work for my Ph.D. that showed that the allocation of diversity could be expressed as a graph theory problem and that diversity is an effective method for slowing a virus. Tim Keanini isn't trying to point fingers but is attempting to describe economically efficient means by which diversity can be realized in today's data centers.
February 17, 2007
Small steps and shiny buttons.
MEAT Buttons
It's been a few weeks since I posted anything substantial here. I took on a new role at work that has cut into the time I spend abstracting random security problems into bigger conceptual issues. That hasn't prevented me from writing, however. My article on stock spam, referenced here, made it into S&P this month. This is the first magazine article that I have written professionally, and while this may sound extremely dispassionate, the experience was very enjoyable. I like to write, and I didn't feel the pressures of proof and novelty that came with many of the academic publications I worked on in the past.
I started this blog as a scratch pad that I could use to transcribe random thoughts on techniques and trends in the industry, and then later on bake those into full blown articles for later consumption. The three testing articles have been repurposed for a work that will be published in Virus Bulletin shortly. The confidence that I gained by first jotting down random thoughts on the topic, sharing them with my community, then assembling them into a full blown article was invaluable, and a great way, for me at least, to build up an idea pipeline. Making sure I keep feeding the pipeline and posting blog entries will continually be a challenge, but at least I can establish milestones that are more finely grained than "concept" and "published work".
P.S. The picture is from the vendor table at the MEAT's 5 year anniversary party, held at DNA Lounge. I will post a few more pictures when I pull them off the camera or when they pop up on the DNA page.
March 2, 2007
Stock Spam, AV Testing articles now available.
Bird Flu Virus H5N1
I put up the Anti-Virus Testing and Stock Spam articles for public consumption.
May 18, 2007
What the hell have I been doing? Part 2: Data Representation
Like it or not, any analysis work that you do is pretty much worthless unless you are able to present the data effectively. Effective data presentation becomes more difficult when new data has to be consumed on a regular basis. Hand-massaging the information is forced to take a back seat to automation, otherwise you (the analyst) will spend your entire life recreating the same report. The data also has to be extremely accessible, otherwise your customers will just not even bother looking at the information.
For example, let's consider the story of some data analyst named... Rudiger. Rudiger has a large volume of numbers about... virus outbreaks locked up in SQL somewhere. Using the tried and true methods acquired as a grad student, Rudiger glues some Perl scripts together followed by smoothing and other cherry-picking using Matlab or, god forbid, Excel. As people ask for the data on a more frequent basis, our intrepid hero tries to come up with additional automation to make his report generation easier, with graphs e-mailed to him and other concerned parties on a regular basis. He quickly discovers that no one is reading his data-laden e-mails anymore, leaving poor Rudiger to announce conclusions that others could draw from simply looking at a graph provided for him.
What Rudiger doesn't quite realize is that people need to be able to feel like they can own data on their own and manipulate it so that it tells them a story, and not just the story that the graph Rudiger wants them to see tells them. In much the same way that many "technical" (absurdity!) stock analysts will generate multiple forms of charts rather than looking at the standard data provided by financial news sites, data consumers want the ability to feel they can draw their own conclusions and interact in the process rather than be shown some static information. There are several interweb startups based upon this very concept.
For those of you who haven't figured this out by now, I'm Rudiger. Rather than send out static graph after static graph that no one looks at, I learned a web language and threw together an internal website that allows people of multiple technical levels to explore information about virus outbreaks. While it is nowhere near as sophisticated as ATLAS, the service tries to emulate Flickr's content and tag navigation structure, where viruses are the content and tags are what we know about the specific threat. The architecture is easy to use and provides a low barrier to entry, as everyone knows how to use a web page. Also, the "friction" associated with the data is low, as anyone who is really interested can subscribe to an RSS feed which goes right to a web page on the virus; two mouse clicks versus pulling data from SQL.
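For the curious, here is a rough sketch of that content-plus-tags structure; the virus names, tags, and feed URL scheme are hypothetical stand-ins for the internal data:

```python
# Minimal sketch of Flickr-style navigation for malware reports: viruses are
# the content, tags are what we know about each threat, and every threat has
# its own feed. All names and URLs here are hypothetical.

from collections import defaultdict

viruses = {
    "Stration.K": ["mass-mailer", "zip-attachment", "windows"],
    "Storm.A": ["p2p-cnc", "social-engineering", "windows"],
}

# Invert the mapping so an analyst can click a tag and see every threat
# sharing that attribute.
by_tag = defaultdict(list)
for name, tags in viruses.items():
    for tag in tags:
        by_tag[tag].append(name)

def feed_url(virus_name):
    # One feed per threat keeps the "friction" low: subscribe once and
    # updates arrive without anyone pulling rows out of SQL.
    return f"https://reports.example.internal/virus/{virus_name}/rss"

print(by_tag["windows"])   # ['Stration.K', 'Storm.A']
print(feed_url("Storm.A"))
```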
I am generally more accustomed to writing English or algorithms rather than web code. Frankly, I hadn't produced a web app since PHP 3.x was the hotness. After consulting with some of my coworkers and my old friend Jacqui Maher, I decided to throw the site together using Ruby on Rails. With Jacqui on IM and a copy of Ruby on Rails: Up and Running in hand, I went from a cold start to a functioning prototype in about 2 weeks. I was pretty surprised with how far web development has come since 2000, as ad-hoc methods for presenting data from a table were replaced with formalized architectures integrated deeply into the popular coding frameworks.
Moral(s) of the story?: Reduce the cost and barriers to analyzing your own data. Put your data in the hands of the consumer in a digestible, navigable form. Remove yourself from the loop. Don't worry, you will still be valuable even when you aren't the go-to guy for generating graphs, as there is plenty of work to go around right now.
[Sidenote: The sad thing is I learned this lesson about reducing the burden of analyzing regularly generated data once before. The entire motivation behind a project I consulted on many moons ago, namely Sourcefire's RNA Visualization Module, was to provide attack analysts with an easy-to-absorb presentation of underlying complex data.]
May 31, 2007
Many of the customers I engage with at work have been struggling with how to identify and handle the botnet drones. Now, I am going to assume that everyone who either reads or stumbles upon this page has some understanding of botnets and their impact. Over the past several weeks, Estonians have become very familiar with the effects botnet-enabled DDoS attacks can have on everyday life. The networks are the prime source of spam. There is common agreement that yes, botnets are a problem and yes, they need to go away. Who should actually bear the burden of de-fanging these networks?
Disarming the actors behind these attacks involves dismantling the botnets themselves, which is itself an increasingly challenging problem. Older-style bots used IRC servers as a central command-and-control mechanism, making them vulnerable to decapitation attacks by security personnel. Newer systems use P2P-style C&C protocols adapted from guerilla file-sharing systems that are notoriously difficult to control. Other than traffic and content mitigation, which several organizations have proven to be extremely effective, the solution is to take down botnets node-by-node.
So who should eliminate botnets? End users don't feel responsible or even recognize that there is a problem; all they know is that they are using their computer and then someone comes along and tells them they are infected with a virus. Service providers (telephone and cable companies) with infected customers aren't really responsible, and pay the cost through outbound bandwidth charges and outbound MTA capacity, which is a relatively minor charge compared to what the people who are the targets of the attacks pay. Operating system vendors aren't responsible, because once they sell the product to the customer, they are no longer liable for if, when, or how the customer becomes compromised. Ultimately, the people who bear the largest cost are the ones who are least capable of remediating the source of the spam, namely the service providers of the attack recipients. These actors have to pay for bandwidth for inbound attacks, storage for spam, and support calls from their customers asking why their computer is slow when it is, in reality, a botted system.
In many ways, we have a classic Tragedy of the Commons-type issue. The communal grazing areas, or shared resources that were critical for the working class' ability to make a living, have been replaced by today's fiber lines. Currently the "tragedy" is managed by bandwidth providers through embargoes of one another: if one service provider gets out of line, the others will block all mail originating from the offender. Recently I have been pondering another possible solution, one based upon financial mechanisms.
While it would likely be impossible to implement, a Cap-and-Trade-style trading system seems extremely appropriate. Similar to carbon trading schemes, a cap-and-trade system for malicious content established between providers would create economic incentives to correctly monitor and reduce the volume of unwanted content that flows between their networks. The system would involve a cap on how much malicious content the parties would deem acceptable to send to one another. Providers who are able to better control the amount of malicious traffic, through expenditures on personnel and products, can recoup those costs through the sale of credits associated with the difference between their level of outbound malicious content and the agreed-upon cap. Providers who don't police their traffic are forced to buy credits from those who do, which in turn puts a price on their lack of responsibility.
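As a rough sketch of the accounting, with entirely made-up caps, prices, and volumes:

```python
# Minimal sketch of the cap-and-trade idea. Each provider is measured on
# outbound malicious messages; those under the agreed cap earn sellable
# credits, those over it must buy credits. All numbers are hypothetical.

CAP = 1_000_000        # agreed-upon cap on outbound malicious messages
CREDIT_PRICE = 0.001   # dollars per message of headroom

outbound_malicious = {
    "ProviderA": 400_000,    # invests in filtering and cleanup
    "ProviderB": 2_500_000,  # does not police its customers
}

for provider, volume in outbound_malicious.items():
    credits = CAP - volume              # positive -> sellable surplus
    cash_flow = credits * CREDIT_PRICE  # negative -> must buy credits
    print(f"{provider}: credits={credits:+,} cash flow=${cash_flow:+,.2f}")
```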
Eventually, the provider may choose to expose this cost of security to the end user, with rebates or special offers extended to users who keep their systems clean and never cause a problem. The end users in turn are incented to keep their machines clean, the Internet would return to the pre-fall-from-eden utopia that it once was, and the world would be a happy place once again.*
* Having providers buy into this concept, building a monitoring infrastructure, setting prices, assembling a market, and maintaining a clearinghouse for credit trades would be pretty damned hard. While I don't think this is a practical idea, it does make for a fun thought experiment.
June 1, 2007
Who cares if a spammer is arrested?
I was quoted in the USA Today regarding the spammer who was recently charged with multiple counts of being a general pain in the rear and being an accessory to being a pain in the rear. I talked to several reporters about this yesterday, and here are some of my soundbites which may or may not have made it into print:
• Spam, like most forms of organized crime, is too profitable to end by arresting single individuals.
• This arrest solves a spam problem from four years ago, and not today's issues.
• The spammers are manipulating equities markets and compromising financial accounts. Anti-spam regulations are the least of their concern.
June 9, 2007
Testing shows AV sucks.
The sorry state of industry-accepted anti-virus tests is gathering some attention in the technical hobbyist's press. New, independent testing organizations are getting into the act as well. Eventually someone in the mainstream press will pick up on the topic, but I doubt that it will lead consumers to purchasing AV products that actually work. Viruses and trojans have become far better at hiding their presence from the end user; unlike 10 years ago, we rarely hear about systems being wiped out by a virus. Most infected consumers don't realize it, and may not feel they need to remediate the issue. After all, if their mp3's aren't being deleted, who cares? Infected systems affect people around the user more than they do the user him/herself. The spam they send out goes to other people, and not to their own inbox.
Sidenote: I am very surprised at the low numbers quoted by av-comparatives for Kaspersky's scanner.
June 10, 2007
Quote on Defense Technology
“No single defensive technology is forever. If they were, we would all be living in fortified castles with moats.”
-- Michael Barrett, CISO @ Paypal; via an article by Brad Stone on CAPTCHAs.
June 13, 2007
Second Life explains Defense in Depth
This has to be the greatest thing I have ever seen on youtube. Link found on Schneier's Blog.
June 15, 2007
DEFCON and the TCP/IP Drinking Game
I will be standing in for Mudge for this year's TCP/IP Drinking Game. Drop me a line if you have suggestions for panelists.
June 19, 2007
Baysec 2 - The Baysec-ening Tomorrow, June 20th!
For those of you who live in the Bay Area, tomorrow is the second monthly meetup of security professionals known as BaySec. This month's will be held at 21st Amendment, located on 2nd between Brannan and Bryant in San Francisco. Much thanks go to Nate Lawson for promoting this event!
August 9, 2007
The Security Innovator's Dilemma (Part 1)
The most common themes I heard during this year's BlackHat conference were driven by the implications of the underground economy. Monetization of the attack space has dramatically changed how the information security community handles emerging threats. Practitioners no longer talk about 100% effectiveness and other meaningless metrics and instead focus on minimization of harm. I have been toeing this line myself for some time, and I would like to share with you the general framework in which I think about security in this current context.
Five years ago or so, Dan Geer and several others put forth the concept that the root cause of infosec issues was the monoculture of Microsoft systems. No longer a controversial idea in the community, the statement caused a gigantic uproar at the time, leading to Dr. Geer's departure from @Stake. The paper was a milestone for those working in the security economics field, as its basic postulate linked the creation of individual exploits to the value that can be derived from an exploit. In other words, people exploited windows because their work would create far more value for the author, as it could be applied to the vast majority of computer systems in the world.
We can formalize this concept as a zero-sum non-cooperative game. Consider two players, the Attacker (A) and Defender (D). A and D can either attack/defend one of two classes of system, denoted 1 and 2. Systems 1 and 2 cover assets valued at v1 and v2. A given system may be the entire class of Microsoft OS's, a class of messaging technologies (e-mail vs. SMS), processor architectures, Anti-Virus products, etc. The value associated with a class of systems is what the attacker assumes the monetization rate to be for that class of products: a block of ATM machines versus several hundred spam generating home computers. I digress.
During each iteration of the game, the defender can invest his energy into defending either of the two systems. If the defender chooses the same system n as the attacker, then he has a probability p of success, giving the attacker an expected payoff of (1-p)vn. If the attacker and defender choose different systems, then the payoff to the attacker is vn, as the system is undefended.
One of the implications of the model is that there are situations where it is never the best decision to attack the system that covers the least assets, even if it is undefended. If we consider two system classes n and m, if the value of attacking the defended system is greater than that of attacking an undefended system ((1-p)vn > vm), then the strategy of attacking vn strictly dominates the strategy of attacking vm. In other words, a rational attacker will ignore an unprotected system if he or she can profit by attacking a far more valuable but defended system.
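Here is a minimal sketch of that dominance check, with hypothetical asset values and defender effectiveness; it illustrates the condition above rather than a full treatment of the game:

```python
# Attacker's expected payoff: (1-p)*v if the target is defended, v if not.
# Attacking the big system dominates attacking the small one whenever even
# the worst case (big system defended) beats the small system's best case.

def attacker_payoff(value, defended, p):
    return (1.0 - p) * value if defended else value

def attack_big_dominates(v_big, v_small, p):
    return attacker_payoff(v_big, defended=True, p=p) > v_small

# e.g. a Windows-sized target vs. a niche platform, defender stopping 80%
print(attack_big_dominates(v_big=0.95, v_small=0.05, p=0.80))  # True
print(attack_big_dominates(v_big=0.60, v_small=0.40, p=0.80))  # False
```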
This appears to be a validation of the concept of software diversity, but I consider this model to be interesting for a very different reason: it effectively segments the market for both attacks and defenses based upon what I call quantifiable rationality, or whether or not someone can put a dollar value on the work that is being done. Attackers and defenders who choose to go after systems which are either minimally valued or difficult to value are doing so for publicity, which is notoriously difficult to economically quantify, or expectations that the future will shift the relative valuations of the protected systems. Likewise, attackers and defenders who focus on the highest valued systems are the same individuals who are able to truly quantify their market. Consider the iPhone browser vulnerabilities and SMS spam, Hypervisor Rootkits and Detection and actual working AV Technologies, and Network-layer Firewalls and Application Layer Protections: each of these pairings consists of a concept that dominates either the mind-share or the security market, while the problems that cause true financial pain points remain unaddressed.
As we will see in a later post, the two halves of the security market act in very different ways, necessitating different technologies and business practices.
August 14, 2007
Next Baysec: Next Monday! (8/20)
Nate Lawson has posted the next Baysec date. Hint: it's Monday the 20th.
August 26, 2007
Monocultures Abound
Everyone probably saw the two items I'm mentioning, but if Windows Update == a DDoS against Skype, then you've just proven the monoculture conjecture. Similarly, if you can slow down the entire Internet with a 9mm, then you've just proven the fragility conjecture.
-- Dan Geer on the DailyDave mailing list. Via Ralph Logan.
September 8, 2007
If you are involved in enterprise software development, you have heard of the Gartner Magic Quadrant. The vendor is charted on "completeness of vision", or how well they understand where the space is going, and "ability to execute", how well they can complete that vision. Organizations that have both are in the "Magic Quadrant."
Enterprise software vendors have MarCom groups which are almost solely tasked with putting their company in the magic quadrant. This usually has nothing to do with the quality of the solution or the vision of the organization, but with the MarCom group's ability to sell people on the solution or on the organization's vision. If the group successfully puts the organization in the Magic Quadrant, they all earn their bonuses for the year.
As a result, I hereby re-christen the Magic Quadrant the Gartner G-Spot.
Keep that in mind every time people panic over Gartner ratings, and you will feel a little better.
November 29, 2007
Security Implications of "Two Chicks, One Cup": not a joke.
I can't believe I am writing this... but...
Several weeks ago, a video entitled "Two Chicks, One Cup"/"Two Girls, One Cup" was posted on the Internet. To be mercifully brief, it is a segment of film clipped from a coprophilia porn. The emergence of this video has been regarded as Web2.0's goatse and tubgirl, or single images of... let's say sexual activity that is several sigmas away from the norm. Exposure to this remarkable product of human ingenuity has increased dramatically since BoingBoing, one of the most popular blogs on the net, started referring to it recently. Reaction videos of people watching the video for the first time are more popular than the video itself. This meme seems to have a good bit of life in it, as semi-professional spoofs (sfw) and follow-up videos, such as "Two Chicks, One Finger" (NO WAY SFW), have started appearing on the net.
Websites have started sprouting up that claim to host the video, but actually host malware. If you attempt to search for either "Two Chicks(Girls), One Cup" or "Two Chicks(Girls), One Finger", you may end up at malware sites like these. This is similar to the codec attacks recently described by Sunbelt Software. I am concerned that... I can't believe I am writing this... security vendors will be loath to post warnings regarding malicious versions of the content because the content itself is so wretched. Users who become infected won't want to admit they were infected with malware while watching two women smear each other with shit and/or vomit.
You have been forewarned. Now I have to go bleach my eyes.
December 5, 2007
Next BaySec Tomorrow, 6 Dec 07
Yep, BaySec is tomorrow night, December 6th at 7pm. See you at O'Neils.
December 8, 2007
Symantec marketing is getting better.
This Norton Fighter shtick is far better than the Symantec Revolution jingle.
December 22, 2007
Army Fights the Monoculture
A colleague from graduate school, Nick Kirsch, sent me this article that discusses the military's plan to incorporate Macs into their IT infrastructure. He said it was evidence that someone read my thesis, but I suspect Dan Geer's work had far more influence.
January 14, 2008
BaySec: Thursday, 1/17 @ Pete's Tavern
BaySec is going to be at Pete's Tavern this month, just down the block from O'Neill's. NYSEC (NYC) and BeanSec (Boston) are on Tuesday and Wednesday, respectively. I dare you to hit all three.
January 19, 2008
Yes, Virginia, there is a Santa Claus SCADA Attack
Long-predicted attacks against infrastructure control systems (SCADA) have arrived, according to the CIA. Bejtlich doubts its authenticity, but I have every reason to believe it to be true for the following reasons:
• Bellovin correctly pointed out that maintaining the air gap between critical networks and non-critical networks is nearly impossible, making the likelihood that at least a few critical networks are somehow connected to the public internet extremely high. Information behaves like heat, in that it leaks out unless tightly constrained, like hot coffee in a dewar flask.
• My old business partner Ralph Logan was quoted in the article. Given the work we did together and the work that he does now, I consider him to be an absolute authority on the topic.
• The early monetization techniques employed by attackers whenever they discover a tool are usually extortion-related schemes. The first botnet business model was based upon DDoS extortion, where victims were taken off of the network if they didn't pay the attacker protection money. Here we have attackers demanding protection money in exchange for not taking down the power grid. Botnets evolved into spam and phishing engines. I am willing to bet that the next step in the racket will involve selling the attacks to nation states now that infrastructure attacks have been reduced to practice.
February 7, 2008
RIAA's Slippery Slope
Gizmodo reported today that the RIAA has been asking for the AV vendors to filter for pirated content. You are walking down a slippery slope if you conflate arbitrary content filtering with information security. Anyone who is savvy enough to bootleg media is also savvy enough to disable their AV filter, which would quickly cause the system to become compromised. Additionally, these users are not likely to detect any infection that does occur, leading to yet another system sending out spam and malware. In short, BAD IDEA.
I want names.
Holy Jesus, who is responsible for this:
Having CheckPoint sing the first few lines from My Way is hilarious.
Via the Hoff.
February 27, 2008
ISOI4
I will be at ISOI 4 presenting a completed and extended version of this discussion. Slides to follow...
February 29, 2008
I am so Web 2.0 I am Web 2.5
Keynote apparently allows you to send your slides straight to YouTube, so here are my slides for today. I also opened up a Twitter account after hearing about its use amongst the other security bloggers.
Note: slides are back!
March 12, 2008
Wireless attack against a heart device? Duh.
So someone announced a wireless attack against an implantable cardiac device. While it does make for good press, I can see many valid arguments against the required remediation step, namely authentication of cardiac device programmers. Authentication of the cardiac programmers may impede use of the programmers in an emergency by an ambulance crew, for example. Additionally, key revocation would require surgery. This would be bad news. Long story short: interesting class of attacks, but don't freak out about it to your cardiologist; you could give yourself a heart attack that way.
As a side-note, medical devices have a long history of spoofing attacks, though. I do remember Joe Grand built a Palm Pilot program to control IV drug infusers maybe a decade ago.
March 14, 2008
Dan Geer's SOURCEBoston Keynote
I am moving my RSS Feed over to FeedBurner. Please tell me if it breaks for anybody.
Okay readers, if you are reading this via RSS, the cutover to the FeedBurner Feed should be complete.
March 15, 2008
SOURCE Boston 2008 Wrap-up
SOURCE Boston 2008 was a huge success. We could not have hoped for a better outcome from a first-year conference. The conference hit great niches, namely application security and the business of security, as evidenced by our attendees' responses.
Some important points:
• Dan Geer's talk made my trip. In what was probably the most intellectually stimulating hour I have had in a long time, Dan examined the current and future state of network security leveraging lessons from evolutionary biology and economics. It is a must read.
• The L0pht panel was hugely successful, and it was probably the first time I have seen a standing-room only crowd at the last talk of a conference. Here are some solid pictures of the event.
• All the attendees had a blast, as evidenced by multiple Flickr photo pools.
• Twitter was the communication mechanism for the conference. Jennifer Leggio herded the numerous security cats into using it, and it worked extremely well. She has been continuously updating a list of security twits, many of whom you may know, if you want to get into the game. Here is my feed.
March 17, 2008
Best meme to come from SOURCE Boston...
Certified Pre-0wned. Think malware-infected picture frames.
March 18, 2008
Macs and AV Software
Mogull published an article on TidBITS discussing the issues surrounding Mac AV. It is a solid read. I threw some quotes his way based on some of my recent game theory work.
USA Today, KOMO Interview...
I was interviewed in the USA Today this week, along with friends Rick Wesson, Jose Nazario, and a large group of security researchers who are all far more intelligent than myself. The article led to a radio interview for KOMO 1000 in Seattle. I slapped a photo onto the interview and voila, web 2.0 magic:
March 19, 2008
We will pay you to host malware.
Apparently InstallsCash's business model is to pay people to host malware. Fantastic. Thanks to a friend for the heads up.
Keynoting 2008 MIT Spam Conference
It appears that I will be giving the keynote of the 2008 MIT Spam Conference. Drop me a line if you will be in attendance.
March 22, 2008
Unusual blog spam vector exploited
Security Blog MCWResearch was hit by a large amount of spammy posts over the past day. It turns out the blog allowed posting via e-mail, and this feature has subsequently been disabled. I wouldn't be surprised if we see an enterprising spammer search for populations of e-mail-to-blog gateways; they can use their preexisting infrastructure to push spam in a new direction. Remediation for the population would be trivial, as e-mail-to-post functionality is not critical for the functioning of blogs.
Lesson learned: don't allow unauthenticated access to services unless you are required to do so (inbound MTAs, public web servers, etc).
March 27, 2008
Social Network Phishing
Phishing doesn't just happen against banks. It also hits social networks, including MySpace and Facebook. Phishing only occurs if the target can be monetized; in other words, the phishers have to make money. Early social networking phish were likely extensions of the ransomware methodology, where money would have to be exchanged for the account to be turned back over to the phished user. Nowadays these phished accounts are being used to send spam and phish to social network users, propagating the problem.
March 29, 2008
MIT Spam Conference 2008 Followup
Here are my slides from the spam conference keynote I mentioned earlier. These are a refinement of the ISOI slides I posted back in February.
It seems that SlideShare produced far nicer results with this type of content than YouTube.
April 1, 2008
Security Marketing: Hugs for Hackers
AVG's Hugs for Hackers is definitely less mean-spirited than Palo Alto's Security Idol.
Applied Security Visualization gets a bookcover
Raffael Marty's upcoming book, Applied Security Visualization, now has book cover art.
Judging from the cover art, I think the book has something to do with applied security visualization and dinosaurs with targeting reticles etched into their eyeballs.
April 2, 2008
CEAS CFP Extended
If you were planning on submitting a paper to CEAS, the Conference on E-Mail and Anti-Spam, you now have a few more days. Although it is not yet reflected on the website, the CFP has been extended to April 10th.
April 3, 2008
Biological Niches and Malware Prevalence
During a recent presentation I was asked a rather astute and interesting question. The audience member compared the information security world to the biological world, and wanted to know why, when parasites fill every biological niche in the ecosphere, the niche of Macs has not been infested with malware. I have now forgotten what I said in response, but I do remember thinking at the time my answer was bullshit.
The correct answer is as follows: The biological analogy frays at the edges when you consider monetized malware. Parasites inhabit every biological niche because their only goal is to propagate the species, not to be the biggest species out there. Malware writers' goal is to make the most money, and they will spend their energy creating attacks that allow them to make the most money. The motive of profit maximization causes them to abandon portions of the target space entirely. In terms of the biological argument, consider a parasite that was not rewarded for continuing its species, but instead was rewarded for the number of infected hosts. If the parasite had the opportunity to make the split decision between producing offspring that can infect coelacanths or infect beetles, which would be the better strategy?
April 5, 2008
This article on the potential emergence of Macintosh malware appears with auspicious timing.
April 7, 2008
RSA hates the Irish
Because I have an apostrophe in my last name, I attempt a SQL injection attack every time I fill out a form. The RSA conference is aware of this, and requires everyone who has an apostrophe in their last name to stand in a separate line. Apparently they have not yet learned that it is possible to secure a webapp against the dreaded ' without blacklisting the content.
I find this to be equivalent to segregation against those of us who have apostrophes in our name, and by the principle of transitivity, RSA is attempting to segregate out the Irish without posting an "Irish Need Not Apply" sign. Mark my words, first they will come for our crypto keys, and then they will come for our potatoes.
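For the record, the fix really is trivial: bind the value as a parameter instead of splicing it into the SQL string. Here is a minimal sketch using sqlite3 and a hypothetical table, not whatever RSA's registration system actually runs:

```python
# Handle the dreaded apostrophe without any blacklisting: let the driver
# bind the value. Table and column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendees (last_name TEXT)")

last_name = "O'Donnell"  # apostrophes welcome

# Wrong: string concatenation turns the quote into a syntax error (or worse):
# conn.execute("INSERT INTO attendees VALUES ('" + last_name + "')")

# Right: the parameter is escaped by the driver, so no blacklist is needed.
conn.execute("INSERT INTO attendees (last_name) VALUES (?)", (last_name,))
print(conn.execute("SELECT last_name FROM attendees").fetchall())
```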
April 8, 2008
Maybe RSA doesn't hate the Irish.
Bono was walking the RSA floor last night. He was there for Nokia, which rocks security apparently. I guess RSA doesn't hate the Irish too much.
April 10, 2008
RSA still hates the Irish.
Nokia, the phone company that doesn't do security but does OEM SourceFire and CheckPoint technology, brought in the fake Bono.
Let's say you are a startup and you choose to use the Google App Engine for your infrastructure. If Google buys you out, they don't have to port the code. They directly quantify your company's technology opex and revenue, since they see both the CPU overhead and the eyeball count via Google Analytics. Brilliant.
April 11, 2008
Malware shifts and value chains.
Amrit Williams is calling me on predicting malware emergence. His assertion is that by the time AV improves enough to push attackers onto Macs at their current market share, attackers will shift to another layer altogether and abandon the idea of monetized malware. I had always assumed that the value chain established by attackers would be largely preserved, but he may be right: there could be a point where AV is so good that attackers will just move to popping webmail accounts and routers rather than attacking client systems. Now wouldn't that be nice.
April 14, 2008
What the hell have I been doing? Part $e^{j\pi}$
I just submitted an article for IEEE Security and Privacy and spent the past week attending RSA. I did do a podcast for Schwartz PR during their RSA party that is available here.
April 17, 2008
How Storm Communicates
Thorsten Holz and team put together a fantastic paper on how the Storm Worm communicates and how it can be infiltrated. Thanks go to Jose Nazario for the heads up.
April 23, 2008
Storm Defeated?
Apparently if you have kernel-level and below control of every Windows PC out there, you can pull out a botnet infestation. Let's see how long it takes for either the botters to be caught or for a new infection to come out that disables Windows Update. Thanks go to Bryan and Jose for the heads up.
April 29, 2008
Kraken Reveng
There is a solid writeup by Pedram Amini @ TippingPoint on the Kraken RevEng here and here. Thanks to Richard Soderberg for the heads up.
May 3, 2008
Spam is now 30.
Spam is now 30. Frankly, if spam still bothers you after all this time, buy a better filter.
May 12, 2008
Processing ported to Javascript.
The domain-specific visualization language Processing has been ported to Javascript. This is a "good thing". Thanks to Raffael Marty for the heads up.
Also, this is my 100th post.
May 22, 2008
Game Theory of Malware article online.
I wrote an article on the game theory of emergent threats that is now online. It is based on presentations from earlier this year. You can grab the article here.
As a sidenote, Amrit thinks that security people like to talk about game theory because they like to play video games. I of course strongly disagree. I will have you know the only video game I still like to play is portfolio explosion on e-trade.
May 30, 2008
Hoff had a post about a VirtSec startup known as Hyperbole. Their product/feature names include such gems as HyperTension, HyperSensitivity, and HyperVentilated.
...
All I can think of is some CSO three years from now muttering "I bought HyperTension and all I got was hypertension."
June 9, 2008
Amrit: iPhone creates mobile malware tipping point.
Amrit Williams gets the first post on how the iPhone 2.0 will create the domestic mobile malware tipping point. What is a malware tipping point you may ask? Well, you can read about that here.
Sidenote: I believe SymbianOS 2nd Edition may have created the international one some time ago.
June 17, 2008
The highest form of flattery.
Juan Caballero, Theocharis Kampouris, Dawn Song, and Jia Wang published some interesting extensions at this year's NDSS of the work presented by Harish Sethu and me at CCS '04.
Both papers examine the software diversity problem, the idea that networks of systems would be more secure if they minimized the number of possible common-mode faults by running different software and operating systems, and relate it to the graph coloring problem. The thesis of both papers is that software diversity can be improved by using graph coloring algorithms to maximize diverse software allocation.
The implication of this post's title is only in jest, as I am incredibly happy to see our idea extended by the research community.
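For readers who have not seen the graph coloring framing before, here is a minimal sketch over a hypothetical four-host topology; it illustrates the idea, not either paper's actual algorithm:

```python
# Hosts are vertices, edges connect hosts that can directly attack one
# another, and "colors" are software packages. A greedy coloring keeps
# neighbors on different software, limiting common-mode faults.

neighbors = {
    "mail":   ["web", "db"],
    "web":    ["mail", "db", "backup"],
    "db":     ["mail", "web"],
    "backup": ["web"],
}
packages = ["os_a", "os_b", "os_c"]

assignment = {}
for host in neighbors:
    taken = {assignment.get(n) for n in neighbors[host]}
    assignment[host] = next(p for p in packages if p not in taken)

print(assignment)  # {'mail': 'os_a', 'web': 'os_b', 'db': 'os_c', 'backup': 'os_a'}
```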
June 18, 2008
Next BaySec: 6/19/08
The next BaySec will occur tomorrow night, 6/19/2008 at Pete's Tavern in San Francisco. Thanks to Ryan for setting it up and to Nate for passively reminding me to blog it.
June 20, 2008
Best Security Marketing Video Ever.
Kaspersky did this:
Hats off to Ryan Naraine for finding it.
June 26, 2008
If you liked the game theory stuff...
... nominate the work for a pwnie. I won't nominate my own work (tacky), but I am not above shilling my own work (only slightly less tacky).
July 6, 2008
Spammers went after Twitter pretty hard this holiday weekend using the "friend invite" model that was first developed against other social networking services. Briefly, the attack involves creating a large number of spammy profiles and then inviting people to view the spam by performing a friend request, or in Twitter's case, "following" the spam target. I have included screenshots of a few of these attacks.
An individual can remediate this attack in the short term by disabling e-mail notifications of people following you. This is by no means an optimal solution. The only party that can really address the situation is Twitter, through a combination of blacklisting, throttling, CAPTCHAs, and content analysis.
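As a rough sketch of what the blacklisting and throttling half of that combination might look like on the service side, with hypothetical thresholds and account names:

```python
# Minimal sketch of service-side mitigation: drop follow requests from
# blacklisted accounts and throttle accounts that follow too quickly.
# CAPTCHAs and content analysis are left out for brevity.

import time
from collections import defaultdict, deque

BLACKLIST = {"cheap-meds-4u"}      # known spammy profiles (hypothetical)
MAX_FOLLOWS_PER_WINDOW = 20
WINDOW_SECONDS = 3600

follow_times = defaultdict(deque)  # account -> timestamps of recent follows

def allow_follow(account, now=None):
    """Return True if this follow request should be allowed through."""
    if account in BLACKLIST:
        return False
    now = time.time() if now is None else now
    window = follow_times[account]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_FOLLOWS_PER_WINDOW:
        return False               # throttled; a CAPTCHA could kick in here
    window.append(now)
    return True
```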
July 8, 2008
I guess this is easier than making a little graphic of your e-mail address. The attack surface for reCAPTCHA is pretty large at this point, and web page scraping is not the only means by which a spammer can grab your address, leading me to question how effective this will be for keeping your inbox clean. Thanks to Jennifer for the heads up.
July 10, 2008
Westside!
I was interviewed by SC Magazine's Dan Kaplan on the value of education in the security industry and its associated interpretation on both the west and east coast.
July 14, 2008
CoverItLive Event on Social Networking Security
I will be co-hosting a live blogging event on social networking security tonight with Jennifer Leggio on CoverItLive. You should be able to view the content in the horrifying iframe below here:
Thanks go to Plurk's Plurkshops for sponsoring the event.
Attackers hit close to home.
My wife Sophy's Gmail account started spewing spam this morning to everyone in her sent mail folder. Given that my wife has been working in technology for about as long as I have been in information security, and specifically three years in anti-spam, I was both slightly intrigued and rather miffed when I received the following message in my inbox:
If this were a PC laptop, I would chalk this up to a desktop compromise. There has not been a significant number of reports of OSX malware that does address book scraping, making this possibility rather remote. I had Sophy immediately change her password, and then we took a look at her Gmail account activity history:
If we take a closer look at 123.12.254.155, we can see the IP doesn't exactly reside in San Francisco:
route: 123.8.0.0/13
descr: CNC Group CHINA169 Henan Province Network
country: CN
origin: AS4837
mnt-by: MAINT-CNCGROUP-RR
changed: [email protected] 20070111
source: APNIC
I am pretty certain that neither of us were in China this morning, and at this point I was certain that her desktop was safe, as the compromise likely affected her webmail account only. I later discovered that Sophy had used similar passwords on multiple websites, leading me to believe that one of the many websites she accessed was compromised, handing the attacker a legitimate Gmail login (her username and password).
The moral of the story is that you absolutely have to use a different password for each and every website you use, or at least cluster your accounts based upon attack propagation tolerance. In other words, you can use the same password across multiple junk message boards, but doing the same across multiple financial websites would be Bad.
Oh, and the attackers didn't just send spam from her mail account, they also deleted all her mail on Gmail. Because Sophy maintains backups of her mail, a potentially stressful day was avoided. Oh yeah, that's the other moral of the story: keep backups.
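For the curious, here is a minimal sketch of the clustering idea, with hypothetical site names: high-value clusters get a unique password per site, and only the junk tier is allowed to share.

```python
# Cluster accounts by attack propagation tolerance: a compromise of any
# "junk" site should not hand the attacker anything that works elsewhere.

import secrets

clusters = {
    "financial": ["bank.example.com", "broker.example.com"],   # unique each
    "email":     ["mail.example.com"],                         # unique
    "junk":      ["forum.example.org", "recipes.example.org"], # may share
}

passwords = {}
for cluster, sites in clusters.items():
    if cluster == "junk":
        shared = secrets.token_urlsafe(12)  # acceptable to reuse here
        for site in sites:
            passwords[site] = shared
    else:
        for site in sites:
            passwords[site] = secrets.token_urlsafe(16)  # never reused

for site, pw in passwords.items():
    print(site, pw)
```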
August 5, 2008
Vegas
I will be in Las Vegas for the Blackhat and Defcon conferences this week. I hope to see you all there!
Defcon TCP/IP Drinking Game
I will be hosting the Defcon TCP/IP Drinking Game again this year. Drop by Friday night to see your favorite information security experts make fools of themselves.
August 9, 2008
I have been at BlackHat/DefCon since Tuesday, and I have been slightly out of the loop on some recent security events. Coincident with the presentations on social network security and new XSS attacks against MySpace, reports of a worm hitting MySpace and Facebook started trickling in via SMS messages from our team back at the office. My initial concern was that this was a full-blown Samy-style worm hitting both social network sites, and some of my comments were oriented towards this threat.
It turns out that the MySpace/Facebook worm was less a worm and more a standard malware-push technique. Rather than having malware infect a system to send spam to other users that enticed them to install the same malware, the authors had the malware hijack MySpace and Facebook profiles on login by the user, spamming their friends with a malware download pitch. Basically this ends up being a hybrid worm, one that requires more than just pure browser attacks, like XSS and CSRF, to propagate. Good show, spammers.
The interesting part of this incident is that attackers, the media, end users, and vendors are focusing on this as a social networking story and not a desktop malware story, when it is equal parts of both. It is further evidence to me that desktops are being considered by home users to be nothing more than browser containers, with their activities being almost completely focused around a handful of major (social) web properties.
August 11, 2008
What a difference a word makes.
I enjoy talking with reporters, and I do so quite frequently. It is part of my responsibilities at Cloudmark. Thankfully, most of the guys I talk to on a regular basis are extremely responsible, detail oriented, and diligent about the facts, because a single omitted word can radically alter the meaning of a phrase.
Chris Hoff, a very well seasoned speaker and media contact, is now experiencing the repercussions of such an error. By dropping the word "security" from the phrase "Virtualizing security will not save you money, it will cost you more.", a reporter changed Hoff's statement from a negative statement about virtualized security to a negative statement about his employer. As you can imagine, this has caused a massive headache for Hoff and his employer.
The only way to fix any misquote in the current media climate is to generate corrective content early and often, as I am doing with this post.
August 12, 2008
The web has started commenting on Twitter's decision to limit the number of accounts that a given user can follow. Having a hard limit is a smart move for multiple reasons. Not only does it allow you to more finely bound the computational load of the message-passing architecture, it also negatively impacts only two groups, namely spammers and the obsessive-compulsive.
This is a good first step that I have pointed out in an interview once before. I suspect that Twitter will also be working on a throttling policy as well as an IP and content blacklisting technology as follow-on mechanisms to continue to battle spam.
September 8, 2008
ZDNet
I am now also blogging for ZDNet.
March 30, 2009
Back from ZDNet, but soon a new home.
Blog banner
After seven months of blogging at ZDNet, I am back to the personal blog. The fall-off in advertising revenue across the media space has necessitated cutbacks, and my spot on the security beat was axed.
I won't stop generating content, but I am not quite sure where it will be hosted right now. I will update you as soon as I find out.
In the meantime, here is a full list of posts I have authored on ZDNet, and I hope to see many of you at RSA. Also, here is my updated RSS feed.
Take care.
April 9, 2009
Conficker wakes up to push spam and... scareware?
The Conficker worm has woken up to... drumroll please... push fake antivirus products and spam from an older piece of spam-generating malware. It appears that like many Bay-area startups, Conficker is long on technical ability and short on innovative business models.
I am not trashing the MMBA (Malware MBA)'s ability to extract money from criminal activities. There really are only a handful of ways malware authors have shown they can successfully make money: they can sniff keystrokes, send spam, DDoS websites, or re-sell access to their software and machines to do the same work. However, for all the hype that surrounded the worm I expected something far more sophisticated.
The story for the average consumer is pretty basic. First off, you should not be using any anti-virus software that magically pops up on your system that you have never heard of before. If you are reading this website, chances are you already know this. The spam engine sounds like a ripoff of older technology, so we should expect no dramatic shift in spam mutation techniques. We should expect an increase in spam delivered to people's inboxes due only to the increase in the volume of spam transmission attempts.
Then again, while it is unprofitable, tomorrow the Conficker writers could push down a DDoS package and melt the Internet. This isn't alarmism, it is just what is possible when a single group controls a very large botnet.
April 11, 2009
85% to 95% of all e-mail is spam? Yeah, that makes sense.
There is only one security problem that the average consumer will get visibly angry about, and that is spam. Well, that and identity theft, but spam ranks pretty far up there. When I tell people I work in anti-spam as my day job, I get a pat on the back and a comment about how they can't believe how much spam there is in their inbox. To reinforce what we already know, security companies publish statistics claiming that, depending upon the day of the week, 85% to 95% of all e-mail is spam. While this number is seemingly unbelievable, I can guarantee that it is correct. How did we get to the point that approximately 9 out of every 10 e-mails is spam? Paradoxically, the reason why we have so much spam is because our anti-spam is so incredibly effective today.
To understand why this number is not really that shocking, it is helpful to think of spam not as a singular entity but as a living, evolving creature that has responded to spam filters in new and unique ways. Let's imagine you are at a cocktail party in a nearly-full room with a number of people having a good time. As the evening progresses, the ambient noise in the room gets progressively louder. People respond to the increasing loudness in the room by straining their voices, and eventually the room is a 70dB cacophony of random chatter. The same kind of relationship exists between spam filters and spammers.
Spammers want to be heard, and will accept a certain rate of response to their content. Before the days of ubiquitous spam filters, they would generate content at a far lower rate, since they were getting responses at that rate. As decent spam filters became standard operating equipment on the Internet, the spammers needed to change their game to continue being heard. They did this by mutating their content and sending spam from more locations, resulting in a higher rate of delivery attempts. Again, anti-spam responded with better filters that looked at both content and the IP address of the sending systems, and the spammers responded in kind by pushing their mutation rates and transmission rates further up, thus leading to these almost unbelievable spam rates.
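The arithmetic behind the arms race is simple enough to sketch with made-up numbers: if spammers will settle for a fixed number of responses, the volume they must send scales inversely with the delivery rate, so better filtering pushes the spam share of total mail toward 100%.

```python
# Hypothetical figures only; the shape of the curve is the point.

LEGIT_MAIL = 10_000_000    # legitimate messages per day
RESPONSES_NEEDED = 1_000   # responses the spammer will settle for
RESPONSE_RATE = 1e-4       # responses per delivered spam

for block_rate in (0.0, 0.90, 0.99):
    delivered_fraction = 1.0 - block_rate
    sent = RESPONSES_NEEDED / (RESPONSE_RATE * delivered_fraction)
    spam_share = sent / (sent + LEGIT_MAIL)
    print(f"filter blocks {block_rate:.0%}: spam is {spam_share:.0%} of all mail")
```

With no filtering, this hypothetical spammer sends 10 million messages and spam is half of all mail; block 99% of it and the same spammer sends a billion messages, putting spam at 99% of traffic.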
April 19, 2009
Have we reached the Mac Malware tipping point yet? Eh... maybe?
The technical media is all a twitter over what appears to be the emergence of the first mac botnet. The infector appears to be an updated version of the trojaned copy of iWork that popped up earlier this year. Anyone who has worked as a Windows virus analyst would scoff at the relative lack of sophistication exhibited by the malware, but nevertheless, it is a piece of malware, and it is out there. I wanted to take this opportunity to answer some of the most common questions people have about mac malware.
Does this mean that Mac users should rush to buy anti-virus software and expect their machines to end up as compromised as a PC? Probably not, but soon. For now, as long as you aren't downloading pirated software you are safe.
Does this mean mac malware is going to become endemic? Yes. If no one is running anti-virus, then there is nothing to clean up infected systems beyond end-of-life hardware replacement. Given the state of the economy and mac hardware longevity, that can take a very long time.
Does this mean we hit the mac malware tipping point? That I don't know. We can't say that we have reached the mac malware tipping point unless we come up with a definition for the tipping point itself. Dino Dai Zovi and I have been kicking around a potential "warning sign" that, when seen, indicates we are now in the mac malware epidemic state. Our current preferred indicator is the emergence of websites that perform drive-by exploits of the browser to install botnet-controllable malware, regardless if the exploit is a zero-day attack or not. In other words, when we see what happens every day on the PC side happen once on the Mac side, then we all need to run out and buy anti-virus software.
Some time ago you predicted that mac malware would hit its tipping point at 15%. Does this mean you are wrong? Well, my prediction was based on the difficulty of attacking a PC versus the market share of a Mac. I assumed that the difficulty of attacking a PC was strictly defined by the effectiveness of current anti-virus products on a new piece of malware. My back-of-the-envelope estimate put an attacker's success rate at compromising a PC at around 20%, which meant that Macs would have to reach around 16% market share before they attract the attention of serious malware authors. If the real success rate of an attacker is lower, then you should expect a mac malware epidemic far earlier. So the answer is: maybe I'm wrong, but I don't know yet.
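For those who want to see the back-of-the-envelope math, here is a minimal sketch; the 20% figure is the same rough estimate mentioned above, not a measured value:

```python
# A rational attacker switches to the Mac once an undefended Mac attack
# pays better than a partially-defended PC attack:
#   mac_share * 1.0 > pc_success_rate * (1 - mac_share)
# which rearranges to mac_share > pc_success_rate / (1 + pc_success_rate).

pc_success_rate = 0.20  # rough chance a new piece of malware beats PC defenses

tipping_point = pc_success_rate / (1.0 + pc_success_rate)
print(f"Mac market share tipping point: {tipping_point:.1%}")  # ~16.7%

# If PC defenses are actually weaker than assumed, the threshold drops:
for rate in (0.10, 0.20, 0.30):
    print(f"PC success rate {rate:.0%} -> tipping point {rate / (1.0 + rate):.1%}")
```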
In short, the story for mac malware hasn't changed this week contrary to popular opinion. However, as both users and as information security professionals, we need to remain vigilant and watch for the tipping point in mac malware, and use that as the trigger to install Mac AV software.
April 22, 2009
Breaking down the "electric grid is vulnerable" stories.
We have been seeing an increasing number of stories on the vulnerability of our electric grid to outside attackers, but determining whether or not these stories are legitimate is exceedingly difficult. The reports are, understandably, short on facts and real metrics and long on anonymous quotes, speculation, and recriminations from the various involved parties. We may not be able to discern the true nature of the threat against our power grid, but we can figure out the right questions to ask so we can cast a more critical eye on the various news reports.
When the media claims that the electric grid is compromised out the wazoo, it is important to know what exactly is compromised. We can break down the target systems into two classes, specifically non-critical and critical. The non-critical systems consist of desktops and laptops belonging to the administrative, operational, and executive staff of the firm. Anyone who provides statistics showing the percentage of total systems that are known to be compromised at a power plant is likely only providing statistics on these non-critical systems. It would be foolish to suspect that these figures are going to be any different than any other similarly-sized enterprise. Also, while the number of compromised non-critical systems is a proxy indicator for the general security posture of the firm, it does not tell us anything concrete about the other class of systems.
The far more important question is how many of the systems that are directly attached to industrial hardware are compromised. A compromise of a desktop or a server that is connected to a controller or a process control monitor could directly lead to blackouts and equipment destruction. Remotely enumerating these critical systems is extremely difficult, and determining their level of compromise without the explicit support of the power industry is almost impossible. Therefore, getting a third-party verification of the "power systems are compromised" story is not achievable at this time.
I am not saying that the power grid is secure or insecure. I am saying, however, that we must cast a critical eye to these stories to make sure we don't fall victim to the fear-mongering that permeates all too many security stories.
April 27, 2009
On assuming that you are owned.
Security professionals made a comment at last week's RSA that organizations should assume that they are currently owned by an outside attacker. While this may strike some as paranoia, it is a good assumption for minimizing impact in the event of a serious compromise.
For both individuals and businesses, determining the impact of getting owned begins with listing all the things that you use that are own-able, and then determining a risk mitigation strategy, a containment strategy, and a recovery strategy for each system. These all boil down to a series of "what if" questions that anyone can think through. For the average user, the set of systems that can be compromised includes, but is not limited to, all physical systems, backup mechanisms, and hosted services like e-mail and social networks.
We start from the most "distant" system inwards -- the hosted services. How can an individual's hosted accounts become compromised? The easiest way an attacker could compromise your account is through weak passwords or by sniffing passwords off the wire; therefore, we can reduce the risk of compromise by using strong passwords for our accounts and not accessing them from public access terminals and insecure wireless networks. If you are using a weak password on one site, it is entirely likely you are using a weak password elsewhere. Preventing the attacker from hopping from one hosted account to another can be as simple as using a strong and unique password on every site you access. It isn't just access to the data we should be concerned about. If the service is compromised, it is possible that everything in the account could be deleted, in which case having a backup of, say, all blog posts and all e-mail transactions would be required to get back up and running.
Let's say that the attacker has moved beyond our hosted account and either remotely compromised our physical system or actually stolen the hardware. In both cases, we should expect that all of our unencrypted data is accessible to the world. Both scenarios necessitate file-by-file encryption and a combination of physically secured on-site or off-site backups. A remote compromise would be a far worse situation: even though you don't lose the hardware, the attacker has the opportunity to capture passwords used for hosted services as well as financial accounts. The only way to limit your exposure here is to use cryptographic key fobs (like a SecureID token) and hope they aren't controlling the entire session.
Ultimately the only way to minimize the impact of a compromise is to assume that all of your data is compromised and consequently reduce the data you keep accessible to content that would not be devastating if it were leaked. In other words, never commit anything to bytes that you don't want your spouse, children, parents, or coworkers to see; the data may only be a single attack away from leaking out into the ether.
April 30, 2009
*cough* Have to work from home *sneeze*?
Not that there is any reason to say this, but it is possible that a significant portion of the workforce will be either absent or working from home in the next few months. This could mean opening up the corporate network to far larger numbers of telecommuters whose systems may be in various states of security disrepair. IT managers should be planning on how to give secure access to the corporate network to a batch of relatively untrained employees.
If you don't work in the IT department, the story is pretty simple. Get your laptop set up to connect to your work network if it cannot do so already. Laptops that are primarily home systems should be reformatted and installed from scratch if there is any concern that the machine may contain malware; just because you aren't going to work sick doesn't mean your system should.
For those of you who do work in the IT department, well, I don't envy the job ahead of you. If your network wasn't de-perimeterized before, it will be soon, whether you like it or not. Not only do you need to prep employees' personal systems to connect to the corporate infrastructure, you also need to educate them on the risks of bringing a relatively-unclean personal system into the corporate environment. Given that home systems are not nearly as well looked-after as corporate systems, you also are going to be dealing with all the infections that your employees' home PCs will be bringing past the firewall and NAT systems and into the core network.
There aren't too many recommendations I can make that aren't common sense. For example, you can distribute more laptops to employees who don't have them. Also, you should consider extending the corporate licenses for the anti-virus products to the home systems of employees who do not possess a company-managed PC but will be expected to work remotely.
Plans similar to the one described above should be in the dusty business continuity plans that many organizations created in late 2001. It's time to update them and get ready to put them to practice.
May 5, 2009
Phishing on social networks is no real surprise
For some reason users of social networks appear surprised by the rate at which phishing attacks are appearing on social networks like Facebook. There is the belief among computer users that they can run from one platform, like e-mail, to the next platform, like social networking, to escape preexisting security problems. Much like social problems in the real world, movement to a new electronic location will provide only a temporary respite from endemic social ills. Rather than allowing their population to depart due to a perception of a lack of security, social networks need to make a two-pronged attack at reducing their users' vulnerability to phishing attacks.
The first prong consists of attempting to address issues at what is known as "layer 8", or the human interaction layer. This consists of giving users clues as to what is good content and what is questionable content. For example, social networks can warn users when they are leaving the safety of the network's walled garden and are clicking on a link that has not been explicitly vetted. They can also alert users when there is an increased risk of phishing or malware attacks based upon recent activity, and make this indicator a prominent UI element that appears when links are activated.
The second prong involves the continual improvement of technology for the prevention of in-network phishing attacks. All of the major players have a security team that is already in place to address issues as they come up. Truth be told, these guys are actually doing a pretty decent job as it is right now. These teams are far more empowered to fix problems in their network than you will ever see in almost every other part of the computing world. They have complete control of the internal architecture, and are not bound by standards bodies on how they handle messaging or communication between those systems. Social networks have been able to combat abuse mostly by taking full advantage of all the information they have at their disposal regarding their users, including the IP address they are connecting from and a full record of their behavior inside the network. Nevertheless, phishing is a hard problem, and several of the social networks are going down the path of employing third-party solutions to address the issue.
Without a combination of user education and appropriate technology, participants will end up moving from location to location in search of a completely abuse-free environment. Much like many of the problems that society faces, however, the residents of a social network are enabling the attackers to take advantage of them, and as a result make it far more difficult to eliminate the problem. If individuals didn't fall for phishing attacks, then the phishers would leave the platform altogether. Sadly, once phishers' appetites have been whetted by a few successes, they are unlikely to depart anytime soon.
January 6, 2011
Immunet Acquired by Sourcefire
I haven't been blogging much. I have been busy. |
1. I started with doing tuitions, even after I picked up my first job. Being in IT, I always had a 5-days-a-week schedule, but tuition/coaching is a time-tested way to earn clean money. I was teaching Mathematics to class X students. And if your pupils do well, it is extremely rewarding: when I was teaching one girl (in 1995) whose parents had given up on her (she was in a plush school and I don't know what worked), she got such good marks that they hunted me down with a big pack of sweets after her class X board exam. You can start from your home, do evening classes, then move to a rented place, and so on. It is very tiring, but as I said, no one would short-change a teacher.
India is one of the largest centres for polishing diamonds and gems and manufacturing jewellery; it is also one of the two largest consumers of gold.[183][184] After crude oil and petroleum products, the export and import of gold, precious metals, precious stones, gems and jewellery accounts for the largest portion of India's global trade. The industry contributes about 7% of India's GDP, employs millions, and is a major source of its foreign-exchange earnings.[185] The gems and jewellery industry, in 2013, created ₹251,000 crore (US$35 billion) in economic output on value-added basis. It is growing sector of Indian economy, and A.T. Kearney projects it to grow to ₹500,000 crore (US$70 billion) by 2018.[186]
My personal finance blogs were started with $100, but you can start a blog with $20 if you buy hosting on a monthly basis. That’s 4 Starbucks coffees or 4 packs of cigarettes that many paycheck-to-paycheck people do find a way to buy. After six months of HARD work, my first site started generating $2,000 a month, and today, those three sites generate over $5,000 a month, while all I have put in was hosting for $100-ish every year each, and a website redesign for under $1,000 after three years. Freelance writing and translation jobs are also a sizable part of my income that did not require any upfront investment. Investing $10 a month in index funds is also a realistic way for many to build yet another income stream. I run several online businesses now (all it takes to start one is a domain, hosting, and maybe incorporation). There are two notable ones. The first is a meal plan membership site called $5 Meal Plan that I co-founded with Erin Chase of $5 Dinners. The second is the umbrella of blogs I run, including this one and Scotch Addict. They pay me ordinary income as well as qualified distributions since I'm a partner.
### How do you do this?
Well, try to get the highest paying job you can! Ask for a raise! Utilize services, such as Glassdoor.com, to see how your salary competes with others in your same job. Some companies really force employees to leave to get a raise, and then come back for another raise. This industry-jumping promotional strategy is very common and could work.
The information technology (IT) industry in India consists of two major components: IT services and business process outsourcing (BPO). The sector has increased its contribution to India's GDP from 1.2% in 1998 to 7.5% in 2012.[213] According to NASSCOM, the sector aggregated revenues of US$147 billion in 2015, where export revenue stood at US$99 billion and domestic at US$48 billion, growing by over 13%.[213]

However, this comes back to the old discussion of pain versus pleasure. We will always do more to avoid pain than we will to gain pleasure. When our backs are against the wall, we act. When they're not, we relax. The truth is that the pain-versus-pleasure paradigm only operates in the short term. We'll only avoid pain in the here and now. Often not in the long term.

Petroleum products and chemicals are a major contributor to India's industrial GDP, and together they contribute over 34% of its export earnings. India hosts many oil refinery and petrochemical operations, including the world's largest refinery complex in Jamnagar that processes 1.24 million barrels of crude per day.[171] By volume, the Indian chemical industry was the third-largest producer in Asia, and contributed 5% of the country's GDP. India is one of the five-largest producers of agrochemicals, polymers and plastics, dyes and various organic and inorganic chemicals.[172] Despite being a large producer and exporter, India is a net importer of chemicals due to domestic demands.[173]

If you want to really start tracking your finances, and I mean not just your spending but your investing (that's where wealth is built), give Personal Capital a look. They will give you a $20 Amazon gift card if you link up an investment account that has $1,000+. No strings. It's a cornerstone of my financial system and I think you owe yourself a look. 100% free too.

Real Estate Crowdsourcing – After selling my SF rental house in mid-2017 for 30X annual gross rent, I reinvested $550,000 of the proceeds ($810,000 total) in real estate crowdfunding, based in San Francisco. My goal is to take advantage of cheaper heartland real estate with much higher net rental yields (8% – 12% vs. 2% – 3.5% in SF) and diversify away from expensive coastal city real estate, which is now under pressure due to new tax policy that limits the SALT deduction to $10,000 and the mortgage interest deduction to mortgages of $750,000 (down from $1,000,000) for 2018 and beyond.
One of the benefits of the time we live in is all the software and technology we have available. If you want to scale a business that’s bigger than yourself, you’re going to need systems in place to get you there. These systems should involve automating as much as you can. The less involvement of you in the day-to-day means you have time to focus on the big picture strategies that help your business grow.
The five-year plans, especially in the pre-liberalisation era, attempted to reduce regional disparities by encouraging industrial development in the interior regions and distributing industries across states. The results have been discouraging as these measures increased inefficiency and hampered effective industrial growth.[397] The more advanced states have been better placed to benefit from liberalisation, with well-developed infrastructure and an educated and skilled workforce, which attract the manufacturing and service sectors. Governments of less-advanced states have tried to reduce disparities by offering tax holidays and cheap land, and focused on sectors like tourism which can develop faster than other sectors.[398][399] India's income Gini coefficient is 33.9, according to the United Nations Development Program (UNDP), indicating overall income distribution to be more uniform than East Asia, Latin America and Africa.[10]
# Self-Employed Income
Here you call every shot: every decision is made and executed by you, and the success of the work is determined by your own conviction and performance. There are downsides too: if you are sick or want a vacation, you still have to run the show and cannot be completely unavailable from your establishment for long.
What if there was a way for you to effectively make money while you sleep? Sounds like a dream come true, right? Even for the biggest workaholics, there are only so many hours in a day. If only you could get paid multiple times for something you did once—that’s exactly how passive income works! Thanks to technology, the potential to create multiple income streams is even easier than ever before. We’re no longer held back by the limitations of a traditional 9-to-5 job, and financial freedom is at our fingertips. Even if you already work a full-time job you can still improve your financial health with passive income.
When withdrawing money to live on, I don’t care how many stock shares I own or what the dividends are – I care about how much MONEY I’m able to safely withdraw from my total portfolio without running out before I die. A lot of academics have analyzed total market returns based on indices and done Monte Carlo simulations of portfolios with various asset allocations, and have come up with percentages that you can have reasonable statistical confidence of being safe.
I’ve been into home décor lately and I had to turn to Etsy to find exactly what I wanted. I ended up purchasing digital files of the artwork I wanted printed out! The seller had made a bunch of wall art, digitized, and listed it on Etsy for instant download. There are other popular digital files on Etsy as well such as monthly planners. If you’re into graphic design this could be an amazing passive income idea for you.
Whether you take a “distribution” (aka free cash flow) in the form of a dividend, interest payment, capital gain, maturing rung of a CD ladder, etc., you are still taking the same amount of cash out of your portfolio. Don’t fall for the trap of suboptimizing your overall portfolio’s performance because you’re chasing some unimportant trait called “income”.
If social capital is expecting to benefit from a share of the human capital of others, there may not be enough “others” in future generations to contribute to the well being of the current generation. It is interesting to speculate about how much guaranteed income the economy can be expected to support at any given time and for what categories of people.
Many people like writing blogs but only a few know that it can fetch you money as well. You can sign up with different companies as a promoter (or an affiliate) to promote their certain product or services on your blogs and websites. The payment method can be either a flat fee or a percentage of the amount of the sale completed, depending on your agreement.
This equation implies two things. First, buying one more unit of good x implies buying $\frac{P_x}{P_y}$ fewer units of good y. So, $\frac{P_x}{P_y}$ is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of x would increase at the lower price, the law of demand. The generalization to more than two goods consists of modelling y as a composite good.
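To make the slope explicit, here is the standard two-good budget-constraint algebra behind that statement (a routine derivation, not taken from the original article):

$$
P_x x + P_y y = Y
\;\Longrightarrow\;
y = \frac{Y}{P_y} - \frac{P_x}{P_y}\,x
\;\Longrightarrow\;
\Delta y = -\frac{P_x}{P_y}\,\Delta x .
$$

Setting $\Delta x = 1$ recovers the claim above: one more unit of x costs $\frac{P_x}{P_y}$ units of y.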
2019N1105
11̑SZW
@{́A11̑SZW܂B́A3ɊJÂsyɊwZ\ďoꂷlN̔\܂Bt炵ASZ傫Ȕ肪܂B
Z搶̂b
SN̔\
ySZ̍ŐVLz
posted by ac at 20:27 | SZ
2019N1102
CwsL8
yЂƎƂԂɉ߂܂BԂ̌o߂Ƃ́Â܂B
́Â݂₰b̉Ԃ炩Ă鍠ł傤B
͂xŁÅwтɔĂˁB
CwsLAɂĂ܂B
{Ă݂ȂA肪Ƃ܂B
posted by ac at 00:53 | @UN
2019N1101
CwsL7
yЂƎAƂԂɉ߂Ă܂B
wZ܂ŁA炭܂B
́A݂₰bɉԂ炭Ƃł傤I
qǂ̋AA炭҂B
posted by ac at 13:28 | @UN
2019N1101
CwsL6
O[hŗVԎqǂ́AȂƖCȂƁB
݂ȎvVł܂B
posted by ac at 09:55 | @UN
2019N1101
ԒdÂ
@6ŃACwsɏoĂ܂B́Aō̎OO[hałB
@wZł́AHtɌẲԒdÂ肪sĂ܂BZ搶Ǘ@gĉԒdkĂĂ܂BsXی쏗܂Ǘ@劈Ă܂B
ŁAԂJ̐]wZɂȂ܂B
posted by ac at 09:16 | ]wZ
2019N1101
CwsL5
͂悤܂I
̋o߂ʂ܂܁ACɒ}Ă܂B
C߂Ƃ͈߂āA璩H܂B
posted by ac at 07:15 | @UN
2019N1031
CwsL4
ɃzeɓAוقǂĂH܂B
yH|Xʼn€Ay[HƂȂ܂B
̌́AeŃerςQ[肵āAk̂ЂƎłB
posted by ac at 19:25 | @UN
2019N1031
CwsL3
BقցB
̓Wi͈ɎIf炵WiłBBe֎~̂ŏЉłȂ̂cOI
݂ȑ喞̗lqłB
posted by ac at 15:39 | @UN
2019N1031
CwsL2
HAɕ{ւwɍs܂B
ꂼɂ낢Ȏv悤łB
ɂ́Auꂳ邩AlɍiFBl̂߂ɁAMĂꐶĂĂBĺAAABvĂė܂oĂ܂B
Ȏvl܂CwsłB
݂ȁACłB
posted by ac at 14:08 | @UN
2019N1031
CwsL1@̂̂є
̂̂єقŊwł܂B̂̐Aj̐AĉȊw̐Ɉ|Ă݂ȋÁXłB
̏o͋CƈЗ͂͐łˁB
posted by ac at 10:35 | @UN |
## Getting number of lines of printf
Questions about the LÖVE API, installing LÖVE and other support related questions go here.
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm
### Getting number of lines of printf
If I am using printf for wordwrap, there is no way to accurately determine how many lines a given string will span or how many vertical pixels it will take. I'd like to request this feature be added to future versions of Love.
nevon
Commander of the Circuloids
Posts: 938
Joined: Thu Feb 14, 2008 8:25 pm
Location: Stockholm, Sweden
Contact:
### Re: Getting number of lines of printf
There is a very cumbersome way.
Code: Select all
font = love.graphics.newFont()
str = "This is a very long string that will surely be split over several lines."
maxwidth = 100
lines = math.ceil(font:getWidth(str)/maxwidth)
vspace = font:getHeight(str)*lines
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm
### Re: Getting number of lines of printf
The method you have is similar to the one I'm using and both ways work poorly. They are only estimates and are unsatisfactory.
When you estimate like this you frequently get overestimates on some strings, which in my case means boxes around text having varying amounts of ugly white space at the end.
nevon
Commander of the Circuloids
Posts: 938
Joined: Thu Feb 14, 2008 8:25 pm
Location: Stockholm, Sweden
Contact:
### Re: Getting number of lines of printf
Yes, I agree. Text handling is quite poor, but at least it's something to get you by for now. If the issue tracker was up, you could file a feature request.
Luiji
Party member
Posts: 396
Joined: Mon May 17, 2010 6:59 pm
### Re: Getting number of lines of printf
My recommendation: when you have to wrap text like that, use a monospace font. It's also good if you are trying to make your game look retro.
Good bye.
Elvashi
Prole
Posts: 45
Joined: Sat Jul 04, 2009 9:17 am
Location: Australia
### Re: Getting number of lines of printf
I'm fairly certain I have FRed it in the past, and monospace is not a solution; at best it's a hack*.
Font objects have getWidth(string), which can be used to get the length of a string, it seems safe to assume that love is using the basic "wrap when too long" method, which should be easy enough to implement.
* Unless, of course, you want to use a monospace font in the first place.
"We could make a program for doing this for you, but that is for the LÖVE IDE, planned to be released in March 2142." ~mike
Networking with UDP uLove Proposal CCG: Gangrene
bartbes
Sex machine
Posts: 4946
Joined: Fri Aug 29, 2008 10:35 am
Location: The Netherlands
Contact:
### Re: Getting number of lines of printf
I just checked the algorithm: whenever it is over the wrap limit, it goes back to the last space character and inserts a line break there; if there is no space, it just continues on the line. Hope this helps.
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm
### Re: Getting number of lines of printf
It is technically possible to predict the lines, but by doing so you are essentially recreating the entire word wrap function which is a huge waste.
Last edited by Lap on Sat Jun 12, 2010 11:31 am, edited 1 time in total.
bartbes
Sex machine
Posts: 4946
Joined: Fri Aug 29, 2008 10:35 am
Location: The Netherlands
Contact:
### Re: Getting number of lines of printf
Well, I guess I can add a function to calculate it, should I consider this a feature request?
Lap
Party member
Posts: 256
Joined: Fri Apr 30, 2010 3:46 pm |
The number of arrangements of six identical balls in three identical bins is _____________ .
I am getting 217 as the answer for “6 distinct in 3 identical”
This is the $\color{red}{\text{“indistinguishable objects into indistinguishable boxes”}}$ problem, which is a standard combinatorial problem. There is no simple closed formula for the number of ways to distribute $n$ indistinguishable objects into $j$ indistinguishable boxes.
So, We will enumerate all the ways to distribute.
The best way is to go in a sequence covering all possibilities, so that we neither overcount nor undercount.
Case 1 : $\color{blue}{\text{If only one bin is used : }}$
(6,0,0), i.e. only 1 way (since all balls have to be put in the single bin that is used; and since all bins are identical, it doesn’t matter which bin we use)
Case 2 : $\color{blue}{\text{If two bins are used : }}$
The distribution can be done as any of the following :
(5,1) (which means 5 identical balls in one bin, 1 ball in another bin, and the third bin is unused)
(4,2) (which means 4 identical balls in one bin, 2 balls in another bin, and the third bin is unused)
(3,3) (which means 3 identical balls in one bin, 3 balls in another bin, and the third bin is unused)
i.e. 3 ways to distribute 6 identical balls into 3 identical bins if exactly two of the bins are used.
Case 3 : $\color{blue}{\text{If three bins are used : }}$
The distribution can be done as any of the following :
(4,1,1) (which means 4 identical balls in one bin, 1 ball in another bin, and 1 ball in the third bin)
(3,2,1) (which means 3 identical balls in one bin, 2 balls in another bin, and 1 ball in the third bin)
(2,2,2) (which means 2 identical balls in one bin, 2 balls in another bin, and 2 balls in the third bin)
i.e. 3 ways to distribute 6 identical balls into 3 identical bins if all three of the bins are used.
So, total we have 7 ways to distribute 6 identical balls in 3 identical bins.
NOTE that we cannot distribute as (2,2,1,1) because only three bins are available, not four.
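A quick way to sanity-check that total is to enumerate the partitions of 6 into at most 3 parts in a few lines of Python (a small verification sketch, not part of the original answer):

```python
def partitions(n, k, max_part=None):
    """Yield partitions of n into at most k non-increasing parts,
    i.e. the ways to put n identical balls into k identical bins."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    if k == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, k - 1, first):
            yield (first,) + rest

ways = list(partitions(6, 3))
print(len(ways))   # 7
print(ways)        # (6,), (5,1), (4,2), (4,1,1), (3,3), (3,2,1), (2,2,2)
```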
This problem is of the type Identical Balls and Identical Bins:
Given: 3 bins and 6 balls.
1. 6,0,0
2. 5,1,0
3. 4,2,0
4. 4,1,1
5. 3,3,0
6. 3,2,1
7. 2,2,2
Only these arrangements are possible.
In the given question the balls are identical and the bins are identical (they all look the same),
so here we have six identical balls and three identical bins.
Case 1, when no bin is empty: (4,1,1), (3,2,1), (2,2,2), giving 3 ways.
Case 2, when exactly one bin is empty: (5,1,0), (4,2,0), (3,3,0), giving 3 ways.
Case 3, when exactly two bins are empty: (6,0,0), giving 1 way.
So overall, the number of arrangements that look different is 3 + 3 + 1 = 7.
## May 21, 2015
### Titus Brown
#### Comparing and evaluating assembly with graph alignment
One of our long-term interests has been in figuring out what the !$!$!#!#%! assemblers actually do to real data, given all their heuristics. A continuing challenge in this space is that short-read assemblers deal with really large amounts of noisy data, and it can be extremely hard to look at assembly results without running into this noise head-on. It turns out that being able to label De Bruijn graphs efficiently and align reads to graphs can help us explore assemblies in a variety of ways.
The two basic challenges are noisy data and lots of data. When (for example) looking at what fraction of reads has been incorporated into an assembly, noise causes problems because a read may have been corrected during assembly. This is where graph alignment comes in handy, because we can use it to align reads to the full graph and get rid of much of this noise. Lots of data complicates things because it's very hard to look at reads individually - you need to treat them in aggregate, and it's much easier just to look at the reads that match to your assembly than to investigate the oddball reads that don't assemble. And this is where the combination of graph alignment and labeling helps, because it's easy to count and extract reads based on overlaps with labels, as well as to summarize those overlaps.
The main question we will be asking below is: can we measure overlaps and disjoint components in graph extents, that is, in unique portions of assembly graphs? We will be doing this using our sparse graph instead of counting nodes or k-mers, for two reasons: first, the covering is largely independent of coverage, and second, the number of sparse nodes is a lot smaller than the total number of k-mers.
The underlying approach is straightforward:
• load contigs or reads from A into the graph, tagging sparse nodes as we go;
• load contigs or reads from B into the graph, tagging sparse nodes as we go;
• count the number of tagged nodes that are unique to A, unique to B, and in the overlap;
• optionally do graph alignment as you load in reads, to ignore errors.
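The bookkeeping this requires is just set arithmetic on two collections of tags. Here is a minimal sketch of the idea; it is not the actual compare-graphs.py, and `tags_from_file` and `graph` are stand-ins for whatever routine consumes sequences into a shared graph, tags sparse nodes along the way, and returns the tags it created:

```python
def compare_tag_sets(tags_a, tags_b):
    """Summarize the overlap between two sets of sparse-node tags."""
    all_tags = tags_a | tags_b
    print("all tags:", len(all_tags))
    print("n tags in A:", len(tags_a))
    print("n tags in B:", len(tags_b))
    print("tags in A but not in B", len(tags_a - tags_b))
    print("tags in B but not in A", len(tags_b - tags_a))

# hypothetical helpers: load A, then B, into the same graph, tagging as we go;
# align=True would graph-align the reads first so sequencing errors don't add tags
tags_a = tags_from_file(graph, "genomes.fa")
tags_b = tags_from_file(graph, "reads-b.fa", align=True)
compare_tag_sets(tags_a, tags_b)
```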
## Some basics
Let's start with simulations, as usual. We'll set up two randomly generated chromosomes, a and b, of equal size, both in genomes.fa, and look at genome-a extent within the context of both (target 'fake_a' in Makefile):
./compare-graphs.py genomes.fa genome-b.fa
all tags: 52
n tags in A: 52
n tags in B: 26
tags in A but not in B 26
tags in B but not in A 0
So far so good -- there's a 50% overlap between one of the chromosomes and the total.
If we now generate reads from genome-b.fa and do the graph comparison with the reads, we get silly results (target 'fake_b' in Makefile):
./compare-graphs.py genomes.fa reads-b.fa
all tags: 135
n tags in A: 109
n tags in B: 107
tags in A but not in B 28
tags in B but not in A 26
Despite knowing by construction that all of the reads came from genome-b, we're getting results that there's a lot of tags in the reads that aren't in the genome. This is because of errors in the reads, which introduce many spurious branches in the graph.
This is now where the read aligner comes in; we can do the same comparison, but this time we can ask that the reads be aligned to the genome, thus eliminating most of the errors in the comparison:
./compare-graphs.py genomes.fa reads-b.fa --align-b
all tags: 99
n tags in A: 99
n tags in B: 72
tags in A but not in B 27
tags in B but not in A 0
At this point we can go in and look at the original tags in A that aren't covered in B (there are 52) and note that B is missing approximately half of the graph extent in A.
## Trying it out on some real data
Let's try evaluating a reference against some low-coverage reads. Using the same mouse reference transcriptome & subset of reads that we've been using in previous blog posts, we can ask "how many sparse nodes are unaccounted for in the mouse transcriptome when we look at the reads?" (Note, the mouse transcriptome was not generated from this data set; this is the reference transcriptome.)
The answer (target rna-compare-noalign.txt in the Makefile) is:
all tags: 1959121
n tags in A: 1878475
n tags in B: 644963
tags in A but not in B 1314158
tags in B but not in A 80646
About 12.5% of the reads in (B; 80646 / 644963) don't pick up tags in the official reference transcriptome (A).
Interestingly, the results with alignment are essentially the same (target rna-compare-align.txt):
all tags: 1958219
n tags in A: 1877685
n tags in B: 643655
tags in A but not in B 1314564
tags in B but not in A 80534
suggesting that, by and large, these reads are disjoint from the existing assembly, and not mere sequencing errors. (This may be because we require that the entire read be mappable to the graph in order to count it, though.)
## Evaluating trimming
One of the interesting questions that's somewhat hard to investigate in terms of transcriptome assembly is, how beneficial is read trimming to the assembly? The intuition here (that I agree with) is that generally sequence trimming lowers the effective coverage for assembly, and hence loses you assembled sequence. Typically this is measured by running an assembler against the reads, which is slightly problematic because the assembler could have all sorts of strange interactions with the trimming.
So, can we look at the effect of trimming in terms of sparse nodes? Sure!
Suppose we do a stringent round of trimming on our RNAseq (Trimmomatic SLIDINGWINDOW:4:30) - what do we lose?
On this low coverage data set, where A is the graph formed from the trimmed reads and B is the graph from the raw reads, we see (target rseq-hardtrim-ba-noalign.txt):
all tags: 588615
n tags in A: 518980
n tags in B: 588615
tags in A but not in B 0
tags in B but not in A 69635
we see about 12% of the sparse nodes missing from the trimmed data.
If we run the read aligner with a low coverage cutoff (target rseq-hardtrim-ba-align1.txt), we see:
all tags: 569280
n tags in A: 519396
n tags in B: 561757
tags in A but not in B 7523
tags in B but not in A 49884
Basically, we recover about 20,000 tags in B (69,635 - 49,884) with alignment vs exact matches, so a few percent; but we also lose about half that (7,500) for reasons that we don't entirely understand (wiggle in the graph aligner?)
We have no firm conclusions here, except to say that this should be a way to evaluate the effect of different trimming on graph extent, which should be more reliable than looking at the effect on assemblies.
## Notes and miscellany
• There is no inherent coverage model embedded here, so as long as we can correct for the density of tags, we can apply these approaches to genomes, metagenomes, and transcriptomes.
• It's actually very easy to extract the reads that do or don't match, but our current scripts don't let us do so based on labels.
• We aren't really using the labeling here, just the tagging - but labeling can enable n-way comparisons between e.g. different assemblies and different treatments, because it lets us examine which tags show up in different combinations of data sets.
### Appendix: Running this code
The computational results in this blog post are Rather Reproducible (TM). Please see https://github.com/dib-lab/2015-khmer-wok5-eval/blob/master/README.rst for instructions on replicating the results on a virtual machine or using a Docker container.
### Continuum Analytics
#### Conda for Data Science
tl; dr: We discuss how data scientists working with Python, R, or both can benefit from using conda in their workflow.
Conda is a package and environment manager that can help data scientists manage their project dependencies and easily share environments with their peers. Conda works with Linux, OSX, and Windows, and is language agnostic, which allows us to use it with any programming language or even multi-language projects.
This post explores how to use conda in a multi-language data science project. We’ll use a project named topik, which combines Python and R libraries, as an example.
## May 20, 2015
### Titus Brown
#### Labeling a sparse covering of a De Bruijn graph, and utility thereof
So far, in this week of khmer blog posts (1, 2, 3), we've been focusing on the read-to-graph aligner ("graphalign"), which enables sequence alignments to a De Bruijn graph. One persistent challenge with this functionality as introduced is that our De Bruijn graphs nodes are anonymous, so we have no way of knowing the sources of the graph sequences to which we're aligning.
Without being able to label the graph with source sequences and coordinates, we can't do some pretty basic things, like traditional read mapping, counting, and variant calling. It would be nice to be able to implement those in a graph-aware manner, we think.
To frame the problem, graphalign lets us query into graphs in a flexible way, but we haven't introduced any way to link the matches back to source sequences. There are several things we could do -- one basic idea is to annotate each node in the graph -- but what we really want is a lightweight way to build a labeled graph (aka "colored graph" in Iqbal parlance).
This is where some nice existing khmer technology comes into play.
## Partitioning, tagging, and labelhash
Back in 2012, we published a paper (Pell et al., 2012) that introduced a lightweight representation of implicit De Bruijn graphs. Our main purpose for this representation was something called "partitioning", in which we identified components (disconnected subgraphs) of metagenome assembly graphs for the purpose of scaling metagenome assembly.
A much underappreciated part of the paper is buried in the Materials,
For discovering large components we tag the graph at a minimum density by using the underlying reads as a guide. We then exhaustively explore the graph around these tags in order to connect tagged k-mers based on graph connectivity. The underlying reads in each component can then be separated based on their partition.
The background is that we were dealing with extremely large graphs (30-150 billion nodes), and we needed to exhaustively explore the graphs in order to determine if any given node was transitively connected to any other node; from this, we could determine which nodes belonged to which components. We didn't want to label all the nodes in the graph, or traverse from all the nodes, because this was prohibitive computationally.
### A sparse graph covering
To solve this problem, we built what I call a sparse graph covering, in which we chose a subset of graph nodes called "tags" such that every node in the graph was within a distance 'd' of a tag. We then used this subset of tags as a proxy for the graph structure overall, and could do things like build "partitions" of tags representing disconnected components. We could guarantee the distance 'd' by using the reads themselves as guides into the graph (Yes, this was one of the trickiest bits of the paper. ;)
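One simple way to see why reads give you the distance guarantee: every k-mer in the graph comes from some read, so if you tag (roughly) every d-th k-mer along each read, every node ends up within d of a tag. A toy illustration in plain Python, not the khmer implementation:

```python
K = 21   # k-mer size (illustrative)
D = 40   # maximum distance from any graph node to a tag (illustrative)

def choose_tags(reads, k=K, d=D):
    """Pick a sparse set of tag k-mers so that every k-mer seen in the
    reads lies within d steps (along its source read) of some tag."""
    tags = set()
    for read in reads:
        for start in range(0, len(read) - k + 1, d):
            tags.add(read[start:start + k])
    return tags
```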
Only later did I realize that this tagging was analogous to sparse graph representations like succinct De Bruijn graphs, but that's another story.
The long and short of it is this: we have a nice, simple, robust, and somewhat lightweight way to label graph paths. We also have functionality already built in to exhaustively explore the graph around any node and collect all tagged nodes within a given distance.
What was missing was a way to label these nodes efficiently and effectively, with multiple labels.
### Generic labeling
Soon after Camille Scott, a CS graduate student at MSU (and now at Davis), joined the lab, she proposed an expansion to the tagging code to enable arbitrary labels on the tags. She implemented this within khmer, and built out a nice Python API called "labelhash".
With labelhash, we can do things like this:
lh = khmer.CountingLabelHash(...)
lh.consume_fasta_and_tag_with_labels(sequence_file)
and then query labelhash with specific sequences:
labels = lh.sweep_label_neighborhood(query, dist)
where 'labels' now contains the labels of all tags that overlap with 'query', including tags that are within an optional distance 'dist' of any node in query.
Inconveniently, however, this kind of query was only useful when what you were looking for was in the graph already; it was a way to build an index of sequences, but fuzzy matching wasn't possible. With the high error rate of sequencing and high polymorphism rates in things we worked on, we were worried about its poor effectiveness.
### Querying via graphalign, retrieving with labelhash
This is where graphalign comes in - we can query into the graph in approximate ways, and retrieve a path that's actually in the graph from the query. This is essentially like doing a BLASTN query into the graph. And, combined with labelhash, we can retrieve the reference sequence(s) that match to the query.
This is roughly what it looks like, once you've built a labelhash as above. First, run the query:
aligner = khmer.ReadAligner(lh.graph, trusted_coverage, 1.0)
score, graph_path, query_path, is_truncated = aligner.align(query)
and then retrieve the associated labels:
labels = lh.sweep_label_neighborhood(graph_path)
...which you can then use with a preexisting database of the sequence.
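Putting the two steps together: assuming `lh` and `trusted_coverage` are set up as above, and that we kept an ordinary dict mapping label IDs back to sequence names when building the index (that mapping is our own bookkeeping, not part of khmer), a BLASTN-like query becomes:

```python
aligner = khmer.ReadAligner(lh.graph, trusted_coverage, 1.0)

def matches_for(query, label_to_name):
    """Align a query into the graph, then report which indexed
    sequences the aligned path overlaps."""
    score, graph_path, query_path, is_truncated = aligner.align(query)
    labels = lh.sweep_label_neighborhood(graph_path)
    return [label_to_name[x] for x in set(labels)]
```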
### Why would you do any of this?
If this seems like an overly complicated way of doing a BLAST, here are some things to consider:
• when looking at sequence collections that share lots of sequence this is an example of "compressive computing", in which the query is against a compressed representation of the database. In particular, this type of solution might be good when we have many, many closely related genomes and we want to figure out which of them have a specific variant.
• graphs are notoriously heavyweight in general, but these graphs are actually quite low memory.
• you can do full BLASTX or protein HMM queries against these graphs as well. While we haven't implemented that in khmer, both a BLAST analog and a HMMER analog have been implemented on De Bruijn graphs.
• another specific use case is retrieving all of the reads that map to a particular region of an assembly graph; this is something we were very interested in back when we were trying to figure out why large portions of our metagenomes were high coverage but not assembling.
One use case that is not well supported by this scheme is labeling all reads - the current label storage scheme is too heavyweight to readily allow for millions of labels, although it's something we've been thinking about.
## Some examples
We've implemented a simple (and, err, somewhat hacky) version of this in make-index.py and do-align.py.
To see them in action, you'll need the 2015-wok branch of khmer, and a copy of the prototype (https://github.com/dib-lab/2015-khmer-wok4-multimap) -- see the README for full install instructions.
Then, type:
make fake
and you should see something like this (output elided):
./do-align.py genomes reads-a.fa
showing that we can correctly assign reads sampled from randomly constructed genomes - a good test case :).
### Assigning reads to reference genomes
We can also index a bunch of bacterial genomes and map against all of them simultaneously -- target 'ecoli' will map reads from E. coli P12B against all Escherichia genomes in NCBI. (Spoiler alert: all of the E. coli strains are very closely related, so the reads map to many references!)
It turns out to be remarkably easy to implement a counting-via-mapping approach -- see do-counting.py. To run this on the same RNAseq data set as in the counting blog post, build the 'rseq.labelcount' target.
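The core of such a counting-via-mapping loop can be very small. A sketch (not the actual do-counting.py) that reuses only the calls shown above, plus a collections.Counter for the tallies; `reads` stands in for the RNAseq sequences:

```python
from collections import Counter

counts = Counter()
for read in reads:
    score, graph_path, _, is_truncated = aligner.align(read)
    if is_truncated:
        continue                 # skip reads that didn't align cleanly
    for label in set(lh.sweep_label_neighborhood(graph_path)):
        counts[label] += 1       # one count per transcript label the read touches
```

Because a read increments every label its path touches, multimapping reads are counted once per matching transcript, which is one of several reasonable policies for handling multimapping.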
## Flaws in our current implementation
A few points --
• we haven't introduced any positional labeling in the above labels, so all we can do is retrieve the entire sequence around submatches. This is enough to do some things (like counting transcripts) but for many purposes (like pileups / variant calling via mapping) we would need to do something with higher resolution.
• there's no reason we couldn't come up with different tagging and labeling schemes that focus on features of interests - specific variants, or branch points for isoforms, or what have you. Much of this is straightforward and can be done via the Python layer, too.
• "labeled De Bruijn graphs" are equivalent in concept to "colored De Bruijn graphs", but we worry that "colored" is already a well-used term in graph theory and we are hoping that we can drop "colored" in favor of "labeled".
### Appendix: Running this code
The computational results in this blog post are Rather Reproducible (TM). Please see https://github.com/dib-lab/2015-khmer-wok4-labelhash/blob/master/README.rst for instructions on replicating the results on a virtual machine or using a Docker container.
## May 19, 2015
### Titus Brown
#### Abundance counting of sequences in graphs with graphalign
De Bruijn graph alignment should also be useful for exploring concepts in transcriptomics/mRNAseq expression. As with variant calling graphalign can also be used to avoid the mapping step in quantification; and, again, as with the variant calling approach, we can do so by aligning our reference sequences to the graph rather than the reads to the reference sequences.
The basic concept here is that you build a (non-abundance-normalized) De Bruijn graph from the reads, and then align transcripts or genomic regions to the graph and get the k-mer counts across the alignment. This is nice because it gives you a few options for dealing with multimapping issues as well as variation across the reference. You can also make use of the variant calling code to account for certain types of genomic/transcriptomic variation and potentially address allelic bias issues.
Given the existence of Sailfish/Salmon and the recent posting of Kallisto, I don't want to be disingenuous and pretend that this is in any way a novel idea! It's been clear for a long time that using De Bruijn graphs in RNAseq quantification is a worthwhile idea. Also, whenever someone uses k-mers to do something in bioinformatics, there's an overlap with De Bruijn graph concepts (...pun intended).
What we like about the graphalign code in connection with transcriptomics is that it makes a surprisingly wide array of things easy to do. By eliminating or at least downgrading the "noisiness" of queries into graphs, we can ask all sorts of questions, quickly, about read counts, graph structure, isoforms, etc. Moreover, by building the graph with error corrected reads, the counts should in theory become more accurate. (Note that this does have the potential for biasing against low-abundance isoforms because low-coverage reads can't be error corrected.)
For one simple example of the possibilities, let's compare mapping counts (bowtie2) against transcript graph counts from the graph (khmer) for a small subset of a mouse mRNAseq dataset. We measure transcript graph counts here by walking along the transcript in the graph and averaging over k-mer counts along the path. This is implicitly a multimapping approach; to get results comparable to bowtie2's default parameters (which random-map), we divide out the number of transcripts in which each k-mer appears (see count-median-norm.py, 'counts' vs 'counts2').
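Stripped to its core, the per-transcript tally just described looks something like this sketch (not the actual count-median-norm.py; `countgraph.get()` is assumed to return a k-mer's abundance in the graph built from the reads, and `n_transcripts_with` is a precomputed table of how many transcripts contain each k-mer, used for the multimapping correction):

```python
K = 32  # k-mer size used to build the counting graph (illustrative)

def transcript_graph_count(transcript, countgraph, n_transcripts_with):
    """Average k-mer abundance along a transcript's path through the graph,
    splitting each k-mer's count across all transcripts that contain it."""
    total = 0.0
    n_kmers = 0
    for i in range(len(transcript) - K + 1):
        kmer = transcript[i:i + K]
        total += countgraph.get(kmer) / n_transcripts_with[kmer]
        n_kmers += 1
    return total / n_kmers if n_kmers else 0.0
```

(The comparisons discussed next plot these per-transcript graph counts against bowtie2 mapping counts.)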
This graph shows some obvious basic level of correlation, but it's not great. What happens if we use corrected mRNAseq reads (built using graphalign)?
This looks better - the correlation is about the same, but when we inspect individual counts, they have moved further to the right, indicating (hopefully) greater sensitivity. This is to be expected - error correction is collapsing k-mers onto the paths we're traversing, increasing the abundance of each path on average.
What happens if we now align the transcripts to the graph built from the error corrected reads?
Again, we see mildly greater sensitivity, due to "correcting" transcripts that may differ only by a base or two. But we also see increased counts above the main correlation, especially above the branch of counts at x = 0 (poor graph coverage) but with high mapping coverage - what gives? Inspection reveals that these are reads with high mapping coverage but little to no graph alignment. Essentially, the graph alignment is getting trapped in a local region. There are at least two overlapping reasons for this -- first, we're using the single seed/local alignment approach (see error correction) rather than the more generous multiseed alignment, and so if the starting point for graph alignment is poorly chosen, we get trapped into a short alignment. Second, in all of these cases, the transcript isn't completely covered by reads, a common occurrence due to both low coverage data as well as incomplete transcriptomes.
In this specific case, the effect is largely due to low coverage; if you drop the coverage further, it's even more exacerbated.
Two side notes here -- first, graphalign will align to low coverage (untrusted) regions of the graph if it has to, although the algorithm will pick trusted k-mers when it can. As such it avoids the common assembler problem of only recovering high abundance paths.
And second, no one should use this code for counting. This is not even a proof of concept, but rather an attempt to see how well mapping and graph counting fit with an intentionally simplistic approach.
## Isoform structure and expression
Another set of use cases worth thinking about is looking at isoform structure and expression across data sets. Currently we are somewhat at the mercy of our reference transcriptome, unless we re-run de novo assembly every time we get a new data set. Since we don't do this, for some model systems (especially emerging model organisms) isoform families may or may not correspond well to the information in the individual samples. This leads to strange-looking situations where specific transcripts have high coverage in one region and low coverage in another (see SAMmate for a good overview of this problem.)
Consider the situation where a gene with four exons, 1-2-3-4, expresses isoform 1-2-4 in tissue A, but expresses 1-3-4 in tissue B. If the transcriptome is built only from data from tissue A, then when we map reads from tissue B to the transcriptome, exon 2 will have no coverage and counts from exon 3 will (still) be missing. This can lead to poor sensitivity in detecting low-expressed genes, weird differential splicing results, and other scientific mayhem.
(Incidentally, it should be clear from this discussion that it's kind of insane to build "a transcriptome" once - what we really want do is build a graph of all relevant RNAseq data where the paths and counts are labeled with information about the source sample. If only we had a way of efficiently labeling our graphs in khmer! Alas, alack!)
With graph alignment approaches, we can short-circuit the currently common ( mapping-to-reference->summing up counts->looking at isoforms ) approach, and go directly to looking directly at counts along the transcript path. Again, this is something that Kallisto and Salmon also enable, but there's a lot of unexplored territory here.
We've implemented a simple, short script to explore this here -- see explore-isoforms-assembled.py, which correctly picks out the exon boundaries from three simulated transcripts (try running it on 'simple-mrna.fa').
### Other thoughts
• these counting approaches can be used directly on metagenomes as well, for straight abundance counting as well as analysis of strain variation. This is of great interest to our lab.
• calculating differential expression on an exonic level, or at exon-exon junctions, is also an interesting direction.
### References and previous work
• Kallisto is the first time I've seen paths in De Bruijn graphs explicitly used for RNAseq quantification rather than assembly. Kallisto has some great discussion of where this can go in the future (allele-specific expression being one very promising direction).
• There are lots of De Bruijn graph based assemblers for mRNAseq (Trinity, Oases, SOAPdenovo-Trans, and Trans-ABySS).
### Appendix: Running this code
The computational results in this blog post are Rather Reproducible (TM). Please see https://github.com/dib-lab/2015-khmer-wok3-counting/blob/master/README.rst for instructions on replicating the results on a virtual machine or using a Docker container.
### Matthew Rocklin
This work is supported by Continuum Analytics and the XDATA Program as part of the Blaze Project
tl;dr We lay out the pieces of Dask, a system for parallel computing
## Introduction
Dask started five months ago as a parallel on-disk array; it has since broadened out. I’ve enjoyed writing about its development tremendously. With the recent 0.5.0 release I decided to take a moment to give an overview of dask’s various pieces, their state, and current development.
## Collections, graphs, and schedulers
Dask modules can be separated as follows:
On the left there are collections like arrays, bags, and dataframes. These copy APIs for NumPy, PyToolz, and Pandas respectively and are aimed towards data science users, allowing them to interact with larger datasets. Operations on these dask collections produce task graphs which are recipes to compute the desired result using many smaller computations that each fit in memory. For example if we want to sum a trillion numbers then we might break the numbers into million element chunks, sum those, and then sum the sums. A previously impossible task becomes a million and one easy ones.
On the right there are schedulers. Schedulers execute task graphs in different situations, usually in parallel. Notably there are a few schedulers for a single machine, and a new prototype for a distributed scheduler.
In the center is the directed acyclic graph. This graph serves as glue between collections and schedulers. The dask graph format is simple and doesn’t include any dask classes; it’s just functions, dicts, and tuples and so is easy to build on and low-tech enough to understand immediately. This separation is very useful to dask during development; improvements to one side immediately affect the other and new developers have had surprisingly little trouble. Also developers from a variety of backgrounds have been able to come up to speed in about an hour.
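To give a feel for how low-tech the format is, here is a complete, if tiny, dask graph for the "sum the chunks, then sum the sums" idea from above, executed with the threaded scheduler. This toy example is mine, not from the post; it uses nothing beyond dicts, tuples, and ordinary functions:

```python
from dask.threaded import get

dsk = {
    'chunk-0': [0, 1, 2],        # plain data can sit directly in the graph
    'chunk-1': [3, 4, 5],
    'chunk-2': [6, 7, 8],
    'sum-0': (sum, 'chunk-0'),   # a task is just a tuple: (function, arguments)
    'sum-1': (sum, 'chunk-1'),
    'sum-2': (sum, 'chunk-2'),
    'total': (sum, ['sum-0', 'sum-1', 'sum-2']),  # sum of the partial sums
}

print(get(dsk, 'total'))  # 36
```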
This separation is useful to other projects too. Directed acyclic graphs are popular today in many domains. By exposing dask’s schedulers publicly, other projects can bypass dask collections and go straight for the execution engine.
A flattering quote from a github issue:
dask has been very helpful so far, as it allowed me to skip implementing all of the usual graph operations. Especially doing the asynchronous execution properly would have been a lot of work.
Dask developers work closely with a few really amazing users:
1. Stephan Hoyer at Climate Corp has integrated dask.array into xray, a library to manage large volumes of meteorological data (and other labeled arrays).
2. Scikit image now includes an apply_parallel operation (github PR) that uses dask.array to parallelize image processing routines. (work by Blake Griffith)
3. Mariano Tepper, a postdoc at Duke, uses dask in his research on matrix factorizations. Mariano is also the primary author of the dask.array.linalg module, which includes efficient and stable QR and SVD for tall and skinny matrices. See Mariano’s paper on arXiv.
4. Finally I personally use dask on daily work related to the XData project. This tends to drive some of the newer features.
A few other groups pop up on github from time to time; I’d love to know more detail about how people use dask.
## What works and what doesn’t
Dask is modular. Each of the collections and each of the schedulers are effectively separate projects. These subprojects are at different states of development. Knowing the stability of each subproject can help you to determine how you use and depend on dask.
Dask.array and dask.threaded work well, are stable, and see constant use. They receive relatively minor bug reports which are dealt with swiftly.
Dask.bag and dask.multiprocessing undergo more API churn but are mostly ready for public use with a couple of caveats. Neither dask.dataframe nor dask.distributed is ready for public use; they undergo significant API churn and have known errors.
## Current work
The current state of development as I see it is as follows:
1. Dask.bag and dask.dataframe are progressing nicely. My personal work depends on these modules, so they see a lot of attention.
• At the moment I focus on grouping and join operations through fast shuffles; I hope to write about this problem soon.
• The Pandas API is large and complex. Reimplementing a subset of it in a blocked way is straightforward but also detailed and time consuming. This would be a great place for community contributions.
2. Dask.distributed is new. It needs its tires kicked, but it’s an exciting development.
• For deployment we’re planning to bootstrap off of IPython parallel, which already has decent coverage of many parallel job systems (see #208 by Blake)
3. Dask.array development these days focuses on outreach. We’ve found application domains where dask is very useful; we’d like to find more.
4. The collections (Array, Bag, DataFrame) don’t cover all cases. I would like to start finding uses for the task schedulers in isolation. They serve as a release valve in complex situations.
You can install dask with conda
conda install dask
or with pip
pip install dask
## May 18, 2015
### Titus Brown
#### Graph alignment and variant calling
There's an interesting and intuitive connection between error correction and variant calling - if you can do one well, it lets you do (parts of) the other well. In the previous blog post on some new features in khmer, we introduced our new "graphalign" functionality, that lets us align short sequences to De Bruijn graphs, and we discussed how we use it for error correction. Now, let's try it out for some simple variant calling!
Graphalign can potentially be used for variant calling in a few different ways - by mapping reads to the reference graph and then using a pileup approach, or by error correcting reads against the graph with a tunable threshold for errors and then looking to see where all the reads disagree - but I've become enamored of an approach based on the concept of reference-guided assembly.
The essential idea is to build a graph that contains the information in the reads, and then "assemble" a path through the graph using a reference sequence as a guide. This has the advantage of looking at the reads only once (to build a DBG, which can be done in a single pass), and also potentially being amenable to a variety of heuristics. (Like almost all variant calling, it is limited by the quality of the reference, although we think there are probably some ways around that.)
## Basic graph-based variant calling
Implementing this took a little bit of extra effort beyond the basic read aligner, because we want to align past gaps in the graph. The way we implemented this was to break the reference up into a bunch of local alignments, align each independently, and then stitch them together.
Again, we tried to keep the API simple. After creating a ReadAligner object,
aligner = khmer.ReadAligner(graph, trusted_cutoff, bits_theta)
there's a single function that takes in the graph and the sequence (potentially genome/chr sized) to align:
score, alignment = align_long(graph, aligner, sequence)
What is returned is a score and an alignment object that gives us access to the raw alignment, some basic stats, and "variant calling" functionality - essentially, reporting of where the alignments are not identical. This is pretty simple to implement:
for n, (a, b) in enumerate(zip(graph_alignment, read_alignment)):
    if a != b:
        yield n, a, b
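To see it in action as a stand-alone function (illustrative only, not the khmer API):

def report_mismatches(graph_alignment, read_alignment):
    # yield (position, graph_base, other_base) wherever the two
    # aligned strings disagree
    for n, (a, b) in enumerate(zip(graph_alignment, read_alignment)):
        if a != b:
            yield n, a, b

print(list(report_mismatches("ATTTTGTAAG", "ATTTCGTAAG")))   # [(4, 'T', 'C')]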
The current implementation of the variant caller does nothing beyond reporting where an aligned sequence differs from the graph; this is kind of like guided assembly. In the future, the plan is to extend it with reference-free assembly.
To see this in action for a simulated data set, look at the file sim.align.out -- we get alignments like this, highlighting mismatches:
ATTTTGTAAGTGCTCTATCCGTTGTAGGAAGTGAAAGATGACGTTGCGGCCGTCGCTGTT
|||||||||||||||||||| |||||||||||||||||||||||||||||||||||||||
ATTTTGTAAGTGCTCTATCCCTTGTAGGAAGTGAAAGATGACGTTGCGGCCGTCGCTGTT
(Note that the full alignment shows there's a bug in the read aligner at the ends of graphs. :)
It works OK for whole-genome bacterial stuff, too. If we take an E. coli data set (the same one we used in the semi-streaming paper) and just run the reads against the known reference genome, we'll get 74 differences between the graph and the reference genome, out of 4639680 positions -- an identity of 99.998% (variants-ecoli.txt). On the one hand, this is not that great (consider that for something the size of the human genome, with this error rate we'd be seeing 50,000 false positives!); on the other hand, as with error correction, the whole analysis stack is surprisingly simple, and we haven't spent any time tuning it yet.
## Simulated variants, and targeted variant calling
With simulated variants in the E. coli genome, it does pretty well. Here, rather than changing up the genome and generating synthetic reads, we went with the same real reads as before, and instead changed the reference genome that we align against the read graph. This was done with the patch-ecoli.py script, which changes an A to a C at position 500,000, removes two bases at position 2m, and adds two bases at position 3m.
When we align the "patched" E. coli genome against the read graph, we indeed recover all three alignments (see variants-patched.txt) in the background of the same false positives we saw in the unaltered genome. So that's kind of nice.
What's even neater is that we can do targeted variant calling directly against the graph -- suppose, for example, that we're interested in just a few regions of the reference. With the normal mapping-based variant calling, you need to map all the reads first before querying for variants by location, because mapping requires the use of the entire reference. Here, you are already looking at all the reads in the graph form, so you can query just the regions you're interested in.
So, for example, here you can align just the patched regions (in ecoli-patched-segments.fa) against the read graph and get the same answer you got when aligning the entire reference (target ecoli-patched-segments.align.out). This works in part because we're stitching together local alignments, so there are some caveats in cases where different overlapping query sequences might lead to different optimal alignments - further research needed.
## Speed considerations
Once you've created the graph (which is linear time with respect to the number of reads), things are pretty fast. For the E. coli data set, it takes about 25 seconds to do a full reference-to-graph alignment on my Mac laptop. Much of the code is still written in Python, so with optimization we hope to get this under 5 seconds.
In the future, we expect to get much faster. Since the alignment is guided and piecewise, it should be capable of aligning through highly repetitive repeats and is also massively parallelizable. We think that the main bottleneck is going to be loading in the reads. We're working on optimizing the loading separately, but we're hoping to get down to about 8 hours for a full ~50x human genome variant calling with this method on a single CPU.
## Memory considerations
The memory is dominated by graph size, which in turn is dominated by the errors in short-read Illumina data. We have efficient ways of trimming some of these errors, and/or compressing down the data, even if we don't just correct them; the right approach will depend on details of the data (haploid? diploid? polyploid?) and will have to be studied.
For E. coli, we do the above variant calling in under 400 MB of RAM. We should be able to get that down to under 100 MB of RAM easily enough, but we will have to look into exactly what happens as we compress our graph down.
From the Minia paper, we can place some expectations on the memory usage for diploid human genome assembly. (We don't use cascading Bloom filters, but our approaches are approximately equivalent.) We believe we can get down to under 10 GB of RAM here.
As with most of our methods, this approach should work directly for variant calling on RNAseq and metagenomic data with little alteration. We have a variety of graph preparation methods (straight-up graph loading as well as digital normalization and abundance slicing) that can be applied to align to everything while favoring high-coverage reads, or only to high coverage, or to error-trimmed reads, or...
In effect, what we're doing is (rather boring) reference-guided assembly. Wouldn't it be nice if we extended it to longer indels, as in Holtgrewe et al., 2015? Yes, it would. Then we could ask for an assembly to be done between two points... This would enable the kinds of approaches that (e.g.) Rimmer et al., 2014 describe.
One big problem with this approach is that we're only returning positions in the reference where the graph has no agreement - this will cause problems when querying diploid data sets with a single reference, where we really want to know all variants, including heterozygous ones where the reference contains one of the two. We can think of several approaches to resolving this, but haven't implemented them yet.
A related drawback of this approach so far is that we have (so far) presented no way of representing multiple data sets in the same graph; this means that you can't align to many different data sets all at once. You also can't take advantage of things like the contiguity granted by long reads in many useful ways, nor can you do haplotyping with the long reads. Stay tuned...
## References and previous work
A number of people have done previous work on graph-based variant calling --
• Zam Iqbal and Mario Caccamo's Cortex paper was the first article to introduce me to this area. Since then, Zam's work as well as some of the work that Jared Simpson is doing on FM indices has been a source of inspiration.
(See especially Zam's very nice comment on our error correction post!)
• Heng Li's FermiKit does something very similar to what we're proposing to do, although it seems like he effectively does an assembly before calling variants. This has some positives and some negatives that we'll have to explore.
• Kimura and Koike (2015) do variant calling on a Burrows-Wheeler transform of short-read data, which is very similar to what we're doing.
• Using k-mers to find variation is nothing new. Two articles that caught my eye -- BreaKmer (Abo et al, 2015) and kSNP3 (Gardner et al., 2015) both do this to great effect.
• the GA4GH is working on graph-based variant calling, primarily for human. So far it seems like they are planning to rely on well curated genomes and variants; I'm going to be working with (much) poorer quality genomes, which may account for some differences in how we're thinking about things.
## Appendix: Running this code
The computational results in this blog post are Rather Reproducible (TM). Please see https://github.com/dib-lab/2015-khmer-wok2-vc/blob/master/README.rst for instructions on replicating the results on a virtual machine or using a Docker container.
## May 17, 2015
### Titus Brown
#### Read-to-graph alignment and error correction
One of the newer features in khmer that we're pretty excited about is the read-to-graph aligner, which gives us a way to align sequences to a De Bruijn graph; our nickname for it is "graphalign."
Briefly, graphalign uses a pair-HMM to align a sequence to a k-mer graph (aka De Bruijn graph) allowing both mismatches and indels, and taking into account coverage using a binary model (trusted and untrusted k-mers). The core code was written by Jordan Fish when he was a graduate student in the lab, based on ideas stemming from Jason Pell's thesis work on error correction. It was then refactored by Michael Crusoe.
Graphalign actually lets us do lots of things, including align both short and long sequences to DBG graphs, error correct, and call variants. We've got a simple Python API built into khmer, and we're working to extend it.
The core graphalign API is based around the concept of a ReadAligner object:
aligner = khmer.ReadAligner(graph, trusted_cov, bits_theta)
where 'graph' is a De Bruijn graph (implemented as a counting table in khmer), 'trusted_cov' defines what the trusted k-mer coverage is, and 'bits_theta' adjusts a scoring parameter used to extend alignments.
The 'aligner' object can be used to align short sequences to the graph:
score, graph_alignment, read_alignment, truncated = aligner.align(read)
Here, 'graph_alignment' and 'read_alignment' are strings; if 'truncated' is false, then they are of the same length, and constitute a full gapped alignment of the DNA sequence in 'read' to the graph.
The approach used by 'align' is to seed an alignment at the first trusted k-mer, and then extend the alignment along the graph in both directions. Thus, it's effectively a local aligner.
## Error correction
Our initial motivation for graphalign was to use it to do error correction, with specific application to short-read sequences. There was (and to some extent still is) a dearth of error correction approaches that can be used for metagenome and transcriptome data sets, and since that kind of data is what our lab works on, we needed an error correction approach for those data. We also wanted something a bit more programmable than the existing error correctors, which were primarily command-line tools; we've found a lot of value in building libraries, and wanted to use that approach here, too.
The basic idea is this: we build a graph from our short-read data, and then go back through and align each short read to the graph. A successful alignment is then the corrected read. The basic code looks like this:
graph = build_graph(dataset)
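followed by an alignment loop along these lines (a sketch: 'build_graph', 'dataset', 'trusted_cov', and 'bits_theta' are placeholders, the handling of truncated alignments is illustrative, and only the ReadAligner calls follow the graphalign API described above):

aligner = khmer.ReadAligner(graph, trusted_cov, bits_theta)

corrected = []
for read in dataset:                   # 'read' is a plain DNA string here
    score, graph_align, read_align, truncated = aligner.align(read)
    if not truncated:
        corrected.append(graph_align)  # the graph-side alignment is the corrected read
    else:
        corrected.append(read)         # alignment didn't span the read; keep it as-is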
In conjunction with our work on semi-streaming algorithms, we can directly convert this into a semi-streaming algorithm that works on genomes, metagenomes, and transcriptomes. This is implemented in the correct-reads script.
## Some results
If we try this out on a simulated data set (random genome, randomly chosen reads - see target compare-sim.txt in Makefile), it takes the simulated data from an error rate of around 1% to about 0.1%; see compare-sim.txt.
Applying this to a ~7m read subset of mRNAseq that we tackled in the semi-streaming paper (the data itself is from the Trinity paper, Grabherr et al, 2011), we take the data from an error rate of about 1.59% to 0.98% (see target rseq-compare.txt in Makefile). There are several reasons why this misses so many errors - first, error correction depends on high coverage, and much of this RNAseq data set is low coverage; second, this data set has a lot of errors; and third, RNAseq may have a broader k-mer abundance distribution than genomic sequencing.
One important side note: we use exactly the same script for error correcting RNAseq data as we do for genomic data.
## How good is the error correction?
tl; dr? It's pretty good but still worse than current methods. When we compare to Quake results on an E. coli data set (target compare-ecoli.txt in the Makefile), we see:
| Data set    | Error rate |
|-------------|------------|
| Uncorrected | 1.587%     |
| Quake       | 0.009%     |
| khmer       | 0.013%     |
This isn't too bad - two orders of magnitude decrease in error rate! - but we'd like to at least be able to beat Quake :).
(Note that here we do a fair comparison by looking only at errors on sequences that Quake doesn't discard; to get comparable results on your data with khmer, you'd also have to trim your reads. We are also making use of the approach developed in the streaming paper where we digitally normalize the graph in advance, in order to decrease the number of errors and the size of the graph.)
## Concluding thoughts
What attracts us to this approach is that it's really simple. The basic error correction is a few lines, although it's surrounded by a bunch of machinery for doing semi-streaming analysis and keeping pairing intact. (The two-pass/offline script for error correction is much cleaner, because it omits all of this machinery.)
It's also nice that this applies to all shotgun sequencing, not just genomic; that's a trivial extension of our semi-streaming paper.
We also suspect that this approach is quite tunable, although we are just beginning to investigate the proper way to build parameters for the pair-HMM, and we haven't nailed down the right coverage/cutoff parameters for error correction either. More work to be done!
In any case, there's also more than error correction to be done with the graphalign approach -- stay tuned!
## References and previous work
This is by no means novel - we're building on a lot of ideas from a lot of people. Our interest is in bridging from theory to practice, and providing a decent tunable implementation in an open-source package, so that we can explore these ideas more widely.
Here is short summary of previous work, surely incomplete --
• Much of this was proximally inspired by Jordan's work on Xander, software to do HMM-guided gene assembly from metagenomic data. (An accompanying paper has been accepted for publication; will blog about that when it hits.)
• More generally, my MSU colleague Yanni Sun has had several PhD students that have worked on HMMs and graph alignment, and she and her students have been great sources of ideas! (She co-advised Jordan.)
• BlastGraph, like Xander, built on the idea of graph alignment. It is the earliest reference I know of to graph alignment, but I haven't looked very hard.
• Yuzhen Ye and Haixu Tang at Indiana have developed very similar functionality that I became aware of when reviewing their nice paper on graph alignment for metatranscriptomics.
• Jared Simpson has been doing nice work on aligning Nanopore reads to a reference sequence. My guess is that the multiple sequence alignment approach described in Jonathan Dursi's blog post is going to prove relevant to us.
• The error corrector Coral (Salmela and Schroder, 2011) bears a strong philosophical resemblance to graphalign in its approach to error correction, if you think of a De Bruijn graph as a kind of multiple-sequence alignment.
## Appendix: Running this code
The computational results in this blog post are Rather Reproducible (TM). Please see https://github.com/dib-lab/2015-khmer-wok1-ec/blob/master/README.rst for instructions on replicating the results on a virtual machine or using a Docker container.
### Gaël Varoquaux
#### Software for reproducible science: let’s not have a misunderstanding
Note
tl;dr: Reproducibility is a noble cause and scientific software a promising vessel. But excess of reproducibility can be at odds with the housekeeping required for good software engineering. Code that “just works” should not be taken for granted.
This post advocates for a progressive consolidation effort of scientific code, rather than putting too high a bar on code release.
Titus Brown recently shared an interesting war story in which a reviewer refused to review a paper until he could run the code on his own files. Titus’s comment boils down to:
“Please destroy this software after publication”.
Note
Reproducible science: Does the emperor have clothes?
In other words, code for a publication is often not reusable. This point of view is very interesting from someone like Titus, who is a vocal proponent of reproducible science. His words triggered some surprised reactions, which led Titus to wonder if some folks in the reproducible science crowd live in a bubble. I was happy to see the discussion unroll, as I think that there is a strong risk of creating a bubble around reproducible science. Such a bubble will backfire.
## Replication is a must for science and society
Science advances by accumulating knowledge built upon observations. It’s easy to forget that these observations, and the corresponding paradigmatic conclusions, are not always as simple to establish as the fact that hot air rises: replicating the scientific process many times is what transforms a piece of evidence into a truth.
One striking example of scientific replication is the on-going effort in psychology to replay the evidence behind well-accepted findings central to current lines of thought in the psychological sciences. It implies setting up the experiments according to the seminal publications, acquiring the data, and processing it to come to the same conclusions. Surprisingly, not everything that was taken for granted holds.
Note
Findings later discredited backed economic policy
Another example, with massive consequences for Joe Average’s everyday life, is the failed replication of Reinhart and Rogoff’s “Growth in a Time of Debt” publication. The original paper, published in 2010 in the American Economic Review, claimed empirical findings linking high public debt to weak GDP growth. In a context of economic crisis, it was used by policy makers as a justification for restricted public spending. However, while pursuing a mere homework assignment to replicate these findings, a student uncovered methodological flaws in the paper. Understanding the limitations of the original study took a while, and discredited the academic backing for the economic doctrine of austerity. Critically, this analysis of the publication was possible only because Reinhart and Rogoff released their spreadsheet, with data and analysis details.
## Sharing code can make science reproducible
A great example of sharing code to make a publication reproducible is the recent paper on orthogonalization of regressors in fMRI models, by Mumford, Poline and Poldrack. The paper is a didactic refutation of non-justified data processing practices. The authors made their point much stronger by giving an IPython notebook to reproduce their figures. The recipe works perfectly here, because the ideas underlying the publication are simple and can be illustrated on synthetic data with relatively inexpensive computation. A short IPython notebook is all it takes to convince the reader.
Note
Sharing complex code… chances are it won’t run on new data.
At the other end of the spectrum, a complex analysis pipeline will not be as easy to share. For instance, a feat of strength such as Miyawaki et al’s visual image reconstruction from brain activity requires complex statistical signal processing to extract weak signatures. Miyawaki et al shared the data. They might share the code, but it would be a large chunk of code, probably fragile to changes in the environment (Matlab version, OS…). Chances are that it wouldn’t run on new data. This is the scenario that prompted Titus’s words:
“Please destroy this software after publication”.
I have good news: you can reproduce Miyawaki’s work with an example in nilearn, a library for machine learning on brain images. The example itself is concise, readable and it reliably produces figures close to that of the paper.
Note
Maintained libraries make feats of strength routinely reproducible.
This easy replication is only possible because the corresponding code leverages a set of libraries that encapsulate the main steps of the analysis, mainly scikit-learn and nilearn here. These libraries are tested, maintained and released. They enable us to go from a feat of strength to routine replication.
## Reproducibility is not sustainable for everything
Thinking is easy, acting is difficult — Goethe
Note
Keeping a physics apparatus running for replication years later?
I started my scientific career doing physics, and fairly “heavy” physics: vacuum systems, lasers, free-falling airplanes. In such settings, the cost of maintaining an experiment is apparent to the layman. No one is expected to keep an apparatus running for replication years later. The pinnacle of reproducible research is when the work becomes doable in a student lab. Such progress is often supported by improved technology, driven by wider applications of the findings.
However, not every experiment will give rise to a student lab. Replicating the others will not be easy. Even if the instruments are still around the lab, they will require setting up, adjusting and wiring. And chances are that connectors or cables will be missing.
Software is no different. Storing and sharing it is cheaper. But technology evolves very fast. Every setup is different. Code for a scientific paper has seldom been built for easy maintenance: lack of tests, profusion of exotic dependencies, nonexistent documentation. Robustness, portability, and isolation would be desirable, but they are difficult and costly.
Software developers know that understanding the constraints to design a good program requires writing a prototype. Code for a scientific paper is very much a prototype: it’s a first version of an idea, that proves its feasibility. Common sense in software engineering says that prototypes are designed to be thrown away. Prototype code is fragile. It’s untested, probably buggy for certain usage. Releasing prototypes amounts to distributing semi-functioning code. This is the case for most code accompanying a publication, and it is to be expected given the very nature of research: exploration and prototyping [1].
## No success without quality, …
Note
Highly-reliable is more useful than state-of-the-art.
My experience with scientific code has taught me that success requires quality. Having a good implementation of simple, well-known methods seems to matter more than doing something fancy. This is what the success of scikit-learn has taught us: we are really providing classic “old” machine learning methods, but with a good API, good docs, computational performance, and stable numerics controlled by stringent tests. There exist plenty of more sophisticated machine-learning methods, including some that I have developed specifically for my data. Yet, I find myself advising my co-workers to use the methods in scikit-learn, because I know that the implementation is reliable and that they will be able to use them [2].
This quality is indeed central to doing science with code. What good is a data analysis pipeline if it crashes when I fiddle with the data? How can I draw conclusions from simulations if I cannot change their parameters? As soon as I need trust in code supporting a scientific finding, I find myself tinkering with its input, and often breaking it. Good scientific code is code that can be reused, that can lead to large-scale experiments validating its underlying assumptions.
Sqlite is so much used that its developers have been woken up at night by users.
You might say that I am putting the bar too high; that slightly buggy code is more useful than no code. But I frown at the idea of releasing code for which I am unable to do proper quality assurance. I may have done too much of that in the past. And because I am a prolific coder, many people are using code that has been through my hands. My mailbox looks like a battlefield, and when I go to the coffee machine I find myself answering questions.
## … and making difficult choices
Note
Achieving quality requires making choices. Not only because time is limited, but also because the difficulty of maintaining and improving a codebase increases much faster than the number of features [3]. This phenomenon is actually frightening to watch: adding a feature in scikit-learn these days is much, much harder than it used to be in the early days. Interaction between features is a killer: when you modify something, something else unrelated breaks. For a given functionality, nothing makes the code more incomprehensible than cyclomatic complexity: the multiplicity of branching, if/then clauses, and for loops. This complexity naturally appears when supporting different input types, or minor variants of the same method.
The consequence is that ensuring quality for many variants of a method is prohibitive. This limit is a real problem for reproducible science, as science builds upon comparing and opposing models. However, ignoring it simply leads to code that fails to do what it claims to do. What this tells us is that if we really want long-term reproducibility, we need to identify successful and important research and focus our efforts on it.
If you agree with my earlier point that the code of a publication is a prototype, this iterative process seems natural. Various ideas can be thought of as competing prototypes. Some will not lead to publication at all, while others will end up having a high impact. Knowing beforehand is impossible. Focusing too early on achieving high quality is counterproductive. What matters is progressively consolidating the code.
## Reproducible science, a rich trade-off space
Note
Verbatim replication or reuse?
Does Reinhart and Rogoff’s “Growth in a Time of Debt” paper face the same challenges as the manuscript under review by Titus? One is describing mechanisms while the other is introducing a method. The code of the former is probably much simpler than that of the latter. Different publications come with different goals and code that is more or less easy to share. For verbatim replication of the analysis of a paper, a simple IPython notebook without tests or API is enough. To go beyond requires applying the analysis to different problems or data: reuse. Reuse is very difficult and cannot be a requirement for all publications.
Conventional wisdom in academia is that science builds upon ideas and concepts rather than methods and code. Galileo is known for his contribution to our understanding of the cosmos. Yet, methods development underpins science. Galileo also built his own, much-improved telescope, which was a huge technical achievement. He needed to develop it to back his cosmological theories. Today, Galileo’s measurements are easy to reproduce because telescopes are readily available as consumer products.
Standing on the shoulders of giants — Isaac Newton, on software libraries
[1] To make my point very clear, releasing buggy untested code is not a good thing. However, it is not possible to ask for all research papers to come with industrial-quality code. I am trying here to push for a collective, reasoned undertaking of consolidation.
[2] Theory tells us that there is no universal machine learning algorithm. Given a specific machine-learning application, it is always possible to devise a custom strategy that outperforms a generic one. However, do we need hundreds of classifiers to solve real-world classification problems? Empirical results [Delgado 2014] show that most of the benefits can be achieved with a small number of strategies. Is it desirable and sustainable to distribute and keep alive the code of every machine learning paper?
[3] Empirical studies on the workload for programmers to achieve a given task showed that a 25 percent increase in problem complexity results in a 100 percent increase in programming complexity: An Experiment on Unit Increase in Problem Complexity, Woodfield 1979.
I need to thank my colleague Chris Filo Gorgolewski and my sister Nelle Varoquaux for their feedback on this note.
## May 13, 2015
### Titus Brown
#### Adventures in replicable scientific papers: Docker
About a month ago, I took some time to try out Docker, a container technology that lets you bundle together, distribute, and execute applications in a lightweight Linux container. It seemed neat but I didn't apply it to any real problems. (Heng Li also tried it out, and came to some interesting conclusions -- note especially the packaging discussion in the comments.)
At the sprint, I decided to try building a software container for our latest paper submission on semi-streaming algorithms for DNA sequence analysis, but I got interrupted by other things. Part of the problem was that I had a tough time conceptualizing exactly what my use case for Docker was. There are a lot of people starting to use Docker in science, but so far only nucleotid.es has really demonstrated its utility.
Fast forward to yesterday, when I talked with Michael Crusoe about various ideas. We settled on using Docker to bundle together the software needed to run the full paper pipeline for the streaming paper. The paper was already highly replicable because we had used my lab's standard approach to replication (first executed three years ago!) This wasn't a terribly ambitious use of Docker but seemed like it could be useful.
In the end, it turned out to be super easy! I installed Docker on an AWS m3.xlarge, created a Dockerfile, and wrote up some instructions.
The basic idea we implemented is this:
• install all the software in a Docker container (only needs to be done once, of course);
• clone the repository on the host machine;
• copy the raw data into the pipeline/ sub-directory of the paper repository;
• run the docker container with the root of the paper repository (on the host, wherever it might be) bound to a standard location ('/paper') in the image;
• voila, raw data in, analyzed results out!
(The whole thing takes about 15 hours to run.)
## The value proposition of Docker for data-intensive papers
So what are my conclusions?
I get the sense that this is not really the way people are thinking about using Docker in science. Most of what I've seen has to do with workflows, and I get the sense that the remaining people are trying to avoid issues with software packaging. In this case, it simply didn't make sense to me to break our workflow steps for this paper out into different Docker images, since our workflow only depends on a few pieces of software that all work together well. (I could have broken out one bit of software, the Quake/Jellyfish code, but that was really it.)
I'm not sure how to think about the volume binding, either - I'm binding a path on the Docker container directly to a local disk, so the container isn't self-sufficient. The alternative was to package the data in the container, but in this case, it's 15-20 GB, which seemed like too much! This dependence on external data does limit our ability to deploy the container to compute farms though, and it also means that we can't put the container on the Docker hub.
The main value that I see for this container is in not polluting my work environment on machines where I can run Docker. (Sadly this does not yet include our HPC at MSU.) I could also use a Project Jupyter container to build our figures, and perhaps use a separate Latex container to build the paper... overkill? :).
One nice outcome of the volume binding is that I can work on the Makefile and workflow outside of the docker container, run it all inside the container, and then examine the artifacts outside of the container. (Is there a more standard way to do this?)
I also really like the explicit documentation of the install and execution steps. That's super cool and probably the most important bit for paper replication. The scientific world would definitely be a better place if the computational setup for data analysis and modeling components of papers came in a Dockerfile-style format! "Here's the software you need, and the command to run; put the data here and push the 'go' button!"
I certainly see the value of docker for running many different software packages, like nucleotid.es does. I think we should re-tool our k-mer counting benchmark paper to use containers to run each k-mer counting package benchmark. In fact, that may be my next demo, unless I get sidetracked by my job :).
## Next steps
I'm really intrigued by two medium-term directions -- one is the bioboxes-style approach for connecting different Docker containers into a workflow, and the other is the nucleotid.es approach for benchmarking software. If this benchmarking can be combined with github repos ("go benchmark the software in this github project!") then that might enable continuously running testing and benchmarks on a wide range of software.
Longer term, I'd like to have a virtual computing environment in which I can use my Project Jupyter notebook running in a Docker environment to quickly and easily spin up a data-intensive workflow involving N docker containers running on M machines with data flowing through them like so. I can already do this with AWS but it's a bit clunky; I foresee a much lighter-weight future for ultra-configurable computing.
In the shorter term, I'm hoping we can put some expectations in place for what dockerized paper replication pipelines might look like. (Hint: binary blobs should not be acceptable!) If we have big data sets, we probably don't want to put them on the Docker Hub; is the right solution to combine use of a data repository (e.g. figshare) with a docker container (to run all the software) and a tag in a github repository (for the paper pipeline/workflow)?
Now, off to review that paper that comes with a Docker container... :)
--titus
#### Modifications to our development process
After a fair amount of time thinking about software's place in science (see blog posts 1, 2, 3, and 4), and thinking about khmer's short- and long-term future, we're making some changes to our development process.
Semantic versioning: The first change, and most visible one, is that we are going to start bumping version numbers a lot faster. One of the first things Michael Crusoe put in place was semantic versioning, which places certain compatibility guarantees on version numbers used. These compatibility guarantees (on the command line API only, for khmer) are starting to hold us back from sanding down the corners. Moving forward, we're going to bump version numbers as quickly as needed for the code we've merged, rather than holding off on cleanup.
Michael just released khmer v1.4; my guess is that 2.0 will follow soon after. We'll try to batch major versions a little bit, but when in doubt we'll push forward rather than holding back, I think. We'll see how it goes.
Improving the command-line user experience. At the same time, we're going to be focusing more on user experience issues; see #988 for an example. Tamer Mansour, one of my new postdocs at Davis, took a fresh look at the command line and argued strenuously for a number of changes, and this aligns pretty well with our interests.
Giving more people explicit merge authority. 'til now, it was mostly Michael and myself doing merges; we've asked Luiz Irber and Camille Scott to step up and do not only code review but merges on their own recognizance. This should free up Michael to focus more on coding, as well as speeding up response times when Michael and I are both busy or traveling. I'm also asking mergers to fix minor formatting issues and update the ChangeLog for pull requests that are otherwise good - this will accelerate the pace of change and decrease frustration around quick fixes.
This is part of my long-term plan to involve more of the lab in software engineering. Most experimental labs have lab duties for grad students and postdocs; I'd like to try out the model where the grad students and postdocs have software engineering duties, independent of their research.
Deferring long-term plans and deprecating sprint/training efforts. We will defer our roadmap and decrease our sprint and training interactions. As a small project trying to get more funding, we can't afford the diversion of energy at this point. That having been said, both the roadmap planning and the sprints thus far were tremendously valuable for thinking ahead and making our contribution process more robust, and we hope to pursue both in the future.
Paying technical debt maintenance fees, instead of decreasing debt. We still have lots of issues that are burdening the codebase, especially at the Python and C++ interface levels, but we're going to ignore them for now and focus instead on adding new features (hopefully without increasing technical debt, note - we're keeping the code review and continuous integration and test coverage and ...). Again, we're a small project trying to get more funding... hard choices must be made.
I'm writing a grant now to ask for sustained funding on a ~5 year time scale, for about 3 employees - probably a software engineer / community manager, a super-postdoc/software engineer, and a grad student. If we can get another round of funding, we will reactivate the roadmap and think about how best to tackle technical debt.
--titus
p.s. Special thanks to Ethan White, Greg Wilson, and Neil Chue Hong for their input!
## May 08, 2015
### Titus Brown
#### My review of a review of "Influential Works in Data Driven Discovery"
I finally got a chance to more thoroughly read Mark Stalzer and Chris Mentzel's arxiv preprint, "A Preliminary Review of Influential Works in Data-Driven Discovery". This is a short review paper that discusses concepts highlighted by the 1,000+ "influential works" lists submitted to the Moore Foundation's Data Driven Discovery (DDD) Investigator Competition. (Note, I was one of the awardees.)
The core of this arxiv preprint is the section on "Clusters of Influential Works", in which Stalzer & Mentzel go in detail through the eight different concept clusters that emerged from their analysis of the submissions. This is a fascinating section that should be at the top of everyone's reading list. The topics covered are, in the order presented in the paper, as follows:
• Foundational theory, including Bayes' Theorem, information theory, and Metropolis sampling;
• Astronomy, and specifically the Sloan Digital Sky Survey;
• Genomics, focused around the Human Genome Project and methods for searching and analyzing sequencing data;
• Classical statistical methods, including the lasso, bootstrap methods, boosting, expectation-maximization, random forests, false discovery rate, and "isomap" (which I'd never heard of!);
• Machine learning, including Support Vector Machines, artificial Neural Networks (and presumably deep learning?), logistic belief networks, and hidden Markov models;
• The Google! Including PageRank, MapReduce, and "the overall anatomy" of how Google does things; specific implementations included Hadoop, BigTable, and Cloud DataFlow.
• General tools, programming languages, and computational methods, including Numerical Recipes, the R language, the IPython Notebook (Project Jupyter), the Visual Display of Quantitative Information, and SQL databases;
• Centrality of the Scientific Method (as opposed to specific tools or concepts). Here the discussion focused around the Fourth Paradigm book which lays out the expansion of the scientific method from empirical observation to theory to simulation to "big data science"; here, I thought the point that computers were used for both theory and observation was well-made. This section is particularly worth reading, in my opinion.
This collection of concepts is simply delightful - Stalzer and Mentzel provide both a summary of the concepts and a fantastic curated set of high-level references.
Since I don't know many of these areas that well (I've heard of most of the subtopics, but I'm certainly not expert in ... any of them? yikes) I evaluated the depth of their discussion by looking at the areas I was most familiar with - genomics and tools/languages/methods. My sense from this was that they covered the highlights of tools better than the highlights of genomics, but this may well be because genomics is a much larger and broader field at the moment.
## Data-Driven Discovery vs Data Science
One interesting question that comes up frequently is what the connection and overlap are between data-driven discovery, data science, big data, data analysis, computational science, etc. This paper provides a lot of food for thought and helps me draw some distinctions. For example, it's clear that computational science includes or at least overlaps with all of the concepts above, but computational science also includes things like modeling that I don't think clearly fit with the "data-driven discovery" theme. Similarly, in my experience "data science" encompasses tools and methods, along with intelligent application of them to specific problems, but practically speaking does not often integrate with theory and prediction. Likewise, "big data", in the sense of methods and approaches designed to scale to analysis and integration of large data sets, is clearly one important aspect of data-driven discovery - but only in the sense that in many cases more data seems to be better.
Ever since the "cage match" round of the Moore DDD competition, where we discussed these issues in breakout groups, I've been working towards the internal conclusion that data-driven discovery is the exploration and acceleration of science through development of new data science theory, methods, and tools. This paper certainly helps nail that down by summarizing the components of "data driven discovery" in the eyes of its practitioners.
## Is this a framework for a class or graduate training theme?
I think a lot about research training, in several forms. I do a lot of short-course peer instruction (e.g. Data Carpentry, Software Carpentry, and my DIB efforts); I've been talking with people about graduate courses and graduate curricula, with special emphasis on data science (e.g. the Data Science Initiative at UC Davis); and, most generally, I'm interested in "what should graduate students know if they want to work in data-driven discovery"?
From the training perspective, this paper lays out the central concepts that could be touched on either in a survey course or in an entire graduate program; while my sense is that a PhD would require coupling to a specific domain, I could certainly imagine a Master's program or a dual degree program that touched on the theory and practice of data driven discovery.
For one example, I would love to run a survey course on these topics, perhaps in the area of biology. Such a course could go through each of the subsections above, and discuss them in relation to biology - for example, how Bayes' Theorem is used in medicine, or how concepts from the Sloan Digital Sky Survey could be applied to genomics, or where Google-style infrastructure could be used to support research.
There's more than enough meat in there to have a whole graduate program, though. One or two courses could integrate theory and tools, another course could focus on practical application in a specific domain, a third course could talk about general practice and computing tools, and a fourth course could discuss infrastructure and scaling.
## The missing bits - "open science" and "training"
Something that I think was missing from the paper was an in-depth perspective on the role that open source, open data, and open science can play. While these concepts were directly touched on in a few of the subsections - most of the tools described were open source, for example, and Michael Nielsen's excellent book "Reinventing Discovery" was mentioned briefly in the context of network effects in scientific communication and access - I felt that "open science" was an unacknowledged undercurrent throughout.
It's clear that progress in science has always relied on sharing ideas, concepts, methods, theory, and data. What I think is not yet as clear to many is the extent to which practical, efficient, and widely available implementations of methods have become important in the computer age. And, for data-driven discovery, an increasingly critical aspect is the infrastructure to support data sharing, collaboration, and the application of these methods to large data sets. These two themes -- sharing of implementations and the importance of infrastructure -- cut across many of the subsections in this paper, including the specific domains of astronomy and human genomics, as well as the Google infrastructure and languages/tools/implementation subsections. I think the paper could usefully add a section on this.
Interestingly, the Moore Foundation DDD competition implicitly acknowledged this importance by enriching for open scientists in their selection of the awardees -- a surprising fraction of the Investigators are active in open science, including myself and Ethan White, and virtually all the Investigators are openly distributing their research methodology. In that sense, open science is a notable omission from the paper.
It's also interesting to note that training is missing from the paper. If you believe data-driven discovery is part of the future of science, then training is important because there's a general lack of researchers and institutions that cover these topics. I'd guess that virtually no one researcher is well versed in a majority of the topics, especially since many of the topics are entire scientific super-fields, and the rest are vast technical domains. In academic research we're kind of used to the idea that we have to work in collaboration (practice may be different...), but here academia really fails to cover the entire data-driven discovery spectrum because of the general lack of emphasis on expert use of tools and infrastructure in universities.
So I think that investment in training is where the opportunities lie for universities that want to lead in data-driven discovery, and this is the main chance for funders that want to enable the network effect.
## Training in open science, tools, and infrastructure as competitive advantages
Forward-thinking universities that are in it for the long game and interested in building a reputation in data-driven discovery might consider the following ideas:
• scientists trained in open science, tool use, and how to use existing infrastructure are more likely to be able to quickly take advantage of new data and methods.
• scientists trained in open science are more likely to produce results that can be built on.
• scientists trained in open science are more likely to produce useful data sets.
• scientists trained in open science and tool building are more likely to produce useful tools.
• funding agencies are increasingly interested in maximizing impact by requiring open source, open data, and open access.
All of these should lead to more publications, more important publications, a better reputation, and more funding.
In sum, I think investments in training in the most ignored bits of data-driven discovery (open science, computational tool use and development, and scalable infrastructure use and development) should be a competitive advantage for institutions. And, like most competitive advantages, those who ignore it will be at a significant disadvantage. This is also an opportunity for foundations to drive progress by targeted investments, although (since they are much more nimble than universities) they are already doing this to some extent.
In the end, what I like most about this paper is that it outlines and summarizes the concepts in which we need to invest in order to advance science through data-driven discovery. I think it's an important contribution and I look forward to its further development and ultimate publication!
--titus
## May 07, 2015
### Continuum Analytics
#### Continuum Analytics - May Tech Events
The Continuum team is gearing up for a summer full of conferences, including PyData Seattle, taking place July 24-26, hosted by Microsoft. But first we’ve got a few May conferences to keep an eye out for, all over the globe! Join us in Austin, Argentina, Berlin, and Boston this month.
## May 06, 2015
### Abraham Escalante
#### My GSoC experience
Hello all,
My name is Abraham Escalante and I'm a mexican software engineer. The purpose of this blog is to relate my experiences and motivations to participate in the 2015 Google Summer of Code.
I am not much of a blogger (in fact, this is my first blog entry ever) but if you got here, then chances are you are interested in either the GSoC, the personal experience of a GSoCer or maybe we have a relationship of some sort and you have a personal interest (I'm looking at you Hélène). Either way, I will do my best to walk you through my experience with the hope that this may turn out to be useful for someone in the future, be it to help you get into the GSoC programme or just to get to know me a little better if you find that interesting enough.
I have some catching up to do because this journey started for me several months ago. The list of selected student proposals has already been published (**spoiler alert** I got selected) and the coding period will start in about three weeks time but for now I just wanted to write a first entry to get the ball rolling and so you get an idea of what you can expect, should you choose to continue reading these blog entries. I will begin my storytelling soon.
Cheers,
Abraham.
## May 05, 2015
### Titus Brown
#### A workshop report from the May 2015 non-model RNAseq workshop at UC Davis
We just finished teaching the second of my RNAseq workshops at UC Davis -- the fifth workshop I've hosted since I took a faculty position here in VetMed. In order, we've done a Train the Trainers, a Data Carpentry, a reference-guided RNAseq assembly workshop, a mothur (microbial ecology) workshop, and a de novo RNAseq assembly workshop -- you can see all of the links at the Data Intensive Biology Training Program Web site. This workshop was the May de novo mRNAseq assembly workshop, which I co-taught with Tamer Mansour and Camille Scott.
The workshops are still maturing, and I'm trying to figure out how to keep this going for the medium term, but so far I think we're doing an OK job. We can always improve the material and the delivery, but I think at least we're on a good trajectory.
This workshop (and the many excellent questions raised by the attendees) reminded me how much of RNAseq analysis is still research -- it's not just a question of what assembler and quantification method to use, but much more fundamental questions of data evaluation, assembly evaluation, and how to tie this into the biology you're trying to do. My lab works on this a lot, and too much of the time we have to say "we just don't know" - often because the experts don't agree, or because the answer is just unknown.
I also despair sometimes that the energy and effort we're putting into this isn't enough. There is a huge demand, and these two day workshops are at best a stopgap measure, and I really have no idea whether they're going to help biologists starting from scratch to analyze their own data.
I do have other arrows in my quiver. Once my lab "lands" at Davis (sometime between June and September) I expect to start up a biology "data space" of some sort, where every week people who have been through one of my workshops can come work on their data analysis; the hope is that, much like the Davis R Users Group, we can start to build a community around biological data analysis. Stay tuned.
I'm also planning to start running more advanced workshops. One great idea that Tamer pitched to me this morning was to run a follow-on workshop entitled "publishing your transcriptome", which would focus on quality measures and analysis downstream of your first-blush transcriptome assembly/annotation/quantification. I'm also hoping to put together an "automation and reproducibility" workshop in the fall, along with a variety of more focused workshops on specific platforms and questions.
And, of course, we'll continue running the intro workshops. In addition to the mRNASeq workshops, in the fall I'd like to do workshops on microbial genome assembly and annotation, metagenome and metatranscriptome assembly, and advanced UCSC genome browser use/misuse (think assembly hubs etc.).
--titus
### Juan Nunez-Iglesias
#### jnuneziglesias
I use Twitter favourites almost exclusively to mark posts that I know will be useful in some not-too-distant future; kind of like a Twitter Evernote. Recently I was looking through my list in search of this excellent blog post detailing how to build cross-platform binary distributions for conda.
I came across two other tweets from the EuroSciPy 2014 conference: this one by Ian Ozsvald about his IPython memory usage profiler, right next to this one by Alexandre Chabot about Aaron O’Leary’s notedown. I’d forgotten that this was how I came across these two tools, but since then I have contributed code to both (1, 2). I’d met Ian at EuroSciPy 2013, but I’ve never met Aaron; nevertheless, there is my code in the latest version of his notedown library.
How remarkable the open-source Python community has become. Talks from Python conferences are posted to YouTube, usually as the conference is happening. (Add to that plenty of live tweeting.) Thus, even when I can’t attend the conferences, I can keep up with the latest open source libraries, from the other side of the world. And then I can grab the source code on GitHub, fiddle with it to my heart’s content, and submit a pull request to the author with my changes. After a short while, code that I wrote for my own utility is available for anyone else to use through PyPI or conda.
My point is: join us! Make your code open source, and conversely, when you need some functionality, don’t reinvent the wheel. See if there’s a library that almost meets your needs, and contribute!
## April 28, 2015
### Matthieu Brucher
#### Book review: scikit-learn Cookbook
There are now a few books on scikit-learn, for instance a general one on machine learning systems, and a cookbook. I was a technical reviewer for the first one, and now I’m reviewing the cookbook.
#### Content and opinions
A cookbook is a collection of recipes; it is not intended to help you understand how your oven works. It is the same for this book: it won’t help you install your oven or set it up, and you will have to know how to install the required packages.
It will help you decide what tool to use for which problem. It is complementary to the tutorials and the gallery on the scikit-learn website, as it adds some thoughts on what each algorithm does and where to pay attention. Whereas Building Machine Learning Systems in Python is quite broad and goes from installation to specific algorithms, this book tries to cover more algorithms, with explanations of what you are doing, but with less depth, and it is more or less focused only on scikit-learn.
#### Conclusion
If you know a little bit about machine learning and Python, a cookbook may be more appropriate than a more “vertical” book. As such, this book covers quite a bit of the scikit, with some useful tips. But as it doesn’t go into much detail, you will still need to check your data and parameters against a book like Bishop’s Pattern Recognition and Machine Learning.
## April 26, 2015
### Titus Brown
#### Proposal: Integrating the OSF into Galaxy as a remote data store
Note - this was an internal funding request solicited by the Center for Open Science. It's been funded!
Brief: We propose to integrate OSF into Galaxy as a data store. For this purpose, we request 3 months of funding (6 months, half-time) for one developer, plus travel.
Introduction and summary: Galaxy is a commonly used open source biomedical/biological sequence data analysis platform that enables biologists to put together reproducible pipelines and execute analyses locally or in the cloud. Galaxy has a robust and sophisticated Web-based user interface for setting up these pipelines and analyzing data. One particular challenge for Galaxy is that on cloud instances, data storage and publication must be done using local filesystems and remote URLs, which adds a significant amount of complexity for biologists interested in doing reproducible computing. Recently, Galaxy gained a data abstraction layer that permits object stores to be used instead of local filesystems. The Center for Open Science’s Open Science Framework (OSF), in turn, is a robust platform for storing, manipulating, and sharing scientific data, and provides APIs for accessing such data; the OSF can also act as a broker for accessing and managing remote data stores, on e.g. cloud providers. Integrating the OSF’s object store into Galaxy would let Galaxy use OSF for data persistence and reproducibility, and would let Galaxy users take advantage of OSF’s data management interface, APIs, and authentication to expand their reproducible biomedical science workflows. This integration would also rigorously test and exercise newly developed functionality in both Galaxy and the OSF, providing valuable use cases and testing.
Our “stretch” goal would be to expand beyond Galaxy and work with Project Jupyter/IPython Notebook’s data abstraction layer to provide an OSF integration for Project Jupyter.
We note with enthusiasm that all groups mentioned here are robust participants in the open source/open science ecosystem, and all projects are full open source projects with contributor guidelines and collaboration workflows!
Broader impacts: If successful, the proposed project addresses several broader issues. First, the OSF would have an external consumer of its APIs for data access, which would drive the maturation of these APIs with use cases. Second, the OSF would expand to support connections with a visible project in a non-psychology domain, giving COS a proof-of-concept demonstration for expansion into new communities. Third, the Galaxy biomedical community would gain connections to the OSF’s functionality, which would help in execution, storage, and publication of biomedical data analyses. Fourth, the Brown Lab would then be able to explore further work to build their Moore-DDD-funded data analysis portal on top of both Galaxy and the OSF, leveraging the functionality of both projects to advance open science and reproducibility. Even a partial failure would be informative by exposing faults in the OSF or Galaxy public APIs and execution models, which could then be addressed by the projects individually. This project would also serve as a “beta test” of the COS as an incubator of open science software projects.
Longer-term outcomes: the Brown Lab and the COS are both interested in exploring the OSF as a larger hub for data storage for workflow execution, teaching and training in data-intensive science, and hosting the reproducible publications. This proposed project is a first step in those directions.
#### Popping the open source/open science bubble.
One of the things that became clear to me over the last two weeks is just how much of an open source/open science bubble my blog and Twitter commenters live in. Don't take that as a negative -- I'm in here with you, and it's a great place to live :). But it's still a bubble.
Two specific points brought this home to me.
First, a lot of the Twitter and blog commentary on Please destroy this software after publication. kthxbye. expressed shock and dismay that I would be OK with non-OSS software being published. (Read Mick Watson's blog post and Kai Blin's comment.) Many really good reasons why I was wrong were brought up, and, well, I have to say it was terrifically convincing and I'm going to change my own policy as a reviewer. So far, so good. But it turns out that only a few journals require an actual open source license (Journal of Open Research Software and Journal of Statistical Software). So there is a massive disparity between what some of my tweeps (and now me) believe, and what is codified practice.
Second, many eloquent points were made about software as a major product and enabler of research -- see especially the comments on "software as communication" and "software as experimental design" by others (linked to here - see "Software as..." section). These points were very convincing as well, although I'm still trying to figure out how exactly to evolve my own views. And yet here again I think we can be quite clear that most biologists and perhaps even some bioinformaticians would have either no considered opinion on software, or be outright dismissive of the idea that software itself is intellectual output. Again, very different from what the people on Twitter and my blog think.
I was already pretty surprised with how strong the case was for open source software as a requirement (go read the links above). I was even more surprised with how eloquently and expansively people defended the role of software in research. Many, many strong arguments were put forth.
So, how do we evolve current practice??
But first...
## If software is so important, software is fair game for peer review
I promise this wasn't a stealth goal of my original blog post but people realize that an obvious conclusion here is that software is fully fair game for in-depth peer review, right? (Never mind that most scientists probably aren't capable of doing good peer review of code, or that any reasonably strong code review requirements would mean that virtually no more software would be published - an effective but rather punitive way to ensure only good software is published in science :)
A few weeks back I received a response to my review of an application note, and the senior author objected strenuously to my reviewing their actual software in any way. It really pissed me off, frankly -- I was pretty positive about their packaged software and made some suggestions for how they could improve its presentation to others, and basically got back a punch to the nose asking how dare I make such suggestions. As part of my own rather intemperate response, I said:
This is an application note. The application itself is certainly fair game for review...
How much angrier would this person have been if I'd rejected the paper because I actually had comments on edge cases in the source code??
Two years ago now we had another big eruption ("big" in the Twitter sense, at least) around code review. A year even before that I proposed optional review criteria for bioinformatics papers that my students, at least, have started to use to do reviews.
In all that time very little has changed. There are three objections that I've heard in these last three years that bear up over time --
First, scientists neither know how to review code nor how to write reasonable code; this would lead at best to inconsistency in reviews, or at worst simply lead to a massive waste of time.
Second, I am not aware of any code review guidelines or standards for scientific code. Code review in industry has at least some basic good practices; code review in science is a different beast.
Third, code review can be used to unfairly block publication. This came up again recently (READ THAT COMMENT) and I think it's a great reason to worry about code review as a way to block publication. I still don't know how to deal with this but we need some guidelines for editors.
The bottom line is that if software is fair game for peer review, then we need a trained and educated body of reviewers - just as we do for molecular methods, biological sequencing, and statistics. This will inevitably involve the evolution of the community of practice around both software generation (s...l...o...w...l...y... happening) and software peer review (<envision birds chirping in the absence of conversation>).
(One solution I think I'm going to try is this: I'm going to ask the Software Carpentry community for a volunteer to do code review for every computational paper I edit, and I will provide suggested (optional) guidelines. Evil? Maybe so. Effective? I hope so.)
## We need some guidelines and position papers.
Of the discussion around computation as a primary research product, Dan Katz asked,
"I wonder if a collaborative paper on this would find a home somewhere?"
Yes. To break out of the bubble, I think we need a bunch of position papers and guidelines on this sort of thing, frankly. It's clear to me that the online community has a tremendous amount of wisdom to offer, but we are living in a bubble, and we need to communicate outside of that -- just as the open access and open data folk are.
One important note: we need simple, clear, minimum requirements, with broadly relevant justifications. Otherwise we will fail to convince or be useful to anyone, including our own community.
A few ideas:
• We need a clear, concise, big-tent writeup of "why software is important, and why it should be OSS and reviewed when published";
• We need to discuss good minimum requirements in the near term for code review, and figure out what some end goals are;
• We need some definitions of what "responsible conduct of computational research" looks like (Responsible Conduct of Research is a big thing in the US, now; I think it's a useful concept to employ here).
• We need some assessment metrics (via @kaythaney) that disentangle "responsible conduct of research" (a concept that nobody should disagree with) from "open science" (which some people disagree with :).
and probably a bunch of other things... what else do we need, and how should we move forward?
--titus
### Filipe Saraiva
#### Cantor in KDE Applications 15.04
The KDE Applications 15.04 release brings a new version of the scientific programming software Cantor, with a lot of news. I am especially happy with this release because I worked on several parts of these new features. =)
Come with me™ and let’s see what is new in Cantor.
## Cantor ported to Qt5/KF5
Cantor Qt5/KF5 + Breeze theme. In the image it is possible to see the terminal/worksheet, variable management panel, syntax highlighting, code completion, and the standard interface
I started the Cantor port to Qt5/KF5 during the previous LaKademy and continued the development throughout the year. I have probably pushed code from 5 different countries since the beginning of this work.
The port to this new technology was completed successfully, and for the moment we have not noticed any missing features or new critical bugs. All the backends and plugins were ported, and some new bugs introduced during this work were fixed.
We would like to ask Cantor users to report any problems or bugs in Bugzilla. In any case, the software is really very stable.
When you run the Cantor Qt5/KF5 version for the first time, the software will look for Cantor Qt4 configurations and, if they exist, they will be automagically migrated to Cantor Qt5/KF5.
## Backend for Python 3
In Season of KDE 2014 I was the mentor of Minh Ngo in the project to create a backend for Python 3, increasing the number of backends in Cantor to 10!
Backend selection screen: Python 3 and their 9 brothers
The backend developed by Minh uses the D-Bus protocol for the communication between Cantor and Python 3. This architecture is different from the Python 2 backend, but it is present in other backends, such as the backend for R.
The cool thing is that Cantor can now be interesting for pythonistas using Python 2 and/or Python 3. We would like to get your feedback, guys!
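To give a flavor of the D-Bus design mentioned above, here is a minimal sketch (mine, not Cantor's actual backend code) of a Python 3 process exposing an execute method on the session bus. The service name, object path, and method are made up, and it assumes the dbus-python and PyGObject packages are available.

# Hypothetical sketch of a Python 3 "kernel" reachable over D-Bus.
# Cantor's real backend is more elaborate; all names here are illustrative.
import io
import contextlib
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

BUS_NAME = "org.example.CantorPython3"   # illustrative service/interface name

class PythonSession(dbus.service.Object):
    def __init__(self, bus):
        super().__init__(bus, "/Session")
        self.namespace = {}

    @dbus.service.method(BUS_NAME, in_signature="s", out_signature="s")
    def execute(self, code):
        # Run a chunk of code and hand captured stdout back to the client.
        buffer = io.StringIO()
        try:
            with contextlib.redirect_stdout(buffer):
                exec(code, self.namespace)
        except Exception as error:
            return "error: %s" % error
        return buffer.getvalue()

if __name__ == "__main__":
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    dbus.service.BusName(BUS_NAME, bus)
    PythonSession(bus)
    GLib.MainLoop().run()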
## Icon!
Cantor's first release was in 2009, with KDE SC 4.4. Since then the software has not had an icon.
The Cantor Qt5/KF5 release marks a substantial change in the development of the application, so it is also a good time to release an icon for the software.
Cantor icon
The art is excellent! It presents the idea of Cantor: a blackboard where you write and develop your equations and formulas while scratching your head and thinking "now, what do I need to do to solve this?". =)
Thank you Andreas Kainz and Uri Herrera, members of the VDG team and authors of the Cantor icon!
## Other changes and bug fixes
Most bugs introduced in the Qt5/KF5 port were fixed before the release.
Some other small changes are worth mentioning: in the KNewStuff categories, the "Python2" category was changed to "Python 2" and a "Python 3" category was added; the automatic loading of the pylab module in the Python backends was dropped; it is now possible to run Python commands mixed with comments in the worksheet; and more.
You can see a complete log of the commits, bugfixes, and new features added in this release on this page.
## Future works
As for future work, the highest priority at the moment is to drop KDELibs4Support from Cantor. Lucas developed part of this work and we would like to finish it for the next release.
I intend to test whether D-Bus communication can be a good solution for the Scilab backend as well. Another task is to redesign the graphical generation assistants of the Python backends. A longer-term task is to follow the development of the Jupyter project, the future of IPython notebooks. If Cantor can be compatible with Jupyter, it will be really nice for users and will encourage collaboration between the different communities interested in scientific programming and open science.
I will take advantage of the Cantor Qt5/KF5 release to write about how to use Cantor in two different ways: the Matlab way and the IPython notebooks way. Keep an eye on updates from this blog! =)
If you would like to help with Cantor development, please contact me or mail the kde-edu mailing list and let's talk about bug fixes, development of new features, and more.
## Donations to KDE Brasil – LaKademy 2015!
If you would like to support my work, please make a donation to KDE Brasil. We will host the KDE Latin-American Summit – LaKademy, and we need some money to bring Latin American contributors together to work face-to-face. I will focus my LaKademy work on the future tasks mentioned above.
You can read more about LaKademy in this dot.KDE story. This page in English explains how to donate; there is another page with the same content in Spanish.
## April 23, 2015
### Titus Brown
#### More on scientific software
So I wrote this thing that got an awful lot of comments, many telling me that I'm just plain wrong. I think it's impossible to respond comprehensively :). But here are some responses.
## What is, what could be, and what should be
In that blog post, I argued that software shouldn't be considered a primary output of scientific research. But I completely failed to articulate a distinction between what we do today with respect to scientific software, what we could be doing in the not-so-distant future, and what we should be doing. Worse, I mixed them all up!
Peer reviewed publications and grants are the current coin of the realm. When we submit papers and grants for peer review, we have to deal with what those reviewers think right now. In bioinformatics, this largely means papers get evaluated on their perceived novelty and impact (even in impact-blind journals). Software papers are generally evaluated poorly on these metrics, so it's hard to publish bioinformatics software papers in visible places, and it's hard to argue in grants to the NIH (and most of the biology-focused NSF) that pure software development efforts are worthwhile. This is what is, and it makes it hard for methods+software research to get publications and funding.
Assuming that you agree that methods+software research is important in bioinformatics, what could we be doing in the near distant future to boost the visibility of methods+software? Giving DOIs to software is one way to accrue credit to software that is highly used, but citations take a long time to pile up, reviewers won't know what to expect in terms of numbers (50 citations? is that a lot?), and my guess is that they will be poorly valued in important situations like funding and advancement. It's an honorable attempt to hack the system and software DOIs are great for other purposes, but I'm not optimistic about their near- or middle-term impact.
We could also start to articulate values and perspectives to guide reviewers and granting systems. And this is what I'd like to do. But first, let me rant a bit.
I think people underestimate the hidden mass in the scientific iceberg. Huge amounts of money are spent on research, and I would bet that there are at least twenty thousand PI-level researchers around the world in biology. In biology-related fields, any of these people may be called upon to review your grant or your paper, and their opinions will largely be, well, their own. To get published, funded, or promoted, you need to convince some committee containing these smart and opinionated researchers that what you're doing is both novel and impactful. To do that, you have to appeal largely to values and beliefs that they already hold.
Moreover, this set of researchers - largely made of people who have reached tenured professor status - sits on editorial boards, funding agency panels, and tenure and promotion committees. None of these boards and funding panels exist in a vacuum, and while to some extent program managers can push in certain directions, they are ultimately beholden to the priorities of the funding agency, which are (in the best case) channeled from senior scientists.
If you wonder why open access took so damn long to happen, this is one reason - the cultural "mass" of researchers that needs to shift their opinions is huge and unwieldy and resistant to change. And they are largely invisible, and subject to only limited persuasion.
One of the most valuable efforts we can make is to explore what we should be doing, and place it on a logical and sensical footing, and put it out there. For example, check out the CRA's memo on best practices in Promotion and Tenure of Interdisciplinary Faculty - great and thoughtful stuff, IMO. We need a bunch of well thought out opinions in this vein. What guidelines do we want to put in place for evaluating methods+software? How should we evaluate methods+software researchers for impact? When we fund software projects, what should we be looking for?
And that brings me to what we should be doing, which is ultimately what I am most interested in. For example, I must admit to deep confusion about what a maturity model for bioinformatics software should look like; this feeds into funding requests, which ultimately feeds into promotion and tenure. I don't know how to guide junior faculty in this area either; I have lots of opinions, but they're not well tested in the marketplace of ideas.
I and others are starting to have the opportunity to make the case for what we should be doing in review panels; what case should we make?
It is in this vein, then, that I am trying to figure out what value to place on software itself, and I'm interested in how to promote methods+software researchers and research. Neil Saunders had an interesting comment that I want to highlight here: he said,
My own feeling is that phrases like "significant intellectual contribution" are just unhelpful academic words,
I certainly agree that this is an imprecise concept, but I can guarantee that in the US, this is one of the three main questions for researchers at hiring, promotion, and tenure. (Funding opportunities and fit are my guesses for the other two.) So I would push on this point: researchers need to appear to have a clear intellectual contribution at every stage of the way, whatever that means. What it means is what I'm trying to explore.
## Software is a tremendously important and critical part of the research endeavor
...but it's not enough. That's my story, and I'm sticking to it :).
I feel like the conversation got a little bit sidetracked by discussions of Nobel Prizes (mea partly culpa), and I want to discuss PhD theses instead. To get a PhD, you need to do some research; if you're a bioinformatics or biology grad student who is focused on methods+software, how much of that research can be software, and what else needs to be there?
And here again I get to dip into my own personal history.
I spent 9 years in graduate school. About 6 years into my PhD, I had a conversation with my advisor that went something like this:
Me, age ~27 - "Hey, Eric, I've got ~two first-author papers, and another one or two coming, along with a bunch of papers. How about I defend my PhD on the basis of that work, and stick around to finish my experimental work as a postdoc?"
Eric - blank look "All your papers are on computational methods. None of them count for your PhD."
Me - "Uhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhmmmmmmmmmm..."
(I did eventually graduate, but only after three more years of experiments.)
In biology, we have to be able to defend our computational contributions in the face of an only slowly changing professoriate. And I'm OK with that, but I think we should make it clear up front.
Since then, I've graduated three (soon to be five, I hope!) graduate students, one in biology and two in CS. In every single case, they've done a lot of hacking. And in every single case they've been asked to defend their intellectual contribution. This isn't just people targeting my students - I've sat on committees where students have produced masses of experimental data, and if they weren't prepared to defend their experimental design, their data interpretation, and the impact and significance of their data interpretation, they weren't ready to defend. This is a standard part of the PhD process at Caltech, at MSU, and presumably at UC Davis.
So: to successfully receive a PhD, you should have to clearly articulate the problem you're tackling, its place in the scientific literature, the methods and experiments you're going to use, the data you got, the interpretation you place on that data, and the impact of their results on current scientific thinking. It's a pretty high bar, and one that I'm ok with.
One of the several failure modes I see for graduate students is the one where graduate students spend a huge amount of time developing software and more or less assume that this work will lead to a PhD. Why would they be thinking that?
• Their advisor may not be particularly computational and may be giving poor guidance (which includes poorly explained criteria).
• Their advisor may be using them (intentionally or unintentionally) - effective programmers are hard to find.
• The grad student may be resistant to guidance.
I ticked all of these as a graduate student, but I had the advantage of being a 3rd-generation academic, so I knew the score. (And I still ran into problems.) In my previous blog post, I angered and upset some people with my blunt words (I honestly didn't think "grad student hacker fallacy" was so rude ;( ), but it's a real problem that I confront regularly.
Computational PhD students need to do what every scientific PhD student needs to do: clearly articulate their problem, place it in the scientific literature, define the computational methods and experiments they're going to do/have done, explain the data and their interpretation of it, and explore how it impacts science. Most of this involves things other than programming and running software! It's impossible to put down percent effort estimates that apply broadly, but my guess is that PhD students should spend at least a year understanding their results and interpreting and explaining their work.
Conveniently, however, once you've done that for your PhD, you're ready to go in the academic world! These same criteria (expanded in scope) apply to getting a postdoc, publishing as a postdoc, getting a faculty position, applying for grants, and getting tenure. Moreover, I believe many of the same criteria apply broadly to research outside of academia (which is one reason I'm still strongly +1 on getting a PhD, no matter your ultimate goals).
(Kyle Cranmer's comment on grad student efforts here was perfect.)
## Software as...
As far as software being a primary product of research -- Konrad Hinsen nails it. It's not, but neither are papers, and I'm good with both statements :). Read his blog post for the full argument. The important bit is that very little stands on its own; there always needs to be communication effort around software, data, and methods.
Ultimately, I learned a lot by admitting confusion! Dan Katz and Konrad Hinsen pointed out that software is communication, and Kai Blin drew a great analogy between software and experimental design. These are perspectives that I hadn't seen said so clearly before and they've really made me think differently; both are interesting and provocative analogies and I'm hoping that we can develop them further as a community.
## How do we change things?
Kyle Cranmer and Rory Kirchner had a great comment chain on broken value systems and changing the system. I love the discussion, but I'm struggling with how to respond. My tentative and mildly unhappy conclusion is that I may have bought into the intellectual elitism of academia a bit too much (see: third generation academic), but this may also be how I've gotten where I am, so... mixed bag? (Rory made me feel old and dull, too, which is pretty cool in a masochistic kind of way.)
One observation is that, in software, novelty is cheap. It's very, very easy to tweak something minorly, and fairly easy to publish it without generating any new understanding. How do we distinguish a future Heng Li or an Aaron Quinlan (who have enabled new science by cleanly solving "whole classes of common problems that you don't even have to think about anymore") from humdrum increment, and reward them properly in the earlier stages of their career? I don't know, but the answer has to be tied to advancing science, which is hard to measure on any short timescale. (Sean Eddy's blog post has the clearest view on solutions that I've yet seen.)
Another observation (nicely articulated by Daisie Huang) is that (like open data) this is another game theoretic situation, where the authors of widely used software sink their time and energy into the community but don't necessarily gain wide recognition for their efforts. There's a fat middle ground of software that's reasonably well used but isn't samtools, and this ecosystem needs to be supported. This is much harder to argue - it's a larger body of software, it's less visible, and it's frankly much more expensive to support. (Carl Boettiger's comment is worth reading here.) The funding support isn't there, although that might change in the next decade. (This is the proximal challenge for me, since I place my own software, khmer, in this "fat middle ground"; how do I make a clear argument for funding?)
Kyle Cranmer and others pointed to some success in "major instrumentation" and methods-based funding and career paths in physics (help, can't find link/tweets!). This is great, but I think it's also worth discussing the overall scale of things. Physics has a few really big and expensive instruments, with a few big questions, and with thousands of engineers devoted to them. Just in sequencing, biology has thousands (soon millions) of rather cheap instruments, devoted to many thousands of questions. If my prediction that software will "eat" this part of the world becomes true, we will need tens of thousands of data intensive biologists at a minimum, most working to some large extent on data analysis and software. I think the scale of the need here is simply much, much larger than in physics.
I am supremely skeptical of the idea that universities as we currently conceive of them are the right home for stable, mature software development. We either need to change universities in the right way (super hard) or find other institutions (maybe easier). Here, the model to watch may well be the Center for Open Science, which produces the Open Science Framework (among others). My interpretation is that they are trying to merge scientific needs with the open source development model. (Tellingly, they are doing so largely with foundation funding; the federal funding agencies don't have good mechanisms for funding this kind of thing in biology, at least.) This may be the right model (or at least on the path towards one) for sustained software development in the biological sciences: have an institution focused on sustainability and quality, with a small diversity of missions, that can afford to spend the money to keep a number of good software engineers focused on those missions.
Thanks, all, for the comments and discussions!
--titus
## April 22, 2015
### Gaël Varoquaux
#### MLOSS: machine learning open source software workshop @ ICML 2015
Note
This year again we will have an exciting workshop on the leading-edge machine-learning open-source software. This subject is central to many, because software is how we propagate, reuse, and apply progress in machine learning.
Want to present a project? The deadline for the call for papers is Apr 28th, in a few days : http://mloss.org/workshop/icml15/
The workshop will be held at the ICML conference, in Lille, France, on July 10th. ICML –International Conference on Machine Learning– is the leading venue for academic research in machine learning. It’s a fantastic place to hold such a workshop, as the actors of theoretical progress are all around. Software is the bridge that brings this progress beyond papers.
There is a long tradition of MLOSS workshops, with one every year and a half. Last time, at NIPS 2013, I could feel a bit of a turning point, as people started feeling that different software slotted together to create an efficient and state-of-the-art working environment. For this reason, we have entitled this year’s workshop ‘open ecosystems’, stressing that contributions in the scope of the workshop, that build a thriving work environment, are not only machine learning software, but also better statistics or numerical tools.
We have two keynotes with important contributions to such ecosystems:
• John Myles White (Facebook), lead developer of Julia statistics and machine learning: “Julia for machine learning: high-level syntax with compiled-code speed”
• Matthew Rocklin (Continuum Analytics), developer of Python computational tools, in particular Blaze (confirmed): “Blaze, a modern numerical engine with out-of-core and out-of-order computations”.
There will be also a practical presentation on how to set up an open-source project, discussing hosting, community development, quality assurance, license choice, by yours truly.
## April 21, 2015
### Titus Brown
#### Is software a primary product of science?
Update - I've written Yet Another blog post, More on scientific software on this topic. I think this blog post is a mess so you should read that one first ;).
This blog post was spurred by a simple question from Pauline Barmby on Twitter. My response didn't, ahem, quite fit in 144 characters :).
First, a little story. (To paraphrase Greg Wilson, "I tell a lot of stories. Some of them aren't true. But this one is!")
When we were done writing Best Practices for Scientific Computing, we tried submitting it to a different high-profile journal than the one that ultimately accepted it (PLoS Biology, where it went on to become the most highly read article of 2014 in PLoS Biology). The response from the editor went something like this: "We recognize the importance of good engineering, but we regard writing software as equivalent to building a telescope - it's important to do it right, but we don't regard a process paper on how to build telescopes better as an intellectual contribution." (Disclaimer: I can't find the actual response, so this is a paraphrase, but it was definitely a "no" and for about that reason.)
## Is scientific software like instrumentation?
When I think about scientific software as a part of science, I inevitably start with its similarities to building scientific instruments. New instrumentation and methods are absolutely essential to scientific progress, and it is clear that good engineering and methods development skills are incredibly helpful in research.
So, why did the editors at High Profile Journal bounce our paper? I infer that they drew exactly this parallel and thought no further.
But scientific software is only somewhat like new methods or instrumentation.
First, software can spread much faster and be used much more like a black box than most methods, and instrumentation inevitably involves either construction or companies that act as middlemen. With software, it's like you're shipping kits or plans for 3-D printing - something that is as close to immediately usable as it comes. If you're going to hand someone an immediately usable black box (and pitch it as such), I would argue that you should take a bit more care in building said black box.
Second, complexity in software scales much faster than in hardware (citation needed). This is partly due to human nature & a failure to think long-term, and partly due to the nature of software - software can quickly have many more moving parts than hardware, and at much less (short term) cost. Frankly, most software stacks resemble massive Rube Goldberg machines (read that link!) This means that different processes are needed here.
Third, at least in my field (biology), we are undergoing a transition to data intensive research, and software methods are becoming ever more important. There's no question that software is going to eat biology just like it's eating the rest of the world, and an increasingly large part of our primary scientific output in biology is going to hinge directly on computation (think: annotations. 'nuff said).
If we're going to build massively complex black boxes that under-pin all of our science, surely that means that the process is worth studying intellectually?
## Is scientific software a primary intellectual output of science?
No.
I think concluding that it is is an example of the logical fallacy "affirming the consequent" - or, "confusion of necessity and sufficiency". I'm not a logician, but I would phrase it like this (better phrasing welcome!) --
Good software is necessary for good science. Good science is an intellectual contribution. Therefore good software is an intellectual contribution.
Hopefully when phrased that way it's clear that it's nonsense.
I'm naming this "the fallacy of grad student hackers", because I feel like it's a common failure mode of grad students that are good at programming. I actually think it's a tremendously dangerous idea that is confounding a lot of the discussion around software contributions in science.
To illustrate this, I'll draw the analog to experimental labs: you may have people who are tremendously good at doing certain kinds of experiments (e.g. expert cloners, or PCR wizards, or micro-injection aficionados, or WMISH bravados) and with whom you can collaborate to rapidly advance your research. They can do things that you can't, and they can do them quickly and well! But these people often face dead ends in academia and end up as eterna-postdocs, because (for better or for worse) what is valued for first authorship and career progression is intellectual contribution, and doing experiments well is not sufficient to demonstrate an intellectual contribution. Very few people get career advancement in science by simply being very good at a technique, and I believe that this is OK.
Back to software - writing software may become necessary for much of science but I don't think it should ever be sufficient as a primary contribution. Worse, it can become (often becomes?) an engine of procrastination. Admittedly, that procrastination leads to things like IPython Notebook, so I don't want to ding it, but neither are all (or even most ;) grad students like Fernando Perez, either.
## Let's admit it, I'm just confused
This leaves us with a conundrum.
Software is clearly a force multiplier - "better software, better research!"
However, I don't think it can be considered a primary output of science. Dan Katz said, "Nobel prizes have been given for inventing instruments. I'm eagerly awaiting for one for inventing software [sic]" -- but I think he's wrong. Nobels have been given because of the insight enabled by inventing instruments, not for inventing instruments. (Corrections welcome!) So while I, too, eagerly await the explicit recognition that software can push scientific insight forward in biology, I am not holding my breath - I think it's going to look much more like the 2013 Chemistry Nobel, which is about general computational methodology. (My money here would be on a Nobel in Medicine for genome assembly methods, which should follow on separately from massively parallel sequencing methods and shotgun sequencing - maybe Venter, Church, and Myers/Pevzner deserve three different Nobels?)
Despite that, we do need to incentivize it, especially in biology but also more generally. Sean Eddy wrote AN AWESOME BLOG POST ON THIS TOPIC in 2010 (all caps because IT'S AWESOME AND WHY HAVEN'T WE MOVED FURTHER ON THIS <sob>). This is where DOIs for software usually come into play - hey, maybe we can make an analogy between software and papers! But I worry that this is a flawed analogy (for reasons outlined above) and will simply support the wrong idea that doing good hacking is sufficient for good science.
We also have a new problem - the so-called Big Data Brain Drain, in which it turns out that the skills that are needed for advancing science are also tremendously valuable in much more highly paid jobs -- much like physics number crunchers moving to finance, research professors in biology face a future where all our grad students go on to make more than us in tech. (Admittedly, this is only a problem if we think that more people clicking on ads is more important than basic research.) Jake Vanderplas (the author of the Big Data Brain Drain post) addressed potential solutions to this in Hacking Academia, about which I have mixed feelings. While I love both Jake and his blog post (platonically), there's a bit too much magical thinking in that post -- I don't see (m)any of those solutions getting much traction in academia.
The bottom line for me is that we need to figure it out, but I'm a bit stuck on practical suggestions. Natural selection may apply -- whoever figures this out in biology (basic research institutions and/or funding bodies) will have quite an edge in advancing biomedicine -- but natural selection works across multiple generations, and I could wish for something a bit faster. But I don't know. Maybe I'll bring it up at SciFoo this year - "Q: how can we kill off the old academic system faster?" :)
I'll leave you with two little stories.
## The problem, illustrated
In 2009, we started working on what would ultimately become Pell et al., 2012. We developed a metric shit-ton of software (that's a scientific measure, folks) that included some pretty awesomely scalable sparse graph labeling approaches. The software worked OK for our problem, but was pretty brittle; I'm not sure whether or not our implementation of this partitioning approach is being used by anyone else, nor am I sure if it should be :).
However, the paper has been a pretty big hit by traditional scientific metrics! We got it into PNAS by talking about the data structure properties and linking physics, computer science, and biology together. It helped lead directly to Chikhi and Rizk (2013), and it has been cited a whole bunch of times for (I think) its theoretical contributions. Yay!
Nonetheless, the incredibly important and tricky details of scalably partitioning 10 bn node graphs were lost from that paper, and the software was not a big player, either. Meanwhile, Dr. Pell left academia and moved on to a big software company where (on his first day) he was earning quite a bit more than me (good on him! I'd like a 5% tithe, though, in the future :) :). Trust me when I say that this is a net loss to academia.
Summary: good theory, useful ideas, lousy software. Traditional success. Lousy outcomes.
## A contrapositive
In 2011, we figured out that linear compression ratios for sequence data simply weren't going to cut it in the face of the continued rate of data generation, and we developed digital normalization, a deceptively simple idea that hasn't really been picked up by the theoreticians. Unlike the Pell work above, it's not theoretically well studied at all. Nonetheless, the preprint has a few dozen citations (because it's so darn useful) and the work is proving to be a good foundation for further research for our lab. Perhaps the truest measure of its memetic success is that it's been reimplemented by at least three different sequencing centers.
The software is highly used, I think, and many of our efforts on the khmer software have been aimed at making diginorm and downstream concepts more robust.
Summary: lousy theory, useful ideas, good software. Nontraditional success. Awesome outcomes.
## Ways forward?
I simply don't know how to chart a course forward. My current instinct (see below) is to shift our current focus much more to theory and ideas and further away from software, largely because I simply don't see how to publish or fund "boring" things like software development. (Josh Bloom has an excellent blog post that relates to this particular issue: Novelty Squared)
I've been obsessing over these topics of software and scientific focus recently (see The three porridge bowls of scientific software development and Please destroy this software after publication. kthxbye) because I'm starting to write a renewal for khmer's funding. My preliminary specific aims look something like this:
Aim 1: Expand low memory and streaming approaches for biological sequence analysis.
Aim 2: Develop graph-based approaches for analyzing genomic variation.
Aim 3: Optimize and extend a general purpose graph analysis library
Importantly, everything to do with software maintenance, support, and optimization is in Aim 3 and is in fact only a part of that aim. I'm not actually saddened by that, because I believe that software is only interesting because of the new science it enables. So I need to sell that to the NIH, and there software quality is (at best) a secondary consideration.
On the flip side, by my estimate 75% of our khmer funding is going to software maintenance, most significantly in paying down our technical debt. (In the grant I am proposing to decrease this to ~50%.)
I'm having trouble justifying this dichotomy mentally myself, and I can only imagine what the reviewers might think (although hopefully they will only glance at the budget ;).
So this highlights one conundrum: given my estimates and my priorities, how would you suggest I square these stated priorities with my funding allocations? And, in these matters, have I been wrong to focus on software quality, or should I have focused instead on accruing technical debt in the service of novel ideas and functionality? Inquiring minds want to know.
--titus
### Matthew Rocklin
#### Profiling Data Throughput
This work is supported by Continuum Analytics and the XDATA Program as part of the Blaze Project
Disclaimer: This post is on experimental/buggy code.
tl;dr We measure the costs of processing semi-structured data like JSON blobs.
## Semi-structured Data
Semi-structured data is ubiquitous and computationally painful. Consider the following JSON blobs:
{'name': 'Alice', 'payments': [1, 2, 3]}
{'name': 'Bob', 'payments': [4, 5]}
{'name': 'Charlie', 'payments': None}
This data doesn’t fit nicely into NumPy or Pandas and so we fall back to dynamic pure-Python data structures like dicts and lists. Python’s core data structures are surprisingly good, about as good as compiled languages like Java, but dynamic data structures present some challenges for efficient parallel computation.
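For instance, a back-of-the-envelope pass over the three blobs above with nothing but dicts and lists might look like this (my sketch, not from the original post):

records = [{'name': 'Alice', 'payments': [1, 2, 3]},
           {'name': 'Bob', 'payments': [4, 5]},
           {'name': 'Charlie', 'payments': None}]

# Total payments per person, treating a missing payment list as empty.
totals = {r['name']: sum(r['payments'] or []) for r in records}
print(totals)   # {'Alice': 6, 'Bob': 9, 'Charlie': 0}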
## Volume
Semi-structured data is often at the beginning of our data pipeline and so often has the greatest size. We may start with 100GB of raw data, reduce to 10GB to load into a database, and finally aggregate down to 1GB for analysis, machine learning, etc., 1kB of which becomes a plot or table.
|                       | Data Bandwidth (MB/s) | In Parallel (MB/s) |
|-----------------------|-----------------------|--------------------|
| Disk I/O              | 500                   | 500                |
| Decompression         | 100                   | 500                |
| Deserialization       | 50                    | 250                |
| In-memory computation | 2000                  | oo                 |
| Shuffle               | 9                     | 30                 |
Common solutions for large semi-structured data include Python iterators, multiprocessing, Hadoop, and Spark as well as proper databases like MongoDB and ElasticSearch. Two months ago we built dask.bag, a toy dask experiment for semi-structured data. Today we’ll strengthen the dask.bag project and look more deeply at performance in this space.
We measure performance with data bandwidth, usually in megabytes per second (MB/s). We’ll build intuition for why dealing with this data is costly.
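Concretely, every number below comes from the same pattern: run an operation once, then divide the bytes it touched by the elapsed wall time. A small helper in that spirit (a sketch of mine; the post itself just does the division inline):

import time

def bandwidth_mb_s(nbytes, func, *args):
    # Run func once and report MB of input processed per second of wall time.
    start = time.time()
    func(*args)
    elapsed = time.time() - start
    return nbytes / elapsed / 1e6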
## Dataset
As a test dataset we play with a dump of GitHub data from https://www.githubarchive.org/. This data records every public github event (commit, comment, pull request, etc.) in the form of a JSON blob. This data is fairly representative of a broader class of problems. Often people want to do fairly simple analytics, like find the top ten committers to a particular repository, or clean the data before they load it into a database.
We’ll play around with this data using dask.bag. This is both to get a feel for what is expensive and to provide a cohesive set of examples. In truth we won’t do any real analytics on the github dataset; we’ll find that the expensive parts come well before analytic computation.
Items in our data look like this:
>>> import json
>>> path = '/home/mrocklin/data/github/2013-05-0*.json.gz'
({u'actor': u'mjcramer',
u'actor_attributes': {u'gravatar_id': u'603762b7a39807503a2ee7fe4966acd1',
u'type': u'User'},
u'created_at': u'2013-05-01T00:01:28-07:00',
u'payload': {u'master_branch': u'master',
u'ref': None,
u'ref_type': u'repository'},
u'public': True,
u'repository': {u'created_at': u'2013-05-01T00:01:28-07:00',
u'description': u'',
u'fork': False,
u'forks': 0,
u'has_issues': True,
u'has_wiki': True,
u'id': 9787210,
u'master_branch': u'master',
u'name': u'settings',
u'open_issues': 0,
u'owner': u'mjcramer',
u'private': False,
u'pushed_at': u'2013-05-01T00:01:28-07:00',
u'size': 0,
u'stargazers': 0,
u'url': u'https://github.com/mjcramer/settings',
u'watchers': 0},
u'type': u'CreateEvent',
u'url': u'https://github.com/mjcramer/settings'},)
## Disk I/O and Decompression – 100-500 MB/s
|                                         | Data Bandwidth (MB/s) |
|-----------------------------------------|-----------------------|
| Parallel read from disk with gzip.open  | 500                   |
A modern laptop hard drive can theoretically read data from disk to memory at 800 MB/s. So we could burn through a 10GB dataset in fifteen seconds on our laptop. Workstations with RAID arrays can do a couple GB/s. In practice I get around 500 MB/s on my personal laptop.
In [1]: import json
In [2]: import dask.bag as db
In [3]: from glob import glob
In [4]: path = '/home/mrocklin/data/github/2013-05-0*.json.gz'
In [5]: %time compressed = '\n'.join(open(fn).read() for fn in glob(path))
CPU times: user 75.1 ms, sys: 1.07 s, total: 1.14 s
Wall time: 1.14 s
In [6]: len(compressed) / 1.14 / 1e6  # MB/s
Out[6]: 508.5912175438597
To reduce storage and transfer costs we often compress data. This requires CPU effort whenever we want to operate on the stored values. This can limit data bandwidth.
In [7]: import gzip
In [8]: %time total = '\n'.join(gzip.open(fn).read() for fn in glob(path))
CPU times: user 12.2 s, sys: 18.7 s, total: 30.9 s
Wall time: 30.9 s
In [9]: len(total) / 30.9 / 1e6 # MB/s total bandwidth
Out[9]: 102.16563844660195
In [10]: len(compressed) / 30.9 / 1e6 # MB/s compressed bandwidth
Out[10]: 18.763559482200648
So we lose some data bandwidth through compression. Where we could previously process 500 MB/s we’re now down to only 100 MB/s. If we count bytes in terms of the amount stored on disk then we’re only hitting 18 MB/s. We’ll get around this with multiprocessing.
## Decompression and Parallel processing – 500 MB/s
Fortunately we often have more cores than we know what to do with. Parallelizing reads can hide much of the decompression cost.
In [12]: import dask.bag as db
In [13]: %time nbytes = db.from_filenames(path).map(len).sum().compute()
CPU times: user 130 ms, sys: 402 ms, total: 532 ms
Wall time: 5.5 s
In [14]: nbytes / 5.5 / 1e6
Out[14]: 573.9850932727272
Dask.bag infers that we need to use gzip from the filename. Dask.bag currently uses multiprocessing to distribute work, allowing us to reclaim our 500 MB/s throughput on compressed data. We also could have done this with multiprocessing, straight Python, and a little elbow-grease.
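For reference, the multiprocessing-and-elbow-grease version might look roughly like this (a sketch, assuming the same file layout as above; the pool size is illustrative):

import gzip
from glob import glob
from multiprocessing import Pool

def decompressed_size(fn):
    # Decompress one file and report how many bytes it held.
    with gzip.open(fn) as f:
        return len(f.read())

if __name__ == '__main__':
    path = '/home/mrocklin/data/github/2013-05-0*.json.gz'
    pool = Pool(processes=8)
    nbytes = sum(pool.map(decompressed_size, glob(path)))
    pool.close()
    print('%.0f MB decompressed' % (nbytes / 1e6))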
## Deserialization – 30 MB/s
Once we decompress our data we still need to turn bytes into meaningful data structures (dicts, lists, etc.). Our GitHub data comes to us as JSON. This JSON contains various encodings and bad characters so, just for today, we’re going to punt on bad lines. Converting JSON text to Python objects explodes out in memory a bit, so we’ll consider a smaller subset for this part, a single day.
In [20]: def loads(line):
...          try: return json.loads(line)
...          except: return None
In [21]: path = '/home/mrocklin/data/github/2013-05-01-*.json.gz'
In [22]: lines = list(db.from_filenames(path))
In [23]: %time blobs = list(map(loads, lines))
CPU times: user 10.7 s, sys: 760 ms, total: 11.5 s
Wall time: 11.3 s
In [24]: len(total) / 11.3 / 1e6
Out[24]: 33.9486321238938
In [25]: len(compressed) / 11.3 / 1e6
Out[25]: 6.2989179646017694
So in terms of actual bytes of JSON we can only convert about 30MB per second. If we count in terms of the compressed data we store on disk then this looks more bleak at only 6 MB/s.
### This can be improved by using faster libraries – 50 MB/s
The ultrajson library, ujson, is pretty slick and can improve our performance a bit. This is what Pandas uses under the hood.
In [28]: import ujson

In [29]: def loads(line):
...          try: return ujson.loads(line)
...          except: return None
In [30]: %time blobs = list(map(loads, lines))
CPU times: user 6.37 s, sys: 1.17 s, total: 7.53 s
Wall time: 7.37 s
In [31]: len(total) / 7.37 / 1e6
Out[31]: 52.05149837177748
In [32]: len(compressed) / 7.37 / 1e6
Out[32]: 9.657771099050203
### Or through Parallelism – 150 MB/s
This can also be accelerated through parallelism, just like decompression. It’s a bit cumbersome to show parallel deserialization in isolation. Instead we’ll show all of them together. This will under-estimate performance but is much easier to code up.
In [33]: %time db.from_filenames(path).map(loads).count().compute()
CPU times: user 32.3 ms, sys: 822 ms, total: 854 ms
Wall time: 2.8 s
In [38]: len(total) / 2.8 / 1e6
Out[38]: 137.00697964285717
In [39]: len(compressed) / 2.8 / 1e6
Out[39]: 25.420633214285715
## Mapping and Grouping - 2000 MB/s
|                            | Data Bandwidth (MB/s) |
|----------------------------|-----------------------|
| Simple Python operations   | 1400                  |
| Complex CyToolz operations | 2600                  |
Once we have data in memory, Pure Python is relatively fast. Cytoolz moreso.
In [55]: %time set(d['type'] for d in blobs)
CPU times: user 162 ms, sys: 123 ms, total: 285 ms
Wall time: 268 ms
Out[55]:
{u'CommitCommentEvent',
u'CreateEvent',
u'DeleteEvent',
u'FollowEvent',
u'ForkEvent',
u'GistEvent',
u'GollumEvent',
u'IssueCommentEvent',
u'IssuesEvent',
u'MemberEvent',
u'PublicEvent',
u'PullRequestEvent',
u'PullRequestReviewCommentEvent',
u'PushEvent',
u'WatchEvent'}
In [56]: len(total) / 0.268 / 1e6
Out[56]: 1431.4162052238805
In [57]: import cytoolz
In [58]: %time _ = cytoolz.groupby('type', blobs) # CyToolz FTW
CPU times: user 144 ms, sys: 0 ns, total: 144 ms
Wall time: 144 ms
In [59]: len(total) / 0.144 / 1e6
Out[59]: 2664.024604166667
So slicing and logic are essentially free. The cost of compression and deserialization dominates actual computation time. Don’t bother optimizing fast per-record code, especially if CyToolz has already done so for you. Of course, you might be doing something expensive per record. If so then most of this post isn’t relevant for you.
## Shuffling - 5-50 MB/s
|                                    | Data Bandwidth (MB/s) |
|------------------------------------|-----------------------|
| Naive groupby with on-disk Shuffle | 25                    |
| Clever foldby without Shuffle      | 250                   |
For complex logic, like full groupbys and joins, we need to communicate large amounts of data between workers. This communication forces us to go through another full serialization/write/deserialization/read cycle. This hurts. And so, the single most important message from this post: avoid shuffling semi-structured data if you possibly can -- structure your data early and push it into a proper store instead (see "Use a Database" below).
That being said, people will inevitably ignore this advice so we need to have a not-terrible fallback.
In [62]: %time dict(db.from_filenames(path)
...              .map(loads)
...              .groupby('type')
...              .map(lambda (k, v): (k, len(v))))
CPU times: user 46.3 s, sys: 6.57 s, total: 52.8 s
Wall time: 2min 14s
Out[62]:
{'CommitCommentEvent': 17889,
'CreateEvent': 210516,
'DeleteEvent': 14534,
'FollowEvent': 35910,
'ForkEvent': 67939,
'GistEvent': 7344,
'GollumEvent': 31688,
'IssueCommentEvent': 163798,
'IssuesEvent': 102680,
'MemberEvent': 11664,
'PublicEvent': 1867,
'PullRequestEvent': 69080,
'PullRequestReviewCommentEvent': 17056,
'PushEvent': 960137,
'WatchEvent': 173631}
In [63]: len(total) / 134 / 1e6 # MB/s
Out[63]: 23.559091
This groupby operation goes through the following steps:
1. Read bytes from disk
2. Decompress GZip
3. Deserialize with ujson
4. Do in-memory groupbys on chunks of the data
5. Reserialize with msgpack (a bit faster)
6. Append group parts to disk
7. Read in new full groups from disk
8. Deserialize msgpack back to Python objects
9. Apply length function per group
Some of these steps have great data bandwidths, some less-so. When we compound many steps together our bandwidth suffers. We get about 25 MB/s total. This is about what pyspark gets (although today pyspark can parallelize across multiple machines while dask.bag can not.)
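As a rough sanity check on why compounding hurts: serial stages combine roughly like resistors in series. Using the single-core numbers from the table at the top of the post (my arithmetic, not the post's):

stages = {'disk': 500, 'decompress': 100, 'deserialize': 50}   # MB/s each
compound = 1.0 / sum(1.0 / b for b in stages.values())
print(round(compound, 1))   # ~31 MB/s before any shuffle or write-back costs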
Disclaimer, the numbers above are for dask.bag and could very easily be due to implementation flaws, rather than due to inherent challenges.
>>> import pyspark
>>> sc = pyspark.SparkContext('local[8]')
>>> rdd = sc.textFile(path)
>>> dict(rdd.map(loads)
...            .keyBy(lambda d: d['type'])
... .groupByKey()
... .map(lambda (k, v): (k, len(v)))
... .collect())
I would be interested in hearing from people who use full groupby on BigData. I’m quite curious to hear how this is used in practice and how it performs.
## Creative Groupbys - 250 MB/s
Don’t use groupby. You can often work around it with cleverness. Our example above can be handled with streaming grouping reductions (see toolz docs.) This requires more thinking from the programmer but avoids the costly shuffle process.
In [66]: %time dict(db.from_filenames(path)
...              .map(loads)
...              .foldby('type', lambda total, d: total + 1, 0, lambda a, b: a + b))
Out[66]:
{'CommitCommentEvent': 17889,
'CreateEvent': 210516,
'DeleteEvent': 14534,
'FollowEvent': 35910,
'ForkEvent': 67939,
'GistEvent': 7344,
'GollumEvent': 31688,
'IssueCommentEvent': 163798,
'IssuesEvent': 102680,
'MemberEvent': 11664,
'PublicEvent': 1867,
'PullRequestEvent': 69080,
'PullRequestReviewCommentEvent': 17056,
'PushEvent': 960137,
'WatchEvent': 173631}
CPU times: user 322 ms, sys: 604 ms, total: 926 ms
Wall time: 13.2 s
In [67]: len(total) / 13.2 / 1e6 # MB/s
Out[67]: 239.16047181818183
We can also spell this with PySpark which performs about the same.
>>> dict(rdd.map(loads) # PySpark equivalent
... .keyBy(lambda d: d['type'])
... .combineByKey(lambda d: 1, lambda total, d: total + 1, lambda a, b: a + b)
... .collect())
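For completeness, the plain toolz spelling of the same streaming reduction (my sketch, reusing the blobs list deserialized earlier) uses reduceby:

from toolz import reduceby

counts = reduceby(lambda d: d['type'],          # group key: the event type
                  lambda total, d: total + 1,   # per-record binop: count
                  blobs,                        # iterable of deserialized blobs
                  0)                            # initial value for each group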
## Use a Database
By the time you’re grouping or joining datasets you probably have structured data that could fit into a dataframe or database. You should transition from dynamic data structures (dicts/lists) to dataframes or databases as early as possible. DataFrames and databases compactly represent data in formats that don’t require serialization; this improves performance. Databases are also very clever about reducing communication.
Tools like pyspark, toolz, and dask.bag are great for initial cleanings of semi-structured data into a structured format but they’re relatively inefficient at complex analytics. For inconveniently large data you should consider a database as soon as possible. That could be some big-data-solution or often just Postgres.
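As a rough sketch of that transition (records is a placeholder for an iterable of already-parsed event dictionaries, and the column names are illustrative), flattening into a pandas DataFrame keeps the data in a compact columnar form and makes the per-type count a one-liner:
import pandas as pd
# `records` stands in for parsed event dicts; pick only the columns you need.
df = pd.DataFrame.from_records(records, columns=['type', 'created_at'])
counts = df.groupby('type').size()  # no per-record Python objects in the hot loop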
## Better data structures for semi-structured data?
Dynamic data structures (dicts, lists) are overkill for semi-structured data. We don’t need or use their full power but we inherit all of their limitations (e.g. serialization costs.) Could we build something NumPy/Pandas-like that could handle the blob-of-JSON use-case? Probably.
DyND is one such project. DyND is a C++ project with Python bindings written by Mark Wiebe and Irwin Zaid and historically funded largely by Continuum and XData under the same banner as Blaze/Dask. It could probably handle the semi-structured data problem case if given a bit of love. It handles variable length arrays, text data, and missing values all with numpy-like semantics:
>>> from dynd import nd
>>> data = [{'name': 'Alice', # Semi-structured data
... 'location': {'city': 'LA', 'state': 'CA'},
... 'credits': [1, 2, 3]},
... {'name': 'Bob',
... 'credits': [4, 5],
... 'location': {'city': 'NYC', 'state': 'NY'}}]
>>> dtype = '''var * {name: string,
... location: {city: string,
... state: string[2]},
... credits: var * int}''' # Shape of our data
>>> x = nd.array(data, type=dtype) # Create DyND array
>>> x # Store compactly in memory
nd.array([["Alice", ["LA", "CA"], [1, 2, 3]],
["Bob", ["NYC", "NY"], [4, 5]]])
>>> x.location.city # Nested indexing
nd.array([ "LA", "NYC"],
type="strided * string")
>>> x.credits # Variable length data
nd.array([[1, 2, 3], [4, 5]],
type="strided * var * int32")
>>> x.credits * 10 # And computation
nd.array([[10, 20, 30], [40, 50]],
type="strided * var * int32")
Sadly DyND has functionality gaps which limit usability.
>>> -x.credits # Sadly incomplete :(
TypeError: bad operand type for unary -
I would like to see DyND mature to the point where it could robustly handle semi-structured data. I think that this would be a big win for productivity that would make projects like dask.bag and pyspark obsolete for a large class of use-cases. If you know Python, C++, and would like to help DyND grow, I’m sure that Mark and Irwin would love the help.
## Comparison with PySpark
Dask.bag pros:
1. Doesn’t engage the JVM, no heap errors or fiddly flags to set
2. Conda/pip installable. You could have it in less than twenty seconds from now.
3. Slightly faster in-memory implementations thanks to cytoolz; this isn’t important though
4. Good handling of lazy results per-partition
5. Faster / lighter weight start-up times
6. (Subjective) I find the API marginally cleaner
PySpark pros:
1. Supports distributed computation (this is obviously huge)
2. More mature, more filled out API
3. HDFS integration
Dask.bag reinvents a wheel; why bother?
1. Given the machinery inherited from dask.array and toolz, dask.bag is very cheap to build and maintain. It’s around 500 significant lines of code.
2. PySpark throws Python processes inside a JVM ecosystem which can cause some confusion among users and a performance hit. A task scheduling system in the native code ecosystem would be valuable.
3. Comparison and competition is healthy
4. I’ve been asked to make a distributed array. I suspect that distributed bag is a good first step.
## April 20, 2015
### Titus Brown
#### Statistics from applications to the 2015 course on NGS analysis
Here are some statistics from this year's applications to the NGS course. Briefly, this is a two-week workshop on sequence analysis at the command line and in the cloud.
The short version is that demand remains high; note that we admit only 24 applicants, so the acceptance rate is generally under 20%...
| Year | Number of applications | Note |
|------|------------------------|------|
| 2010 | 33 | |
| 2011 | 133 | |
| 2012 | 170 | |
| 2013 | 210 | |
| 2014 | 170 | (shifted the timing to Aug) |
| 2015 | 155 | (same timing as 2014) |
The demand is still high, although maybe starting to dip?
| Status | Number | Percent |
|--------|--------|---------|
| 1st or 2nd year graduate student | 20 | 12.6% |
| 3rd year+ graduate student | 40 | 25.2% |
| Post-doctoral researcher | 36 | 22.6% |
| Non-tenure faculty or staff | 20 | 12.6% |
| Tenure-line faculty | 24 | 15.1% |
| Other | 19 | 11.9% |
Lots of tenure-line faculty feel they need this training...
| Primary training/background | Number | Percent |
|-----------------------------|--------|---------|
| Bioinformatics | 11 | 6.9% |
| Biology | 112 | 70.4% |
| Computer Science | 3 | 1.9% |
| Physics | 0 | 0% |
| Other | 33 | 20.8% |
I should look into "Other"!
--titus
## April 19, 2015
### Titus Brown
#### Dear Doc Brown: how can I find a postdoc where I can do open science?
I got the following e-mail from a student I know -- lightly edited to protect the innocent:
I am at the stage where I am putting together a list of people that I want to post-doc with.
So, a question to you:
1. How can I find people who do open science?
2. Say I go for an interview, would it be "polite" to ask them to see if I can do open science even if they're not doing it themselves? Do you have any suggestions on how, exactly, to ask?
The reason why I am asking is because I rarely hear about openly doing (or even talking about) science in biomedical fields, outside of the standard communication methods (e.g. presenting at a meeting). Most of the people in my field seem somewhat conservative in this regard. Plus, I really don't want to butt heads with my postdoc mentor on this kind of topic.
Any advice? I have some but I'll save it for later so I can incorporate other people's advice ;).
thanks,
--titus
p.s. Yes, I have permission to post this question!
### Paul Ivanov
#### My first 200 K
Yesterday, I rode my longest bike ride to date - the El Cerrito-Davis 200K - with the San Francisco Randonneurs. A big thank you to all the volunteers and randos who made my first 200k so much fun.
First, for the uninitiated, an aside about randonneuring:
I discovered the sport because I'm a cheapskate. I had gotten more and more into cycling over the past 2 years or so, and though I was riding through the East Bay hills mostly alone, I wanted to do a "Century" - a 100 mile ride. Looking up local rides I found out that, while most centuries cost a nontrivial amount of money for the poor grad student I was back then ($60-$200), the San Francisco Randonneur rides were all $10-$20. As I dug deeper and learned about the sport, I found out that the reason for the low cost is that rando rides are unsupported - randonneuring is all about self-sufficiency. You are expected to bring gear to fix your own flats, as well as carry or procure your own snacks and beverages.
The Randonneurs USA (RUSA) website succinctly summarizes the sport.
Randonneuring is long-distance unsupported endurance cycling. This style of riding is non-competitive in nature, and self-sufficiency is paramount. When riders participate in randonneuring events, they are part of a long tradition that goes back to the beginning of the sport of cycling in France and Italy. Friendly camaraderie, not competition, is the hallmark of randonneuring.
This description is strikingly similar to my beliefs both about cycling and computing, so I knew I found a new activity I would greatly enjoy. This has been borne out on three previous occasions when I have participated in the ~100 km Populaire rides, which are intended as a way to introduce riders to the sport, yet still be within reach of a wide variety of cycling abilities.
We moved in late December, and between unpacking, rainy weekends, and being sick - I haven't been able to get much riding in. However, I've been wanting to do a 200K for a long time - my century, coveted for years, seemed within reach more than ever. To seal my commitment to the ride, I went ahead and ordered a spiffy looking SFR cycling jersey:
Here's the 213km (132 miles) ride map and elevation profile, what follows is my ride report.
## Start Control
The ride started at 8am at a Starbucks by El Cerrito BART station, so I just rode there from my house as I frequently do on my morning commute. I prepared my bike, packed my gear and food the night before, and the only thing I needed to do was to fill up my water bottle before the ride started. I showed up a dozen minutes before the start, with most riders already assembled, got my short drip and a heated up croissant, and totally failed to get any water. It's not a super terrible thing, I frequently don't end up drinking much at the beginning of my rides, but I had now set myself up to ride to the second control (44 km / 27 miles) without any water.
Randonneuring isn't about racing - as you traverse from control to control, there's just a window of time that you have for each control, and so long as you make it through each control within that time window, you complete the event! There is no ordinal placement, the first rider to finish has just as much bragging rights as the last. The only time people talk about time is when they're trying to make a new personal best. The second control was open roughly between 9 and 11am, so I could have stopped off somewhere to pick up water, but that didn't fit into my ride plan.
You see, the cue sheet is two pages - with the first page dedicated to getting you to the second control, as there are a lot of turns to make in the East Bay until you get to Benicia. Accordingly, my plan was to stick with the "fast" group who know the route well, so that I wouldn't have to look at the cue sheet at all. This was a success - and I got to chat with Jesse, with whom I rode a fair chunk of the 111 km Lucas Valley Populaire back in October (here's a good video of the first chunk of that ride).
## Control #2
When 7 of us got to the Benicia control at 9:45, I volunteered to guard the bikes as folks filed into the gas station mart to buy something (getting a timestamped receipt is how you prove that you did the ride from one control to the next within the allotted time). Lesson learned: I should have used that time to shed my warm second jersey and long pants, as it had warmed up by then. It just so happens that by the time I went inside, there was a line for the bathroom, and by the time I got out, the fast group was just heading out, yet I still needed to change.
Because keeping up with the "fast" group wasn't that big of a deal and actually rather fun, I really wanted to try to catch up to them, so I stepped it up, and told a couple of other riders that I'd do my best to pull us to them (in case you didn't know - the aerodynamics of cycling make it much easier for those behind the leader to keep up the pace, even if that pace isn't something they could comfortably do on their own). We didn't succeed in catching them during the first 10 mile stretch of road, but I kept pushing with a high cadence in my highest gear, and about 5 miles later, we caught up with them! The only problem was, by this point, I had wasted so much energy that I couldn't make the last 20 feet to them, though I was happy to see the two guys who'd been letting me push the whole time use their fresher legs and join the group. So there I was, a few dozen feet from my intended riding partners, but as more and more pavement went past our wheels, the distance between us slowly widened.
Somewhat disheartened by this (though, again, happy that all of my pulling wasn't for naught, since I bridged two others to the group), and now overheating, I decided to stop by the side of the road, removed my long-sleeved shirt, and put on sunscreen (Lesson learned: don't forget your ears, too - they're the only part of me that burned). A fellow rando rider, Eric, went past, to my smiling cheer of "Go get 'em!". Then, when a pair of riders asked if I was alright, I nodded, and decided to hop back on the bike and ride with them for a bit. It was nice to let someone else pull for a bit, but shortly thereafter, we started the first serious ascent, and my heart was pounding too hard from exhaustion and the heat - and I had to stop again to catch my breath and get some more food in me.
Luckily, when I resumed riding up that hill, I had a mental shift, gave myself a break, and given how tired my legs were already, even though I wasn't even half-way through the ride, I reminded myself that I'll just spin in a low gear if I have to, there's no rush, I'm not racing anyone, and though I know this won't end up being as good of a 200k as it could have been had I trained more in recent months, it was still up to me to enjoy the ride. One of the highlights of the ride were all of the different butterflies I got to see along the way that I started noticing after this change in mental attitude.
After the long climb, followed by a very nice descent, I got onto the Silverado Trail for 14 miles of a straight road with minimal elevation changes. Though my legs were again cooperating more, it was starting to get kind of old, and then out of nowhere, one of the riders I had pulled earlier rides past me, but then proceeds to slow down and ride along side me for a chat. He hadn't been able to keep up with the fast group for very long, and ended up stopping somewhere along the way to eat, which is why I didn't notice that I had passed him. We took turns pulling for each other, which took away from the monotony of Silverado, but he had a lot more in the tank, and I again wasn't able to keep up, losing him with 4 miles to go to St Helena.
Needing another break, with 4 miles to the next control, I decided I would need to spend a while there, recuperating, if I was to make it through the rest of the ride.
## Control #3
I got to Model Bakery at 1:10pm, with many familiar faces from earlier in the day already enjoying their food, but ended up staying there until 2:30 - eating my food, drinking water, just letting my legs rest.
I ended up riding out solo, and really enjoyed the early parts of 128 (after missing the turnoff by a hundred feet) - luckily this was just the spot I had my last stop at, so I quickly turned around and got back on the road.
The problem was it kept getting hotter and hotter - it seemed that I couldn't go a mile without taking a good gulp of water. I still stuck to my strategy of just spinning fast without really pushing hard, since recovering from being out of breath is way faster than waiting for exhausted legs to obey your commands. I made it a good chunk of the way to Winters, but still ended up having to stop half way up the climb near Lake Berryessa. Another SFR rider, Julie, climbed past me, checking if I was OK as she went by. It's great to have that kind of camaraderie along the ride; a couple of people even gently expressed their concern that my rear wheel was out of true - which I knew, but had kept putting off getting fixed. These nudges gave me more resolve to get that taken care of.
Finishing off the last of the Haribo Gummi Candy Gold-Bears that I brought with me (and you know you're tired and dehydrated when it takes effort to just chew), I got back on the bike and headed further up the hill. Then, finally, relief - I never thought I could be so cheered up by a road sign (hint: they only put "TRUCKS USE LOWER GEAR" signs at the top of big hills).
## Control #4
I finally got to Winters at 5:37pm, got myself a Chai Smoothie, and Julie, who unfortunately was going to miss her train home, proposed that we ride together the rest of the way to Davis. Again - though I really enjoy my alone time while cycling, it's also quite fun to have strong riders to ride with, so as the sun started descending behind us, and no longer scorchingly hot, we set out for the final 17 miles to Davis at a good clip, given how much riding we had already done.
## Finish Control
Dodging drunk college kids (it was Picnic Day at UC Davis) was the last challenge of the ride. As a UCD alum, this was a homecoming of sorts, so I led the way through town as we made our way to the Amtrak station. We finished just before 7, and I caught the 7:25pm train back to the Bay Area - enjoying the company of a handful of other randonneurs.
Thanks for reading my ride report!
_
/ \
A* \^ -
,./ _.\\ / \
/ ,--.S \/ \
/ "~,_ \ \
__o ?
_ \<,_ /:\
--(_)/-(_)----.../ | \
--------------.......J
## April 18, 2015
### Titus Brown
#### First thoughts on Galileo's Middle Finger
I'm reading Galileo's Middle Finger by Dr. Alice Dreger (@alicedreger), and it's fantastic. It's a paean to evidence-based popular discourse on scientific issues -- something I am passionate about -- and it's very well written.
I bought the book because I ran across Dr. Dreger's excellent and hilarious live-tweeting of her son's sex-ed class (see storify here), which reminded me that I'd first read about her in the article Reluctant Crusader, on "Why Alice Dreger's writing on sex and science makes liberals so angry." While I'm pretty liberal in outlook, I'm also a scientist by inclination and training, and I often see the kinds of conflicts that Dr. Dreger talks about (where what people want to be true is unsupported by evidence, or directly conflicts with evidence).
The book is chock full of examples where Dr. Dreger examines controversies in science. The common theme is that some plucky scientist or set of scientists publish some perspective that is well supported by their data, but that runs against some commonly held perspective (or at least some perspective that an activist group holds). Vested interests of some kind then follow with scurrilous public or academic attacks that take years or decades to figure out. Dr. Dreger spends much of the book (so far) exploring the "playbook" used by these attackers to smear, harass, and undermine the original researchers.
At this point, I think it's important to note that most of the controversies that Dr. Dreger discusses are not "settled" -- science rarely settles things quickly -- but that in all the cases, there appears to be strong empirical evidence to support the conclusions being published. What Dr. Dreger never argues is that the particular science she is discussing is settled; rather, she often argues that it's not settled, and that the attackers are trying to make it look like it is, as a way to shut down further investigation. The kind of "double negative" espoused by Dr. Dreger ("my research doesn't show that gravity works, it shows that there is no reason to believe that gravity doesn't work") is how I try to operate in my own research, and I have an awful lot of sympathy for this general strategy as I think it's how science should work.
There are a few things about Dr. Dreger's book that rub me the wrong way, and I may or may not blog about them in detail when I'm done with the book. (Two brief items: despite showing how easily peer review can be manipulated to support personal vendettas, she consistently uses "peer reviewed" as a label to put work that supports her own positions beyond doubt. Also, she's so impassioned about the issues that she comes off as wildly un-objective at times. I think she also downplays just how complicated a lot of the research she's examining is to do and understand.) Most of my concerns can be attributed to this being an advocacy book aimed at a popular audience, where careful and objective presentations of the underlying science need to be weighed against the audience, and in compensation for this Dr. Dreger provides plenty of footnotes and citations that let interested people follow up on specific items.
But! What I wanted to write about in this blog post is two things -- first, MONEY. And second, media, the Internet, and Internet harassment.
## First, MONEY.
I'm only about halfway through the book, but it's striking that Dr. Dreger has so far not talked about some really big issues like global warming, which is kind of a poster child for science denialism these days. All of the issues in the book have to do with controversies where relatively small amounts of money were on the line. Unlike global warming, or tobacco and cancer, "all" that is at stake in the controversies in the book is sociopolitical agendas and human identity - crucially, nothing that hits at a big industry's bottom line.
What Dr. Dreger points out, though, is that even in circumstances where money is not the main issue, it is very hard for evidence to even get a fair hearing. (Even discounting those research imbroglios that Dr. Dreger is herself involved in, she presents plenty of data that the way our society handles contentious situations is just broken. More on that below.) But it doesn't take much imagination to guess that when real money is on the line, the "plucky scientist" faces even more massive obstacles. We've seen exactly how this plays out in the case of tobacco and cancer, where it took decades for scientists and patient advocates to overcome the industry-funded nonsense.
The same thing seems to be happening in climate change policy. I'm minded of a comment on Twitter that I've since lost track of -- badly paraphrased, it goes something like this: "Giant oil companies would have us believe that scientists are the ones with the overwhelming conflict of interest in the global warming discussion. What?"
I like to think about these things in terms of Bayesian priors. Who am I more likely to believe in a disagreement? The interest group which has billions of dollars at stake, or the scientists who are at least trained in objective inquiry? It's almost insane to fail to take into account the money stakes. Moreover, there are plenty of indications that, even when wrong initially, scientists are self-correcting; but I've never seen an interest group go "oops, I guess we got that wrong, let's rethink." So I guess you know my position here :). But thinking of it in terms of priors, however strong they may be, means that I'm more alert to the possibility of counter-evidence.
Anyway, I'm not an expert on any of this; my area of expertise is confined to a few areas in genomics and computational biology at this point. So it's very hard for me to evaluate the details of some of Dr. Dreger's cases. But I think that's one of the points she's getting at in the book -- who do we trust when seemingly trustworthy academic societies get manipulated by activist agendas? How do we reach some sort of conclusion, if not broad consensus then at least academic consensus, on issues that are (at the least) not well understood yet? And (one place where Dr. Dreger excels) how do we evaluate and decide on the ethical standards to apply to biomedical (or other) research? All very relevant to many things going on today, and all very tricky.
## Media, and the Internet
At the point I'm at in the book, Dr. Dreger has undergone a transition from being press-driven to being Internet driven. She points out that in the recession, the big media companies essentially lost the staff to pursue deep, technically tricky stories; coincidentally, "mainstream media" lost a lot of trust as the Internet came along and helped enable audience fragmentation. This meant that she had to take her fights to the Internet to convince the masses (or at least the Google search engine) much more so than relying on deep investigation by arbiters of "official" social policy like the NY Times or New Yorker.
I think in many ways this is a massive improvement -- I hate the fact that, in science, we've got these same kinds of official arbiters -- but it's interesting to read through the book and recognize Dr. Dreger's shift in thinking and tactics. It's especially ironic given that the same kind of tactics she starts to use in the dexamethasone investigation are the ones used against her in some of her intersex work. But, again, this is kind of the point of her book - it's not enough to rely on someone's credentials and sense of justice to evaluate their message, you need to actually look into the evidence yourself.
It's particularly interesting to fold Dr. Dreger's observations about Internet harassment and shaming into my general body of knowledge about how the Internet is used to, well, harass, shame, derail, and otherwise sandbag particular messages and people. Which reminds me, I need to watch all of Dr. Gabriella Coleman's PyCon 2015 keynote on Anonymous...
## A Not-Really Conclusion
At the end of the day, I worry that the "trust no one, investigate yourself" message is too challenging for our culture to grasp in a productive way. And yet I'm firmly convinced it's the only way forward. How can we do this better as scientists and educators?
--titus
p.s. I'm thinking about instituting a commenting policy, perhaps one based on Captain Awkward's policy. Thoughts?
p.p.s. There's some sort of irony in me leaving Michigan State just as I discover that Dr. Dreger is local. I may try to track her down for coffee while I'm still in town, although I'm sure she's super busy...
## April 16, 2015
### Titus Brown
#### Please destroy this software after publication. kthxbye.
tl;dr? A while back I wrote that there are three uses of research software: replication, reproduction, and reuse. The world of computational science would be better off if people clearly delineated whether or not they wanted anyone else to reuse their software, and I think it's a massive mistake to expect that everyone's software should be reusable.
A few months back, I reviewed a pretty exciting paper - one I will probably highlight on my blog, when it comes out. The paper outlined a fairly simple concept for comparing sequences and then used that to develop some new ultra-scalable functionality. The theory seemed novel, the computational results were pretty good, and I recommended acceptance (or minor revisions). This was in spite of the fact that the authors stated quite clearly that they had produced largely unusable software.
Other reviewers were not quite so forgiving, however -- one reviewer declined to review the paper until they could run the software on their own data.
This got me thinking - I think I have a reputation as wanting people to release their software in a useful manner, but I've actually shied away from requiring it on several occasions. Here was a situation that was a pretty direct conflict: neat result, with software that was not intended for reuse. Interestingly, I've drawn this line before, although without much personal comment. In my blog post on review criteria for bioinformatics papers, there's nothing there about whether or not the software is reusable - it must just be legally readable and executable. But I'm also pretty loud-mouthed about wanting good quality (or at least better quality) software out there in the bioinformatics world!
So what gives? I felt that the new theory looked pretty awesome, and would be tremendously useful, while the implementation was (as stated) unlikely to be something I (or others) used. So what? Publish!
I think this highlights that there are two different possible goals for bioinformatics papers. One goal is the standard scientific goal: to demonstrate a new method or technique, whether it be mathematical or computational. The other goal is different, and in some ways much harder: to provide a functioning tool for use and reuse. These should have different review standards, and maybe the authors should be given the opportunity to distinguish clearly between the two goals.
There's actually a lot of commonality between what I would request of the software from either kind of paper, a technique paper or a tool paper.
• Both need to be accessible for download and viewing - otherwise, how can I understand the details of the implementation?
• Both types of software need to be usable enough to reproduce the results in the paper, in theory (e.g. given sufficient access to compute infrastructure).
• Both should be in a publicly accessible and archived format, to avoid loss of the software from personal Web sites, etc.
• Both should show evidence of decent principles of basic software engineering, including the use of version control, some form of testing (be it unit testing, functional testing, or even just defined input with known good output), release/version information, installation/dependency information, and the like.
However, there are some interesting differences. Off the top of my head, I'm thinking that:
• Crucially, the software from the technique paper would not need to be open source - by the OSI definition, the technique code would not need to be freely modifiable or re-sharable.
(To be clear, I know of no formal (journal) requirements or ethical requirements that a particular implementation be modifiable or redistributable.)
• Nor need the software from the technique paper be written in a general way (e.g. to robustly process different formats), or for broader re-use. In particular, this means that documentation and unit/functional tests might be minimal - enough to support replication but no more.
• The software from the technique paper should be accessible to basic review, but should not be subject to code review on style or implementation - correctness only.
• Software from a "tools" paper, by contrast, should be held to much higher standards, and be subject to code review (in parts) and examination for documentation and installation and ... oh, heck, just start with the sustainability evaluation checklist at the SSI!
I'm aware that by having such relaxed constraints on technique publication I'm more or less directly contradicting myself in my earlier blog post on automated testing and research software - all of that definitely holds for software that you hope to be reused.
I'm not sure how or where to draw the line here, exactly. It's certainly reasonable to say that software that doesn't have unit tests is likely to be wrong, and therefore unit tests should be required - but, in science, we should never rely on a single study to prove something anyway, so I'm not sure why it matters if software is wrong in some details. This is where the difference between "replicability" and "reproducibility" becomes important. If I can't replicate your computation (at least in theory) then you have no business publishing it; but reproducing it is something that is a much larger task, outside the scope of any given paper.
I want to quote David States, who wrote a comment two years ago on my blog:
Too often, developers work in isolation, and this creates a high risk for code errors and science errors. Good code needs to be accessible and this includes not just sharing of the source code itself but also use of effective style, inclusion of tests and validation procedures and appropriate documentation.
I think I agree - but what's the minimum, here, for a technique paper that is meant to be a demonstration of a technique and no more?
One final point: in all of this we should recognize that the current situation is quite poor, in that quite a bit of software is simply inaccessible for replication purposes. (This mirrors my personal experiences in bioinformatics review, too.)
Improving this situation is important, but I think we need to be precise about what the minimum is. I don't think we're going to get very far by insisting that all code be held to high standards; that's a generational exercise (and part of why I'm so bullish on Software Carpentry).
So: what's the minimum necessary for decent science?
--titus
p.s. In case anyone is wondering, I don't think our software really meets my own criteria for tool publication, although it's getting closer.
p.p.s. Drawing this distinction leads in some very good directions for publishers and funding bodies to think about, too. More on that in another blog post, if I get the chance.
p.p.p.s. My 2004 paper (Brown and Callan) has a table that's wrong due to a fencepost error. But it's not seriously wrong. shrug
#### The PyCon 2015 Ally's Workshop
At PyCon 2015, I had the pleasure of attending the Ally Skills Workshop, organized by @adainitiative (named after Ada Lovelace).
The workshop was a 3 hour strongly guided discussion centering around 4-6 person group discussion of short scenarios. There's a guide to running them here, although I personally would not have wanted to run one without attending one first!
I attended the workshop for at least three reasons --
First, I want to do better myself. I have put some effort into (and received a lot of encouragement for) making my lab an increasingly open and welcoming place. While I have heard concerns about being insufficiently critical and challenging of bad ideas in science (and I have personally experienced a few rather odd situations where obviously bad ideas weren't called out in my past labs), I don't see any inherent conflict between being welcoming and being intellectually critical - in fact, I rather suspect they are mutually supportive, especially for the more junior people.
But, doing better is surprisingly challenging; everyone needs a mentor, or at least guideposts. So when I heard about this workshop, I leapt at the chance to attend!
Second, I am interested in connecting these kinds of things to my day job in academia, where I am now a professor at UC Davis. UC Davis is the home of the somewhat notorious Jonathan Eisen, who is notorious for many reasons that include boycotting and calling out conferences that have low diversity. UC Davis also has an effort to increase diversity at the faculty level, and I think that this is an important effort. I'm hoping to be involved in this when I actually take up residence in Davis, and learning to be a male ally is one way to help. More, I think that Davis would be a natural home to some of these ally workshops, and so I attended the Ally Skills workshop to explore this.
And third, I was just curious! It's surprisingly tricky to confront and talk about sexism effectively, and I thought seeing how the pros did it would be a good way to start.
Interestingly, 2/3 of my lab attended the workshop, too - without me requesting it. I think they found it valuable, too.
## The workshop itself
Valerie Aurora ran the workshop, and it's impossible to convey how good it was, but I'll try by picking out some choice quotes:
"You shouldn't expect praise or credit for behaving like a decent human being."
"Sometimes, you just need a flame war to happen." (paraphrase)
"It's not up to the victim whether you enforce your code of conduct."
"The physiological effects of alcohol are actually limited, and most effects of alcohol are socially and/or culturally mediated."
"Avoid rules lawyering. I don't now if you've ever worked with lawyers, but software engineers are almost as bad."
"One problem for male allies is the assumption that you are only talking to a woman because you are sexually interested in them."
"Trolls are good at calibrating their level of awfulness to something that you will feel guilty about moderating."
Read the blog post "Tone policing only goes one way".
Overall, a great experience and something I hope to help host more of at UC Davis.
--titus
### Continuum Analytics
#### Find Continuum at PyData Dallas
PyData Dallas, the first PyData conference in Texas, is taking place next week, April 24-26. PyData has been a wonderful conference for fostering the Python community and giving developers and other Python enthusiasts the opportunity to share their ideas, projects and the future of Python. Continuum Analytics is proud to be a founding sponsor for such an innovative, community-driven conference.
## April 15, 2015
### Titus Brown
#### The three porridge bowls of sustainable scientific software development
(The below issues are very much on my mind as I think about how to apply for another NIH grant to fund continued development on the khmer project.)
Imagine that we have a graph of novel functionality versus software engineering effort for a particular project, cast in the shape of a tower or pyramid, i.e. a support structure for cool science.
The more novel functionality implemented, the taller the building, and the broader the software engineering base needs to be to support the building. If you have too much novel functionality with too little software engineering base, the tower will have too little support and catastrophe can ensue - either no new functionality can be added past a certain point, or we discover that much of the implemented functionality is actually unstable and incorrect.
Since everybody likes novel functionality - for example, it's how we grade grants in science -- this is a very common failure mode. It is particularly problematic in situations where we have built a larger structure by placing many of these individual buildings on top of others; the entire structure is not much stronger than its weakest (least supported) component.
Another possible failure mode is if the base becomes too big too soon:
That is, if too much effort is spent on software engineering at the expense of building novel functionality on top of it, then the building remains the same height while the base broadens. This is a failure for an individual project, because no new functionality gets built, and the project falls out of funding.
In the worst case, the base can become over-wrought and be designed to support functionality that doesn't yet exist. In most situations, this work will be entirely wasted, either because the base was designed for the wrong functionality, or because the extra work put into the base will delay the work put into new features.
Where projects are designed to be building blocks from the start, as opposed to a leap into the unknown like most small-lab computational science projects, a different structure is worth investing in -- but I'm skeptical that this is ever the way to start a project.
Supporting this kind of project is something that Dan Katz has written and presented about; see (for example) A Method to Select e-Infrastructure Components to Sustain.
And, of course, the real danger is that we end up in a situation where a poorly engineered structure is used to support a much larger body of scientific work:
The question that I am trying to understand is this: what are the lifecycle stages for research software, and how should we design for them (as researchers), and how should we think about funding them (as reviewers and program officers)?
To bring things back to the title, how do we make sure we mix the right amount of software development (cold porridge) with novel functionality (hot porridge) to make something edible for little bears?
--titus
### Matthieu Brucher
#### Announcement: ATKStereoCompressor 1.0.0
I’m happy to announce the release of a stereo compressor based on the Audio Toolkit. It is available on Windows and OS X (min. 10.8) in different formats. This stereo compressor can work on two channels, left/right or middle/side, possibly in linked mode (only one set of parameters), and can be set up to mix the input signal with the compressed signal (serial/parallel compression).
The supported formats are:
• VST2 (32bits/64bits on Windows, 64bits on OS X)
• VST3 (32bits/64bits on Windows, 64bits on OS X)
• Audio Unit (64bits, OS X)
The files as well as the previous plugins can be downloaded on SourceForge, as well as the source code.
ATK SD1, ATKCompressor and ATKUniversalDelay were upgraded after AU validation failed. This is now fixed.
## April 14, 2015
### Jan Hendrik Metzen
#### Probability calibration
As a follow-up of my previous post on reliability diagrams, I have worked jointly with Alexandre Gramfort, Mathieu Blondel and Balazs Kegl (with reviews by the whole team, in particular Olivier Grisel) on adding probability calibration and reliability diagrams to scikit-learn. Those have been added in the recent 0.16 release of scikit-learn as CalibratedClassifierCV and calibration_curve.
This post contains an interactive version of the documentation in the form of an IPython notebook; parts of the text/code are thus due to my coauthors.
Note that the 0.16 release of scikit-learn contains a bug in IsotonicRegression, which has been fixed in the 0.16.1 release. For obtaining correct results with this notebook, you need to use 0.16.1 or any later version.
## Reliability curves
In [1]:
import numpy as np
np.random.seed(0)
import matplotlib
matplotlib.use("svg")
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.calibration import calibration_curve, CalibratedClassifierCV
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score, log_loss)
from sklearn.cross_validation import train_test_split
When performing classification you often want not only to predict the class label, but also obtain a probability of the respective label. This probability gives you some kind of confidence on the prediction. Some models can give you poor estimates of the class probabilities and some even do not support probability prediction. The calibration module allows you to better calibrate the probabilities of a given model, or to add support for probability prediction.
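As a minimal usage sketch (X_train, y_train and X_test are placeholders here; the examples below build real datasets), wrapping a classifier that lacks predict_proba both calibrates it and adds probability output:
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
# LinearSVC has no predict_proba of its own; the wrapper provides one.
calibrated_svc = CalibratedClassifierCV(LinearSVC(), method='sigmoid', cv=3)
calibrated_svc.fit(X_train, y_train)
prob_pos = calibrated_svc.predict_proba(X_test)[:, 1]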
Well calibrated classifiers are probabilistic classifiers for which the output of the predict_proba method can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that among the samples to which it gave a predict_proba value close to 0.8, approximately 80% actually belong to the positive class. The following plot compares how well the probabilistic predictions of different classifiers are calibrated:
In [2]:
X, y = datasets.make_classification(n_samples=100000, n_features=20,
n_informative=2, n_redundant=2)
train_samples = 100 # Samples used for training the models
X_train = X[:train_samples]
X_test = X[train_samples:]
y_train = y[:train_samples]
y_test = y[train_samples:]
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
svc = LinearSVC(C=1.0)
rfc = RandomForestClassifier(n_estimators=100)
In [3]:
plt.figure(figsize=(9, 9))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(gnb, 'Naive Bayes'),
(svc, 'Support Vector Classification'),
(rfc, 'Random Forest')]:
clf.fit(X_train, y_train)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s" % (name, ))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
LogisticRegression returns well calibrated predictions by default as it directly optimizes log-loss. In contrast, the other methods return biased probabilities, with different biases per method:
• Naive Bayes (GaussianNB) tends to push probabilities to 0 or 1 (note the counts in the histograms). This is mainly because it makes the assumption that features are conditionally independent given the class, which is not the case in this dataset which contains 2 redundant features.
• RandomForestClassifier shows the opposite behavior: the histograms show peaks at approximately 0.2 and 0.9 probability, while probabilities close to 0 or 1 are very rare. An explanation for this is given by Niculescu-Mizil and Caruana [4]: "Methods such as bagging and random forests that average predictions from a base set of models can have difficulty making predictions near 0 and 1 because variance in the underlying base models will bias predictions that should be near zero or one away from these values. Because predictions are restricted to the interval [0,1], errors caused by variance tend to be one-sided near zero and one. For example, if a model should predict p = 0 for a case, the only way bagging can achieve this is if all bagged trees predict zero. If we add noise to the trees that bagging is averaging over, this noise will cause some trees to predict values larger than 0 for this case, thus moving the average prediction of the bagged ensemble away from 0. We observe this effect most strongly with random forests because the base-level trees trained with random forests have relatively high variance due to feature subsetting." As a result, the calibration curve shows a characteristic sigmoid shape, indicating that the classifier could trust its "intuition" more and return probabilities closer to 0 or 1 typically.
• Linear Support Vector Classification (LinearSVC) shows an even more sigmoid curve than the RandomForestClassifier, which is typical for maximum-margin methods (compare Niculescu-Mizil and Caruana [4]), which focus on hard samples that are close to the decision boundary (the support vectors).
## Calibration of binary classifiers
Two approaches for performing calibration of probabilistic predictions are provided: a parametric approach based on Platt's sigmoid model and a non-parametric approach based on isotonic regression (sklearn.isotonic). Probability calibration should be done on new data not used for model fitting. The class CalibratedClassifierCV uses a cross-validation generator and estimates for each split the model parameter on the train samples and the calibration of the test samples. The probabilities predicted for the folds are then averaged. Already fitted classifiers can be calibrated by CalibratedClassifierCV via the parameter cv="prefit". In this case, the user has to take care manually that data for model fitting and calibration are disjoint.
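For intuition, Platt's sigmoid model maps a raw score f to p = 1 / (1 + exp(A*f + B)), with the scalars A and B fit on held-out data. A rough stand-in (not CalibratedClassifierCV's actual internals; scores_holdout, y_holdout and scores_new are placeholder names) is a one-dimensional logistic regression on the decision scores:
import numpy as np
from sklearn.linear_model import LogisticRegression
# Fit the sigmoid on held-out decision_function outputs and their labels.
platt = LogisticRegression()
platt.fit(np.asarray(scores_holdout).reshape(-1, 1), y_holdout)
# Map new scores through the fitted sigmoid to get calibrated probabilities.
calibrated = platt.predict_proba(np.asarray(scores_new).reshape(-1, 1))[:, 1]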
The following images demonstrate the benefit of probability calibration. The first image presents a dataset with 2 classes and 3 blobs of data. The blob in the middle contains random samples of each class. The probability for the samples in this blob should be 0.5.
In [4]:
n_samples = 50000
n_bins = 3 # use 3 bins for calibration_curve as we have 3 clusters here
# Generate 3 blobs with 2 classes where the second blob contains
# half positive samples and half negative samples. Probability in this
# blob is therefore 0.5.
centers = [(-5, -5), (0, 0), (5, 5)]
X, y = datasets.make_blobs(n_samples=n_samples, n_features=2, cluster_std=1.0,
centers=centers, shuffle=False, random_state=42)
y[:n_samples // 2] = 0
y[n_samples // 2:] = 1
sample_weight = np.random.RandomState(42).rand(y.shape[0])
# split train, test for calibration
X_train, X_test, y_train, y_test, sw_train, sw_test = \
train_test_split(X, y, sample_weight, test_size=0.9, random_state=42)
plt.figure()
y_unique = np.unique(y)
colors = cm.rainbow(np.linspace(0.0, 1.0, y_unique.size))
for this_y, color in zip(y_unique, colors):
this_X = X_train[y_train == this_y]
this_sw = sw_train[y_train == this_y]
plt.scatter(this_X[:, 0], this_X[:, 1], s=this_sw * 50, c=color, alpha=0.5,
label="Class %s" % this_y)
plt.legend(loc="best")
plt.title("Data")
Out[4]:
<matplotlib.text.Text at 0x5b37b10>
The following image shows on the data above the estimated probability using a Gaussian naive Bayes classifier without calibration, with a sigmoid calibration and with a non-parametric isotonic calibration. One can observe that the non-parametric model provides the most accurate probability estimates for samples in the middle, i.e., 0.5.
In [5]:
# Gaussian Naive-Bayes with no calibration
clf = GaussianNB()
clf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# Gaussian Naive-Bayes with isotonic calibration
clf_isotonic = CalibratedClassifierCV(clf, cv=2, method='isotonic')
clf_isotonic.fit(X_train, y_train, sw_train)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
# Gaussian Naive-Bayes with sigmoid calibration
clf_sigmoid = CalibratedClassifierCV(clf, cv=2, method='sigmoid')
clf_sigmoid.fit(X_train, y_train, sw_train)
prob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]
print("Brier scores: (the smaller the better)")
clf_score = brier_score_loss(y_test, prob_pos_clf, sw_test)
print("No calibration: %1.3f" % clf_score)
clf_isotonic_score = brier_score_loss(y_test, prob_pos_isotonic, sw_test)
print("With isotonic calibration: %1.3f" % clf_isotonic_score)
clf_sigmoid_score = brier_score_loss(y_test, prob_pos_sigmoid, sw_test)
print("With sigmoid calibration: %1.3f" % clf_sigmoid_score)
Brier scores: (the smaller the better)
No calibration: 0.104
With isotonic calibration: 0.084
With sigmoid calibration: 0.109
In [6]:
plt.figure()
order = np.lexsort((prob_pos_clf, ))
plt.plot(prob_pos_clf[order], 'r', label='No calibration (%1.3f)' % clf_score)
plt.plot(prob_pos_isotonic[order], 'g', linewidth=3,
label='Isotonic calibration (%1.3f)' % clf_isotonic_score)
plt.plot(prob_pos_sigmoid[order], 'b', linewidth=3,
label='Sigmoid calibration (%1.3f)' % clf_sigmoid_score)
plt.plot(np.linspace(0, y_test.size, 51)[1::2],
y_test[order].reshape(25, -1).mean(1),
'k', linewidth=3, label=r'Empirical')
plt.ylim([-0.05, 1.05])
plt.xlabel("Instances sorted according to predicted probability "
"(uncalibrated GNB)")
plt.ylabel("P(y=1)")
plt.legend(loc="upper left")
plt.title("Gaussian naive Bayes probabilities")
Out[6]:
<matplotlib.text.Text at 0x623ce90>
The following experiment is performed on an artificial dataset for binary classification with 100,000 samples (1,000 of them are used for model fitting) and 20 features. Of the 20 features, only 2 are informative and 10 are redundant. The figure shows the estimated probabilities obtained with logistic regression, a linear support-vector classifier (SVC), and linear SVC with both isotonic calibration and sigmoid calibration. The calibration performance is evaluated with the Brier score brier_score_loss, reported in the legend (the smaller the better).
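For reference, the Brier score for binary targets is just the mean squared difference between the predicted probability of the positive class and the 0/1 outcome; a hand-rolled equivalent of brier_score_loss (without sample weights) might look like:
import numpy as np
def brier(y_true, prob_pos):
    # Mean squared difference between 0/1 outcomes and predicted P(y=1).
    y_true = np.asarray(y_true, dtype=float)
    prob_pos = np.asarray(prob_pos, dtype=float)
    return np.mean((prob_pos - y_true) ** 2)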
In [7]:
# Create dataset of classification task with many redundant and few
# informative features
X, y = datasets.make_classification(n_samples=100000, n_features=20,
n_informative=2, n_redundant=10,
random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.99,
random_state=42)
In [8]:
def plot_calibration_curve(est, name, fig_index):
"""Plot calibration curve for est w/o and with calibration. """
# Calibrated with isotonic calibration
isotonic = CalibratedClassifierCV(est, cv=2, method='isotonic')
# Calibrated with sigmoid calibration
sigmoid = CalibratedClassifierCV(est, cv=2, method='sigmoid')
# Logistic regression with no calibration as baseline
lr = LogisticRegression(C=1., solver='lbfgs')
fig = plt.figure(fig_index, figsize=(9, 9))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(est, name),
(isotonic, name + ' + Isotonic'),
(sigmoid, name + ' + Sigmoid')]:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
clf_score = brier_score_loss(y_test, prob_pos, pos_label=y.max())
print("%s:" % name)
print("\tBrier: %1.3f" % (clf_score))
print("\tPrecision: %1.3f" % precision_score(y_test, y_pred))
print("\tRecall: %1.3f" % recall_score(y_test, y_pred))
print("\tF1: %1.3f\n" % f1_score(y_test, y_pred))
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f)" % (name, clf_score))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
In [9]:
# Plot calibration curve for Linear SVC
plot_calibration_curve(LinearSVC(), "SVC", 2)
Logistic:
Brier: 0.099
Precision: 0.872
Recall: 0.851
F1: 0.862
SVC:
Brier: 0.163
Precision: 0.872
Recall: 0.852
F1: 0.862
SVC + Isotonic:
Brier: 0.100
Precision: 0.853
Recall: 0.878
F1: 0.865
SVC + Sigmoid:
Brier: 0.099
Precision: 0.874
Recall: 0.849
F1: 0.861
One can observe here that logistic regression is well calibrated as its curve is nearly diagonal. Linear SVC's calibration curve has a sigmoid shape, which is typical for an under-confident classifier. In the case of LinearSVC, this is caused by the margin property of the hinge loss, which lets the model focus on hard samples that are close to the decision boundary (the support vectors). Both kinds of calibration can fix this issue and yield nearly identical results. The next figure shows the calibration curve of Gaussian naive Bayes on the same data, with both kinds of calibration and also without calibration.
In [10]:
# Plot calibration curve for Gaussian Naive Bayes
plot_calibration_curve(GaussianNB(), "Naive Bayes", 1)
Logistic:
Brier: 0.099
Precision: 0.872
Recall: 0.851
F1: 0.862
Naive Bayes:
Brier: 0.118
Precision: 0.857
Recall: 0.876
F1: 0.867
Naive Bayes + Isotonic:
Brier: 0.098
Precision: 0.883
Recall: 0.836
F1: 0.859
Naive Bayes + Sigmoid:
Brier: 0.109
Precision: 0.861
Recall: 0.871
F1: 0.866
One can see that Gaussian naive Bayes performs very badly but does so in a different way than linear SVC: While linear SVC exhibited a sigmoid calibration curve, Gaussian naive Bayes' calibration curve has a transposed-sigmoid shape. This is typical for an over-confident classifier. In this case, the classifier's overconfidence is caused by the redundant features which violate the naive Bayes assumption of feature-independence.
Calibration of the probabilities of Gaussian naive Bayes with isotonic regression can fix this issue as can be seen from the nearly diagonal calibration curve. Sigmoid calibration also improves the Brier score slightly, albeit not as strongly as the non-parametric isotonic calibration. This is an intrinsic limitation of sigmoid calibration, whose parametric form assumes a sigmoid rather than a transposed-sigmoid curve. The non-parametric isotonic calibration model, however, makes no such strong assumptions and can deal with either shape, provided that there is sufficient calibration data. In general, sigmoid calibration is preferable when the calibration curve is sigmoid and there is little calibration data, while isotonic calibration is preferable for non-sigmoid calibration curves and in situations where a lot of additional data can be used for calibration.
## Multi-class classification
CalibratedClassifierCV can also deal with classification tasks that involve more than two classes if the base estimator can do so. In this case, the classifier is calibrated first for each class separately in a one-vs-rest fashion. When predicting probabilities for unseen data, the calibrated probabilities for each class are predicted separately. As those probabilities do not necessarily sum to one, a post-processing step is performed to normalize them.
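A sketch of that normalization step (illustrative only; CalibratedClassifierCV handles it internally):
import numpy as np
def normalize_rows(per_class_probs):
    # per_class_probs: (n_samples, n_classes) one-vs-rest calibrated
    # probabilities whose rows need not sum to one yet.
    per_class_probs = np.asarray(per_class_probs, dtype=float)
    return per_class_probs / per_class_probs.sum(axis=1, keepdims=True)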
The next image illustrates how sigmoid calibration changes predicted probabilities for a 3-class classification problem. Illustrated is the standard 2-simplex, where the three corners correspond to the three classes. Arrows point from the probability vectors predicted by an uncalibrated classifier to the probability vectors predicted by the same classifier after sigmoid calibration on a hold-out validation set. Colors indicate the true class of an instance (red: class 1, green: class 2, blue: class 3).
In [11]:
np.random.seed(0)
# Generate data
X, y = datasets.make_blobs(n_samples=1000, n_features=2, random_state=42,
cluster_std=5.0)
X_train, y_train = X[:600], y[:600]
X_valid, y_valid = X[600:800], y[600:800]
X_train_valid, y_train_valid = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]
In [12]:
# Train uncalibrated random forest classifier on whole train and validation
# data and evaluate on test data
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train_valid, y_train_valid)
clf_probs = clf.predict_proba(X_test)
score = log_loss(y_test, clf_probs)
# Train random forest classifier, calibrate on validation data and evaluate
# on test data
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid", cv="prefit")
sig_clf.fit(X_valid, y_valid)
sig_clf_probs = sig_clf.predict_proba(X_test)
sig_score = log_loss(y_test, sig_clf_probs)
In [13]:
# Plot changes in predicted probabilities via arrows
plt.figure(0, figsize=(10, 8))
colors = ["r", "g", "b"]
for i in range(clf_probs.shape[0]):
plt.arrow(clf_probs[i, 0], clf_probs[i, 1],
sig_clf_probs[i, 0] - clf_probs[i, 0],
sig_clf_probs[i, 1] - clf_probs[i, 1],
color=colors[y_test[i]], head_width=1e-2)  # close the truncated call; color arrows by true class
# Plot perfect predictions
plt.plot([1.0], [0.0], 'ro', ms=20, label="Class 1")
plt.plot([0.0], [1.0], 'go', ms=20, label="Class 2")
plt.plot([0.0], [0.0], 'bo', ms=20, label="Class 3")
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 'k', label="Simplex")
# Annotate points on the simplex
plt.annotate(r'($\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{3}$)',
xy=(1.0/3, 1.0/3), xytext=(1.0/3, .23), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.plot([1.0/3], [1.0/3], 'ko', ms=5)
plt.annotate(r'($\frac{1}{2}$, $0$, $\frac{1}{2}$)',
xy=(.5, .0), xytext=(.5, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $\frac{1}{2}$, $\frac{1}{2}$)',
xy=(.0, .5), xytext=(.1, .5), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($\frac{1}{2}$, $\frac{1}{2}$, $0$)',
xy=(.5, .5), xytext=(.6, .6), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $0$, $1$)',
xy=(0, 0), xytext=(.1, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($1$, $0$, $0$)',
xy=(1, 0), xytext=(1, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $1$, $0$)',
xy=(0, 1), xytext=(.1, 1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], 'k', alpha=0.2)
plt.plot([0, 0 + (1-x)/2], [x, x + (1-x)/2], 'k', alpha=0.2)
plt.plot([x, x + (1-x)/2], [0, 0 + (1-x)/2], 'k', alpha=0.2)
plt.title("Change of predicted probabilities after sigmoid calibration")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.legend(loc="best")
print("Log-loss of")
print(" * uncalibrated classifier trained on 800 datapoints: %.3f "
% score)
print(" * classifier trained on 600 datapoints and calibrated on "
"200 datapoint: %.3f" % sig_score)
Log-loss of
* uncalibrated classifier trained on 800 datapoints: 1.280
* classifier trained on 600 datapoints and calibrated on 200 datapoint: 0.536
The base classifier is a random forest classifier with 25 base estimators (trees). If this classifier is trained on all 800 training datapoints, it is overly confident in its predictions and thus incurs a large log-loss. Calibrating an identical classifier, which was trained on 600 datapoints, with method='sigmoid' on the remaining 200 datapoints reduces the confidence of the predictions, i.e., moves the probability vectors from the edges of the simplex towards the center:
In [14]:
# Illustrate calibrator
plt.figure(1, figsize=(10, 8))
# generate grid over 2-simplex
p1d = np.linspace(0, 1, 20)
p0, p1 = np.meshgrid(p1d, p1d)
p2 = 1 - p0 - p1
p = np.c_[p0.ravel(), p1.ravel(), p2.ravel()]
p = p[p[:, 2] >= 0]
calibrated_classifier = sig_clf.calibrated_classifiers_[0]
prediction = np.vstack([calibrator.predict(this_p)
for calibrator, this_p in
zip(calibrated_classifier.calibrators_, p.T)]).T
prediction /= prediction.sum(axis=1)[:, None]
# Plot modifications of calibrator
for i in range(prediction.shape[0]):
plt.arrow(p[i, 0], p[i, 1],
prediction[i, 0] - p[i, 0], prediction[i, 1] - p[i, 1],
head_width=1e-2, color=colors[np.argmax(p[i])])  # close the truncated call; color by dominant input class
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 'k', label="Simplex")
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], 'k', alpha=0.2)
plt.plot([0, 0 + (1-x)/2], [x, x + (1-x)/2], 'k', alpha=0.2)
plt.plot([x, x + (1-x)/2], [0, 0 + (1-x)/2], 'k', alpha=0.2)
plt.title("Illustration of sigmoid calibrator")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
Out[14]:
(-0.05, 1.05) |
proofpile-shard-0030-82 | {
"provenance": "003.jsonl.gz:83"
} | # Let S be the set of all real numbers and let
Question:
Let $\mathrm{S}$ be the set of all real numbers and let
$\mathrm{R}=\{(\mathrm{a}, \mathrm{b}): \mathrm{a}, \mathrm{b} \in \mathrm{S}$ and $\mathrm{a}=\pm \mathrm{b}\}$
Show that $R$ is an equivalence relation on $S$.
Solution:
In order to show $R$ is an equivalence relation we need to show $R$ is Reflexive, Symmetric and Transitive.
Given that, $\forall a, b \in S, R=\{(a, b): a=\pm b\}$
Now,
$\underline{R}$ is Reflexive if $(a, a) \in \underline{R} \underline{\forall} \underline{a} \in \underline{S}$
For any $a \in S$, we have
$a=\pm a$
$\Rightarrow(a, a) \in R$
Thus, $R$ is reflexive.
$\underline{R}$ is Symmetric if $(a, b) \in \underline{R} \Rightarrow(b, a) \in \underline{R} \forall \underline{a}, b \in \underline{S}$
$(a, b) \in R$
$\Rightarrow a=\pm b$
$\Rightarrow b=\pm a$
$\Rightarrow(b, a) \in R$
Thus, $R$ is symmetric.
$\underline{R}$ is Transitive if $(a, b) \in \underline{R}$ and $(b, c) \in \underline{R} \Rightarrow(a, c) \in \underline{R} \underline{\forall} a, b, c \in \underline{S}$
Let $(a, b) \in R$ and $(b, c) \in R \forall a, b, c \in S$
$\Rightarrow \mathrm{a}=\pm \mathrm{b}$ and $\mathrm{b}=\pm \mathrm{c}$
$\Rightarrow \mathrm{a}=\pm \mathrm{c}$
$\Rightarrow(\mathrm{a}, \mathrm{c}) \in \mathrm{R}$
Thus, $R$ is transitive.
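Alternatively, all three properties can be checked at once (an observation added here, not part of the given solution): since
$a = \pm b \iff a^{2} = b^{2},$
$R$ is exactly the relation "$f(a) = f(b)$" for $f(x) = x^{2}$, and any relation of that form inherits reflexivity, symmetry and transitivity from equality.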
Hence, $R$ is an equivalence relation. |
proofpile-shard-0030-83 | {
"provenance": "003.jsonl.gz:84"
} | [Jonathan Castello #27 wrote:](https://forum.azimuthproject.org/discussion/comment/17868/#Comment_17868)
> Matthew, can you elaborate on why reducing Petri net reachability to SAT would imply \$$\text{EXPSPACE} \subseteq \text{NP}\$$? Is Petri net reachability known to be EXPSPACE-complete? I don't think you're necessarily wrong, but the critical step is eluding me.
As I mentioned above, [Cardoza, Lipton and Meyer (1976)](https://dl.acm.org/citation.cfm?id=803630) establish that reachability for *symmetric* Petri nets is \$$\textsf{EXPSPACE}\$$-complete.
I didn't know this when I wrote my argument yesterday; I had to look it up.
If we let \$$\textsf{PETRI-REACH}\$$ be the class of problems reducible to Petri net reachability, then \$$\textsf{EXPSPACE} \subseteq \textsf{PETRI-REACH}\$$.
> Approaching this similarly: We have an exponential lower bound on space for Petri net reachability. As you said, this necessarily imposes an exponential lower bound on time, since you can only write one cell per unit time (per tape). Suppose a reduction to SAT existed. If SAT had a subexponential algorithm, then we could defeat the exponential lower bound; so SAT, and by extension every NP-Complete problem, must not be solvable in subexponential time. Therefore, \$$\text{P} \ne \text{NP}\$$.
This is good, but we can do better I believe.
Not only does \$$\textsf{PETRI-REACH} \subseteq \textsf{NP} \implies \textsf{NP} \neq \textsf{P}\$$, but in fact we have the stronger result:
$$\textsf{NP} \subsetneq \textsf{PETRI-REACH}$$
**Proof.**
It's well known that \$$\textsf{NP} \subseteq \textsf{PSPACE}\$$ (see, for instance [Arora and Barak (2007), §4.2, pg. 78](http://theory.cs.princeton.edu/complexity/book.pdf)).
We also know that \$$\mathsf{PSPACE} \subsetneq \mathsf{EXPSPACE}\$$ from the [space hierarchy separation theorem](https://en.wikipedia.org/wiki/Space_hierarchy_theorem).
Finally we have \$$\mathsf{EXPSPACE} \subseteq \textsf{PETRI-REACH}\$$ by [Cardoza et al. (1976)](https://dl.acm.org/citation.cfm?id=803630).
Hence \$$\textsf{NP} \subsetneq \textsf{PETRI-REACH}\$$.
\$$\Box\$$ |
proofpile-shard-0030-84 | {
"provenance": "003.jsonl.gz:85"
} | ## Introduction
This is a companion to the Wikiversity article on “Time to extinction of civilization”. That article assumes the 1962 Cuban Missile Crisis and the 1983 Soviet nuclear false alarm incident provide one observation on the time between major nuclear crises, with a second time between such crises being censored at the present.
What can we say about the distribution of the time between such major nuclear crises?
With one observed time and a second censored, we can construct a likelihood function, which we can then use to estimate the mean time between such crises and the uncertainty in that estimate. With further estimates of the probability that such a crisis would lead to a nuclear war and nuclear winter, we can simulate such times and obtain plausible bounds on uncertainty in our estimates.
This methodology could later be expanded to consider a larger list of nuclear crises with a broader range of probabilities for each crisis escalating to a nuclear war and winter. The fact that no such nuclear war has occurred as of this writing puts an upper limit on such probabilities. A rough lower limit can be estimated from comments from people like Robert McNamara and Daniel Ellsberg, both of whom have said that as long as there are large nuclear arsenals on earth, it is only a matter of time before a nuclear crisis escalates to such a nuclear Armageddon. McNamara was US Secretary of Defense during the 1962 Cuban Missile Crisis, and Ellsberg was a leading nuclear war planner advising McNamara and the rest of President Kennedy's team during that crisis. For more on this, see the companion Wikiversity article on "Time to extinction of civilization".
We start by being explicit about the observed and censored times between major nuclear crises.
## Times of major nuclear crises
str(eventDates <- c(as.Date(
c('1962-10-16', '1983-09-26')), Sys.Date()))
## Date[1:3], format: "1962-10-16" "1983-09-26" "2021-06-29"
(daysBetween <- difftime(tail(eventDates, -1),
head(eventDates, -1), units='days'))
## Time differences in days
## [1] 7650 13791
(yearsBetween <- as.numeric(daysBetween)/365.24)
## [1] 20.94513 37.75873
names(yearsBetween) <- c('observed', 'censored')
str(yearsBetween)
## Named num [1:2] 20.9 37.8
## - attr(*, "names")= chr [1:2] "observed" "censored"
## Likelihood of times between major nuclear crises
Appendix 1 of that Wikiversity article provides the following likelihood assuming we observe times between major nuclear crises, $$T_1, ..., T_{k-1},$$ plus one censoring time, $$T_k$$, and the times between such crises follow an exponential distribution that does not change over time:
$L(\lambda | \mathbf{T}) = \exp[−S_k / \lambda ] / \lambda^{k-1}$
where $$\mathbf{T}$$ = the vector consisting of $$T_1, ..., T_k$$, and
$S_k = \sum_{i=1}^k{T_i}.$
The exponential distribution is the simplest lifetime distribution. It is widely used for applications like this and seems reasonable in this context.
[For setting math in RMarkdown, we are following Cosma Shalizi (2016) “Using R Markdown for Class Reports”.]
We code this as follows:
Lik <- function(lambda, Times=yearsBetween){
Lik <- (exp(-sum(Times)/lambda) /
(lambda^(length(Times)-1)))
Lik
}
From this, we compute the log(likelihood) as follows:
$l(\lambda | \mathbf{T}) = [(−S_k / \lambda) - (k-1)\log(\lambda)].$
We code this as follows:
logLk <- function(lambda, Times=yearsBetween){
logL <- (-sum(Times)/lambda -
(length(Times)-1)*log(lambda))
logL
}
By differentiating $$l$$ with respect to $$\lambda$$ or $$u = \log(\lambda)$$ or $$\theta = 1/\lambda$$, we get a score function that is zero when $$\lambda$$ is $$\sum T_i/(k-1)$$, where the “-1” comes from assuming that only the last of Times is censored.
The value of parameter(s) that maximize the likelihood is (are) called maximum likelihood estimates (MLEs), and it is standard to distinguish an MLE with a circumflex (^). We use this convention to write the following:
$\hat\lambda = \sum T_i / (k-1).$
This is commonly read “lambda hat”. We code it as follows:
(lambdaHat <- (sum(yearsBetween) /
(length(yearsBetween)-1)))
## [1] 58.70387
From Wilks’ theorem, we know that 2*log(likelihood ratio) is approximately Chi-squared with degrees of freedom equal to the number of parameters estimated, which is 1 in this case.
#(chisq2 <- qchisq(
# c(.8, .95, .99, .999, 1-1e-6), 1))
(chisq2 <- qchisq(c(.2, .05, .01, .001, 1e-6),
1, lower.tail=FALSE))
## [1] 1.642374 3.841459 6.634897 10.827566 23.928127
In reality, because of the questionable nature of our assumptions, we may wish to place less confidence in these numbers than what is implied by the stated confidence levels. However, we will not change these numbers but leave it to the reader to downgrade them as seems appropriate.
Also, in the following, we will mark the 80, 95, and 99 percent confidence intervals on the plots, leaving the more extreme tails for separate computations. For now, we want to plot 2*log(likelihood) in a neighborhood of the MLE. For this, we will focus on the region that is closer than chisq2/2 of the maximum:
lambda <- lambdaHat+seq(-50, 1000, 50)
(logLR2 <- 2*(logLk(lambdaHat) - logLk(lambda)))
## [1] 7.6716709 0.0000000 0.3123131 0.7288601 1.0993781 1.4201875 1.7000785
## [8] 1.9472942 2.1682390 2.3677539 2.5495187 2.7163710 2.8705341 3.0137775
## [15] 3.1475298 3.2729597 3.3910346 3.5025638 3.6082309 3.7086184 3.8042264
## [22] 3.8954878
After several attempts at adjusting the parameters for the seq function while including 0 in the sequence, I gave up trying to get a range with 0 inside and just over qchisq(0.99, 1) = 6.63 on both ends: Obviously, I got it for $$\lambda$$ small. However, it seemed infeasible to do this with only a relatively few points for $$\lambda$$ large: The MLE here is only the second of 22 evenly-spaced points on the $$\lambda$$ scale while the value for the 22nd point is not close to the target 6.63. Let’s try $$u = \log(\lambda)$$:
l_lam <- log(lambdaHat)+seq(-2, 6, 1)
(logLR2_u <- 2*(logLk(lambdaHat) - logLk(exp(l_lam))))
## [1] 8.7781122 1.4365637 0.0000000 0.7357589 2.2706706 4.0995741 6.0366313
## [8] 8.0134759 10.0049575
This seems more sensible, though still somewhat skewed, with the MLE as the third of 9 evenly-spaced points on the $$u$$ scale. What about $$\theta = 1/\lambda$$?
theta <- (1/lambdaHat + seq(-.016, .06, .004))
(logLR2_th <- 2*(logLk(lambdaHat) - logLk(1/theta)))
## [1] 3.72384302 1.02891730 0.32910245 0.06564557 0.00000000 0.04778785
## [7] 0.16923926 0.34241206 0.55390888 0.79494593 1.05945113 1.34305135
## [13] 1.64249230 1.95528667 2.27949060 2.61355602 2.95623001 3.30648425
## [19] 3.66346432 4.02645258
This looks worse: With $$\hat\theta = 1/\hat\lambda = 0.0178$$, it seems infeasible to get a sequence of only a few equally-spaced positive numbers that include 0 and still produce numbers just over qchisq(0.99, 1) = 6.63 on either ends, as we created on the $$\lambda$$ scale, let alone both as we have on the $$u$$ scale.
This suggests we should parameterize our analysis in terms of $$u = \log(\lambda)$$. To confirm this, let’s create plots on all three scales, starting with $$u$$.
However, to save space on CRAN, we will not plot them by default; to see the plots, a user will need to manually set makePlots <- TRUE:
makePlots <- FALSE
library(grDevices)
outType <- ''
#outType = 'png'
switch(outType,
svg=svg('yrs2Armageddon.svg'),
# need png(..., 960, 960), because the default 480
# is not sufficiently clear to easily read the labels
png=png('yrs2Armageddon.png', 960, 960)
)
op <- par(mar=c(6, 4, 4, 2)+.1)
# Experiment with the range of "seq" here until
# head and tail of logLR2_u. are just over 6.63:
u. <- log(lambdaHat)+seq(-1.86, 4.36, .02)
lam. <- exp(u.)
logLR2_u. <- 2*(logLk(lambdaHat) - logLk(lam.))
head(logLR2_u., 1)
## [1] 7.127474
tail(logLR2_u., 1)
## [1] 6.745557
if(makePlots){
plot(lam., logLR2_u., type='l', bty='n', log='x',
xlab='', ylab='', las=1, axes=FALSE, lwd=2)
axis(2, las=1)
# xlab = \lambda:
# Greek letters did not render in GIMP 2.10.8 on 2018-12-30,
# so don't use svg until this is fixed.
switch(outType,
# cex doesn't work properly with svg > GIMP
# Therefore, I can NOT use svg
svg={cex2 <- 2; mtext('lambda', 1, 1.6, cex=cex2)},
png={cex2 <- 2; mtext(expression(lambda), 1,
1.6, cex=cex2)},
{cex2 <- 1.3; mtext(expression(lambda), 1,
1.6, cex=cex2)}
)
lamTicks <- axTicks(1)
thTicks <- 1/lamTicks
switch(outType,
svg=mtext('theta == 1/lambda', 1, 4.9, cex=cex2),
mtext(expression(theta == 1/lambda), 1, 4.9, cex=cex2)
)
abline(h=chisq2, col='red', lty=c('dotted', 'dashed'),
lwd=2)
(CI.8 <- range(lam.[logLR2_u. <= chisq2[1]]))
text(lambdaHat, chisq2[1],
paste0('80% CI =\n(',
paste(round(CI.8), collapse=', '), ')'),
cex=cex2)
(CI.95 <- range(lam.[logLR2_u. <= chisq2[2]]))
text(lambdaHat, chisq2[2],
paste0('95% CI =\n(',
paste(round(CI.95), collapse=', '), ')'),
cex=cex2)
abline(v=CI.8, col='red', lty='dotted', lwd=2)
abline(v=CI.95, col='red', lty='dashed', lwd=2)
(CI.99 <- range(lam.[logLR2_u. <= chisq2[3]]))
text(lambdaHat, chisq2[3],
paste0('99% CI =\n(',
paste(round(CI.99), collapse=', '), ')'),
cex=cex2)
abline(v=CI.8, col='red', lty='dotted', lwd=2)
abline(v=CI.95, col='red', lty='dashed', lwd=2)
abline(v=CI.99, col='red', lty='dashed', lwd=2)
if(outType != '')dev.off()
par(op)
}
Let’s produce this same plot without the log scale for $$\lambda$$:
# copy the code from the last snippet
# and delete "log='x'", then adjust the placement
# of CI text
switch(outType,
svg=svg('yrs2Armageddon_lin.svg'),
# need png(..., 960, 960), because the default 480
# is not sufficiently clear to easily read the labels
png=png('yrs2Armageddon_lin.png', 960, 960)
)
op <- par(mar=c(6, 4, 4, 2)+.1)
u. <- log(lambdaHat)+seq(-1.86, 4.36, .02)
lam. <- exp(u.)
logLR2_u. <- 2*(logLk(lambdaHat) - logLk(lam.))
head(logLR2_u., 1)
## [1] 7.127474
tail(logLR2_u., 1)
## [1] 6.745557
if(makePlots){
plot(lam., logLR2_u., type='l', bty='n',
xlab='', ylab='', las=1, axes=FALSE, lwd=2)
axis(2, las=1)
# xlab = \lambda:
# Greek letters did not render in GIMP 2.10.8 on 2018-12-30,
# so don't use svg until this is fixed.
switch(outType,
# cex doesn't work properly with svg > GIMP
# Therefore, I can NOT use svg
svg={cex2 <- 2; mtext('lambda', 1, 1.6, cex=cex2)},
png={cex2 <- 2; mtext(expression(lambda), 1,
1.6, cex=cex2)},
{cex2 <- 1.3; mtext(expression(lambda), 1,
1.6, cex=cex2)}
)
lamTicks <- axTicks(1)
thTicks <- 1/lamTicks
switch(outType,
svg=mtext('theta == 1/lambda', 1, 4.9, cex=cex2),
mtext(expression(theta == 1/lambda), 1, 4.9, cex=cex2)
)
abline(h=chisq2, col='red', lty=c('dotted', 'dashed'),
lwd=2)
(CI.8 <- range(lam.[logLR2_u. <= chisq2[1]]))
#text(lambdaHat, chisq2[1],
text(400, chisq2[1],
paste0('80% CI =\n(',
paste(round(CI.8), collapse=', '), ')'),
cex=cex2)
(CI.95 <- range(lam.[logLR2_u. <= chisq2[2]]))
#text(lambdaHat, chisq2[2],
text(800, chisq2[2],
paste0('95% CI =\n(',
paste(round(CI.95), collapse=', '), ')'),
cex=cex2)
abline(v=CI.8, col='red', lty='dotted', lwd=2)
abline(v=CI.95, col='red', lty='dashed', lwd=2)
(CI.99 <- range(lam.[logLR2_u. <= chisq2[3]]))
#text(lambdaHat, chisq2[3],
text(3000, chisq2[3],
paste0('99% CI =\n(',
paste(round(CI.99), collapse=', '), ')'),
cex=cex2)
abline(v=CI.8, col='red', lty='dotted', lwd=2)
abline(v=CI.95, col='red', lty='dashed', lwd=2)
abline(v=CI.99, col='red', lty='dashed', lwd=2)
if(outType != '')dev.off()
}
par(op)
The plot vs. $$\log(\lambda)$$ is obviously skewed, but this linear plot is vastly worse.
What about linear in $$\theta = 1/\lambda$$?
switch(outType,
svg=svg('yrs2Armageddon_inverse.svg'),
png=png('yrs2Armageddon_inverse.png', 960, 960)
)
op <- par(mar=c(6, 4, 4, 2)+.1)
# This will require more changes than just deleting log='x':
u. <- log(lambdaHat)+seq(-1.86, 4.36, .02)
lam. <- exp(u.)
logLR2_u. <- 2*(logLk(lambdaHat) - logLk(lam.))
head(logLR2_u., 1)
## [1] 7.127474
tail(logLR2_u., 1)
## [1] 6.745557
if(makePlots){
plot(-1/lam., logLR2_u., type='l', bty='n',
xlab='', ylab='', las=1, axes=FALSE, lwd=2)
thTicks <- (-axTicks(1))
axis(2, las=1)
# xlab = \lambda:
# Greek letters did not render in GIMP 2.10.8 on 2018-12-30,
# so don't use svg until this is fixed.
switch(outType,
# cex doesn't work properly with svg > GIMP
# Therefore, I can NOT use svg
svg={cex2 <- 2; mtext('lambda', 1, 1.6, cex=cex2)},
png={cex2 <- 2; mtext(expression(lambda), 1,
1.6, cex=cex2)},
{cex2 <- 1.3; mtext(expression(lambda), 1,
1.6, cex=cex2)}
)
switch(outType,
svg=mtext('theta == 1/lambda', 1, 4.9, cex=cex2),
mtext(expression(theta == 1/lambda), 1, 4.9, cex=cex2)
)
abline(h=chisq2, col='red', lty=c('dotted', 'dashed'),
lwd=2)
(CI.8 <- range(lam.[logLR2_u. <= chisq2[1]]))
#text(lambdaHat, chisq2[1],
text(-.02, chisq2[1],
paste0('80% CI =\n(',
paste(round(CI.8), collapse=', '), ')'),
cex=cex2)
(CI.95 <- range(lam.[logLR2_u. <= chisq2[2]]))
#text(lambdaHat, chisq2[2],
text(-.04, chisq2[2],
paste0('95% CI =\n(',
paste(round(CI.95), collapse=', '), ')'),
cex=cex2)
abline(v=CI.8, col='red', lty='dotted', lwd=2)
abline(v=CI.95, col='red', lty='dashed', lwd=2)
(CI.99 <- range(lam.[logLR2_u. <= chisq2[3]]))
#text(lambdaHat, chisq2[3],
text(-.06, chisq2[3],
paste0('99% CI =\n(',
paste(round(CI.99), collapse=', '), ')'),
cex=cex2)
abline(v=-1/CI.8, col='red', lty='dotted', lwd=2)
abline(v=-1/CI.95, col='red', lty='dashed', lwd=2)
abline(v=-1/CI.99, col='red', lty='dashed', lwd=2)
if(outType != '')dev.off()
}
par(op)
Clearly, we don’t want to mess with the $$\theta = 1/\lambda$$ scale, and $() seems the best for understanding what’s happening here. ## Monte Carlo the time between major nuclear crises If we had a probability distribution for $$\lambda$$, $$u = \log(\lambda)$$, or $$\theta = 1/\lambda$$, we could simulate that. To get such, we recall that one statement of Bayes’ theorem is that the posterior is proportional to the likelihood times the prior. However, what should we use as a prior? The Wikipedia article on the exponential distribution describes several more or less standard priors for that distribution. There’s not just one, and they all seem more complicated than what we need here. Instead, we will use the improper prior that is uniform in $$u = \log(\lambda)$$. To support this, we note that the exponential distribution is closer to the lognormal than to a normal, and the distribution of the reciprocal of an exponential random variable is even farther from normal, as we see in the following simulation: set.seed(1) simExp <- rexp(1000) if(makePlots){ qqnorm(simExp, datax=TRUE) qqnorm(simExp, datax=TRUE, log='x') qqnorm(1/simExp, datax=TRUE) } Let’s rewrite the above likelihood in terms of $$u$$: $L(u | \mathbf{T}) = \exp[−S_k e^{-u} - (k-1)u].$ With an improper prior locally uniform in $$u = \log(\lambda)$$, we get the following: $P(a < \lambda \leq b | \mathbf{T}) \propto \int_{\log(a)}^{\log(b)}{\exp[-S_k e^{-u}] e^{-(k-1)u}du}$ Let’s transform this back by replacing $$u$$ with $$\lambda = e^u$$: $P(a < \lambda \leq b | \mathbf{T}) \propto \int_a^b \exp[-S_k / \lambda] \lambda^{-k} d\lambda$ This says that the posterior for $$\lambda$$ follows an inverse-gamma distribution with shape parameter $$(k-1)$$ and scale $$S_k$$. The moments for this distribution are as follows: $\mathbb{E}(\lambda^r | \mathbf{T}) = S_k^r \Gamma(k-1-r) / \Gamma(k-1).$ If $$(k-1-r)$$ is an integer less than 1, this is infinite, which it is for $$k$$ = 2 and $$r$$ = 1, the case of most interest here. This, in turn, means that the sample moments of real or Monte Carlo data will be highly erratic. This also elevates the priority for increasing $$k$$ by considering a larger list of nuclear crises, as previously mentioned. Functions to compute the density, cumulative probability distribution (CDF), quantiles, and random numbers for this distribution are available in the CRAN package invgamma. We will use those in the following. library(invgamma) set.seed(123) rlambda2 <- function(n, sumTimes=lambdaHat){ # -sumTimes/log(runif(n)) # k <- 2; rinvgamma(n k-1, scale=sumTimes) rinvgamma(n, 1, rate=sumTimes) } simLam <- rlambda2(1e4) quantile(simLam) ## 0% 25% 50% 75% 100% ## 5.610946e+00 4.289177e+01 8.642446e+01 2.046092e+02 2.515467e+05 mean(simLam) ## [1] 458.0757 This distribution is obviously highly skewed as expected, with a mode (MLE) at 56 years and a mean estimated here at 439 ## Probability that a major nuclear war might lead to the extinction of civilization The analysis in the Wikiversity article on “time to extinction of civilization” includes estimated of the probability that a major nuclear crisis like the 1962 Cuban Missile Crisis or the 1983 Soviet nuclear false alarm incident would lead to a major nuclear war. The numbers given there ranged from 0.3 to 0.6 with a typical number of 0.45. For present purposes, we shall assume that (0.3, 0.6) represent an 80 percent, equal tail confidence interval of a beta distribution and estimate its two shape parameters, $$\alpha$$ and $$\beta$$. 
Doing this requires an iteration, because no simple formula exists for this. We want to shape1 = $$\alpha$$ and shape2 = $$\beta$$ to satisfy the following: 0.1 = pbeta(0.3, shape1, shape2) 0.9 = pbeta(0.6, shape1, shape2) The most reliable way to solve equations like these is to convert this into a minimization problem and use something like optim: Dev2 <- function(shapes, p=c(.3, .6), q=c(.1, .9)){ devs <- (q - pbeta(p, shapes[1], shapes[2])) sum(devs^2) } # test Dev2(c(1, 1)) ## [1] 0.13 The beta distribution with parameters (1, 1) is just the uniform distribution. Manual computation shows that this is the correct answer for this case. (betaSolve <-optim(c(1,1), Dev2, method="L-BFGS-B", lower=c(0,0))) ##$par
## [1] 7.940077 9.757428
##
## $value
## [1] 3.313248e-14
## 
## $counts
## 16 16
##
## $convergence
## [1] 0
## 
## $message
## [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
What’s the mean of this distribution?
Recall that the mean of the $$B(\alpha, \beta)$$ distribution is $$\alpha / (\alpha+\beta)$$, and its variance is as follows:
$\mathrm{var}(Q) = \alpha\beta / [(\alpha+\beta)^2 (\alpha+\beta+1)]$
a.b <- sum(betaSolve$par)
(meanBeta <- betaSolve$par[1]/a.b)
## [1] 0.4486552
(varBeta <- with(betaSolve, par[1]*par[2] /
(a.b^2 * (a.b+1))))
## [1] 0.01322977
That’s quite close to the representative value of 0.45 discussed in the companion Wikiversity article on “time to extinction of civilization”.
## Monte Carlo time to extinction of civilization
This section will start with a function to generate N random times to Armageddon as follows:
1. start timing
2. Generate N random variates Q ~ $$B(\alpha, \beta)$$ indicating the probability that each simulated crisis in a sequence would produce a nuclear Armageddon.
3. From this, generate N random variables K ~ $$NB(Q, 1)$$ indicating the number of simulated crises in a series required to produce one nuclear Armageddon with Q[i] = probability of each being the last, for i = 1, ..., N.
4. For each i, compute Time[i] <- sum(rlambda2(K[i])).
5. compute elapsed.time
6. Return (Time, gammapars, gammaGOF, elapsed.time)
First do this with set.seed(1), N=10 and time the result. Then set.seed(j), N=10^j, j = 2, 3, …, timing each one. Save the results until the time gets too long to continue or we get to N = 1e7.
mcArmageddon <- function(N,
betapars=betaSolve$par, sumTimes=lambdaHat){
# 1. Start time
start <- proc.time()
# 2. Q ~ B(\alpha, \beta)
Q <- rbeta(N, betapars[1], betapars[2])
# 3. K ~ NB(Q, 1)
K <- (1+rnbinom(N, 1, Q))
# 4. Time[i] <- sum(rlambda2(K[i]))
Time <- numeric(N)
for(i in 1:N){
Time[i] <- sum(rlambda2(K[i], sumTimes=sumTimes))
}
attr(Time, 'Qbar') <- mean(Q)
attr(Time, 'quantileQ') <- quantile(Q)
attr(Time, 'Kbar') <- mean(K)
attr(Time, 'quantileK') <- quantile(K)
# 5. quantiles
cat('meanTime = ', mean(Time), '\n')
print(quantile(Time))
# 6. elapsed.time
et <- (proc.time()-start)
# 7. Return et as an attribute
attr(Time, 'elapsed.time') <- et
cat('et = ', et, '\n')
Time
}
set.seed(1)
(mcArm1 <- mcArmageddon(10))
## meanTime =  466.7534 
##         0%        25%        50%        75%       100% 
##   18.25182  211.57416  243.10821  599.37997 1875.07414 
## et =  0.002 0 0.002 0 0 
##  [1]  244.04749  203.08854  242.16893  237.03103 1875.07414  359.94887
##  [7]  785.11949   23.61287   18.25182  679.19034
## attr(,"Qbar")
## [1] 0.4652404
## attr(,"quantileQ")
##        0%       25%       50%       75%      100% 
## 0.3381862 0.3763977 0.4832317 0.5115655 0.6668375 
## attr(,"Kbar")
## [1] 2.7
## attr(,"quantileK")
##   0%  25%  50%  75% 100% 
##  1.0  1.0  2.5  3.0  9.0 
## attr(,"elapsed.time")
##    user  system elapsed 
##   0.002   0.000   0.002
This all looks sensible. Let's try larger sample sizes:
set.seed(2)
mcArm2 <- mcArmageddon(100)
## meanTime =  1284.154 
##          0%         25%         50%         75%        100% 
##    20.47838   122.77016   253.82983   707.74738 46207.17853 
## et =  0.002 0 0.001 0 0 
attributes(mcArm2)
## $Qbar
## [1] 0.4447869
##
## $quantileQ
##        0%       25%       50%       75%      100% 
## 0.1327921 0.3396490 0.4205039 0.5277240 0.7451057 
## 
## $Kbar
## [1] 2.5
##
## $quantileK
##   0%  25%  50%  75% 100% 
##    1    1    2    3   18 
## 
## $elapsed.time
## user system elapsed
## 0.002 0.000 0.001
set.seed(3)
mcArm3 <- mcArmageddon(1000)
## meanTime = 856.0125
## 0% 25% 50% 75% 100%
## 9.156532e+00 8.640038e+01 2.449606e+02 6.936085e+02 1.127642e+05
## et = 0.007 0 0.007 0 0
attributes(mcArm3)
## $Qbar
## [1] 0.4454203
## 
## $quantileQ
## 0% 25% 50% 75% 100%
## 0.1536354 0.3622609 0.4477545 0.5233040 0.7677098
##
## $Kbar
## [1] 2.441
## 
## $quantileK
## 0% 25% 50% 75% 100%
## 1 1 2 3 16
##
## $elapsed.time
##    user  system elapsed 
##   0.007   0.000   0.007
N = 1000 still takes only 0.009 seconds.
set.seed(4)
mcArm4 <- mcArmageddon(1e4)
## meanTime =  1470.756 
##           0%          25%          50%          75%         100% 
## 6.892384e+00 8.081736e+01 2.318426e+02 6.521557e+02 1.831200e+06 
## et =  0.067 0.006 0.074 0 0 
attributes(mcArm4)
## $Qbar
## [1] 0.4486821
##
## $quantileQ
##        0%       25%       50%       75%      100% 
## 0.1097186 0.3672180 0.4469580 0.5286158 0.8215966 
## 
## $Kbar
## [1] 2.4073
##
## $quantileK
##   0%  25%  50%  75% 100% 
##    1    1    2    3   34 
## 
## $elapsed.time
## user system elapsed
## 0.067 0.006 0.074
The time was still only 0.074 seconds, so let’s try N=1e5:
set.seed(5)
mcArm5 <- mcArmageddon(1e5)
## meanTime = 2465.218
## 0% 25% 50% 75% 100%
## 5.059301e+00 8.391303e+01 2.282285e+02 6.562214e+02 7.826275e+07
## et = 0.634 0.038 0.672 0 0
attributes(mcArm5)
## $Qbar
## [1] 0.447376
## 
## $quantileQ
## 0% 25% 50% 75% 100%
## 0.06245706 0.36597625 0.44482916 0.52712949 0.90567388
##
## $Kbar
## [1] 2.41038
## 
## $quantileK
## 0% 25% 50% 75% 100%
## 1 1 2 3 38
##
## $elapsed.time
##    user  system elapsed 
##   0.634   0.038   0.672
The time was still only 0.535 – well under 10 times N = 1e4. What about a million?
set.seed(6)
mcArm6 <- mcArmageddon(1e6)
## meanTime =  3623.17 
##           0%          25%          50%          75%         100% 
## 4.662591e+00 8.403335e+01 2.287426e+02 6.521607e+02 1.369962e+09 
## et =  5.794 0.18 5.978 0 0 
attributes(mcArm6)
## $Qbar
## [1] 0.44865
##
## $quantileQ
##         0%        25%        50%        75%       100% 
## 0.05289224 0.36735541 0.44658543 0.52786836 0.91813004 
## 
## $Kbar
## [1] 2.404252
##
## $quantileK
##   0%  25%  50%  75% 100% 
##    1    1    2    3   52 
## 
## $elapsed.time
## user system elapsed
## 5.794 0.180 5.978
This took just over 5 seconds.
For the Wikiversity article on “Time to extinction of civilization”, we’d like the percentages of these times that are less than 40 and 60 years, representing roughly the remaining lives of half the people currently alive today and the time remaining in the twenty-first century as of this writing, as well as the quantiles of one in a million and one in a thousand chances:
mean(mcArm6)
## [1] 3623.17
mean(mcArm6<40)
## [1] 0.105078
mean(mcArm6<60)
## [1] 0.178196
quantile(mcArm6, c(1e-6, 1e-3))
## 0.0001% 0.1%
## 4.767077 9.577582
Let’s see if we can generate 1e7 random times in, hopefully, just over 50 seconds:
if(!fda::CRAN()){
# Don't run this with CRAN tests,
# because it takes too long
set.seed(7)
mcArm7 <- mcArmageddon(1e7)
print(attributes(mcArm7))
print(mean(mcArm7))
print(quantile(mcArm7, c(1e-6, 1e-3)))
}
## meanTime = 2903.54
## 0% 25% 50% 75% 100%
## 3.860435e+00 8.395275e+01 2.285149e+02 6.536271e+02 2.099839e+09
## et = 55.877 2.117 58.067 0 0
## $Qbar
## [1] 0.4486429
## 
## $quantileQ
## 0% 25% 50% 75% 100%
## 0.03291029 0.36729410 0.44656173 0.52794313 0.93453413
##
## $Kbar
## [1] 2.406332
## 
## $quantileK
## 0% 25% 50% 75% 100%
## 1 1 2 3 79
##
## $elapsed.time
##    user  system elapsed 
##  55.877   2.117  58.067 
## 
## [1] 2903.54
##  0.0001%     0.1% 
## 4.656742 9.607014
Let's make a normal probability plot of log(mcArm7). With this many points, the standard qqnorm function can take a long time creating the plot. Let's start by sorting the points as a separate step:
if(fda::CRAN()){
mcArm. <- mcArm6
} else mcArm. <- mcArm7
mcArm.s <- sort(mcArm.)
quantile(mcArm.s)
##           0%          25%          50%          75%         100% 
## 3.860435e+00 8.395275e+01 2.285149e+02 6.536271e+02 2.099839e+09 
Next, let's call qqnorm without plotting:
str(qq7 <- as.data.frame(qqnorm(mcArm.s, plot.it=FALSE)))
## 'data.frame': 10000000 obs. of 2 variables:
##  $ x: num  -5.33 -5.12 -5.03 -4.96 -4.91 ...
##  $ y: num  3.86 4.09 4.4 4.41 4.49 ...
Let’s cut the data down to the first and last 10 plus 9 of the next 90 from each end plus 1 percent of the rest:
N. <- length(mcArm.s)
#index.5 <- c(1:10, seq(20, 100, 10),
# seq(200, 1000, 100),
# seq(3000, (N./2)-2000, 1000))
index.5 <- c(1:1000,
seq(2000, (N./2)-2000, 1000))
index <- c(index.5, N.+1-rev(index.5))
tail(index, 30)
## [1] 9999971 9999972 9999973 9999974 9999975 9999976 9999977 9999978
## [9] 9999979 9999980 9999981 9999982 9999983 9999984 9999985 9999986
## [17] 9999987 9999988 9999989 9999990 9999991 9999992 9999993 9999994
## [25] 9999995 9999996 9999997 9999998 9999999 10000000
length(index)
## [1] 11994
# yes: I think I did this right.
switch(outType,
svg=svg('yrs2ArmageddonQQ.svg'),
# need png(..., 960, 960),
# because the default 480 is not sufficiently
# clear to easily read the labels
png=png('yrs2ArmageddonQQ.png', 960, 960)
)
op <- par(mar=c(5, 5, 4, 5)+.1)
if(makePlots){
with(qq7, plot(y[index], x[index], type='l',
log='x', las=1, bty='n', lwd=2,
xlab='', ylab='',
cex.lab=2, axes=FALSE) )
# xlab='years to Armageddon',
# ylab='standard normal scores',
axis(1, cex.axis=2)
axis(2, cex.axis=2, las=1)
probs <- c(.001, .01, .1, .25, .5, .75,
.9, .99, .999)
z <- qnorm(probs)
if(outType==''){
cex.txt <- 1.5
cex.ax4 <- 1.3
} else {
cex.txt <- 3
cex.ax4 <- 2
}
axis(4, z, probs, cex.axis=cex.ax4,
las=1, line=-.5)
p40 <- mean(mcArm.s<40)
p60 <- mean(mcArm.s<60)
z40.60 <- qnorm(c(p40, p60))
max7 <- tail(mcArm.s, 1)
lines(c(rep(40, 2), max7),
c(-5, rep(z40.60[1], 2)),
lty='dotted', lwd=2, col='red')
lines(c(rep(60, 2), max7),
c(-5, rep(z40.60[2], 2)),
lty='dashed', lwd=2, col='purple')
text(15, -5, '40', col='red', cex=cex.txt)
text(200, -2.5, '60', col='purple', cex=cex.txt)
text(.2*max7, z40.60[1]-.6,
paste0(round(100*p40), "%"), cex=cex.txt,
col='red')
text(.2*max7, z40.60[2]+.6,
paste0(round(100*p60), "%"), cex=cex.txt,
col='purple')
}
par(op)
if(outType != '')dev.off() |
proofpile-shard-0030-85 | {
"provenance": "003.jsonl.gz:86"
} | # infinity of running couplings
+ 5 like - 0 dislike
93 views
A Landau pole - an infinity occurring in the running of coupling constants in QFT is a known phenomena. How does the Landau pole energy scale behave if we increase the order of our calculation, (more loops) especially in the case of Higgs quadrilinear coupling?
This post has been migrated from (A51.SE)
asked Nov 6, 2011
retagged Mar 7, 2014
I don't think there is a universal answer: you calculate the relevant beta function to higher order in the theory you are interested in, and see what happens. The relevant calculations are standard QFT ones, described in many textbooks.
This post has been migrated from (A51.SE)
Fine, Moshe, but there's still a well-defined question (implicitly contained above) whether the Landau pole of $\lambda\phi^4$ in $d=4$ may go away if we calculate it more accurately, e.g. to all orders or exactly non-perturbatively. It almost certainly doesn't go away but not so much due to explicit calculations but because there would have to be a UV fixed point that flows to the interacting scalar theory. It doesn't seem to exist - as far as we know the candidate theories - so the theory should disappear at some scale, the Landau pole.
This post has been migrated from (A51.SE)
Thank you for the answers. I was particularly interested in 1. if there is a chance that this problem does go away if treated non-perturbatively and if that does not happen 2. is it possible that it gets worse - the energy scale at which our theory breaks down gets smaller while including more orders of calculation. But as for the second question, as Moshe wrote, now I think it should depend on a particular case.
This post has been migrated from (A51.SE)
I don't have time for an answer now, but quick summary is that anything is possible. The complete running is encoded in the beta function, which has the answer to all your questions. One loop results only give you the first Taylor coefficient of that function at small coupling. This small piece of information is consistent with lots of different scenarios, including those you mention.
This post has been migrated from (A51.SE)
AAB, no doubt about it, the higher orders at least modify the speed with which the Landau pole is approaching. It can slow it down or speed it up. The scalar theory probably has to break down at some point but gauge theories may sometimes continue, via S-dualities, and one gets interesting "cascades of Seiberg dualities" where one may switch from a divergent coupling to an equivalent tiny one many times as the energy is being raised.
This post has been migrated from (A51.SE)
@Moshe: I respect your job and have nothing against it but you deliberately include my research results into "non professional" category.
This post has been migrated from (A51.SE)
Yes, I have. But, this is besides the point. None of your comments has anything to do with the question. You should not take any mention of the word “renormalization” as an invitation to start a discussion of your own issues with the subject.
This post has been migrated from (A51.SE)
+ 5 like - 0 dislike
The $\beta$ - function of a coupling determines its energy dependence. This in turn is a function of all the couplings in the theory, usually calculated in perturbation theory. So, things could be complicated for multi-dimensional coupling space.
For a single coupling, assume the one loop result is positive. This means that as long as the coupling is weak, it will grow with the energy scale. If you extrapolate that result way beyond its region of validity, you find that the coupling becomes infinite at some finite energy scale (but, long before that, perturbation theory breaks down). This is such a fantastically high energy scale that this so-called Landau pole is an academic issue. Any QFT typically has an energy range where it is useful as an effective field theory, and it is not typically valid or useful in such a huge range of energy scales. In any event, at these enormous energy scales quantum gravity is definitely relevant, and it is unlikely to be a quantum field theory at all. For these reasons the Landau pole is no longer a concern for most people; it was more of an issue when QFT was thought to be well-defined at all energy scales.
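To make the one-loop statement concrete (a standard estimate added here, not part of the original answer): writing the leading behaviour as $\beta(\lambda) = b\,\lambda^2$ with $b>0$ and integrating the running from a reference scale $\mu_0$ gives
$$\lambda(\mu) = \frac{\lambda(\mu_0)}{1 - b\,\lambda(\mu_0)\,\ln(\mu/\mu_0)},$$
which formally blows up at
$$\mu_L = \mu_0\,\exp\!\left[\frac{1}{b\,\lambda(\mu_0)}\right].$$
Higher-loop corrections change the effective $b$ and therefore shift this estimate of $\mu_L$ (as the comments above note, they can speed up or slow down the approach to the pole), but they do not by themselves remove it as long as the perturbative $\beta$-function stays positive.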
To your question, since the coupling becomes strong, pretty much anything can happen. It may be that the coupling does diverges at some energy scale (higher or lower than the initial estimate), though to make that statement with confidence you'd need to be able to calculate the $\beta$ - function at strong coupling. If this is the case, your QFT is an effective field theory defined only at sufficiently low energy scales.
It may also be that the $\beta$ - function gets some negative contributions and starts decreasing, whereas a zero becomes possible. When this happens the coupling constant increases initially, but stops running at some specific value. This is the scenario of UV fixed point, which makes the theory well-defined at all energy scales. In this case the problem, such as it is, indeed goes away.
This post has been migrated from (A51.SE)
answered Nov 8, 2011 by (2,395 points)
One can see M. Gell-Mann's interview about this in episode 53, although it is worth to watch episodes 50-55. http://www.webofstories.com/play/10607?o=MS
This post has been migrated from (A51.SE)
Interesting. Thanks for that.
This post has been migrated from (A51.SE)
+ 1 like - 0 dislike
Landau pole is not a mathematically consistent object. The reason relies on its derivation based on a few terms of a perturbative expansion. A typical case of this is provided by the scalar field. Just consider the following academic case
$$L=\frac{1}{2}(\partial\phi)^2-\frac{\lambda}{4}\phi^4.$$
This field has the following behaviors:
$$\beta(\lambda)=\frac{3^3\lambda^2}{4\pi^2}, \qquad \lambda\rightarrow 0$$
and, as proved by several authors (e.g. see http://arxiv.org/abs/1102.3906 and http://arxiv.org/abs/1011.3643),
$$\beta(\lambda)=4\lambda, \qquad \lambda\rightarrow\infty$$
This implies that, by a continuity argument, the Landau pole simply does not exist for the scalar field but this is anyhow trivial. The factor 4 in the infrared limit is indeed the space-time dimensionality.
This post has been migrated from (A51.SE)
answered Nov 8, 2011 by (345 points)
So $\beta (\lambda) \approx \frac{4\lambda^2}{ \lambda + \frac{16 \pi^2}{27}}$ is a good approximation for all $\lambda$? You wanted to say "the ultraviolet limit" here?
This post has been migrated from (A51.SE)
Yes, the first beta function is for the ultraviolet limit but I do not know the full beta function. We can only state the ones at limits. Yours is just a guess.
This post has been migrated from (A51.SE)
The factor 4 stands at $\lambda\to\infty$ and you call it "IR limit". But it is a strong coupling limit, isn't it?
This post has been migrated from (A51.SE)
Infrared limit and strong coupling limit are the same thing as the ultraviolet limit corresponds to the weak coupling limit. The factor 4 is the space-time dimensionality.
This post has been migrated from (A51.SE)
You omitted the mass term in order to simulate a non-Abelian gauge field, I guess. Another question, if the beta-function is known exactly, does that mean you can fulfill the renormalization exactly and get rid of all bare stuff? Can the exactly renormalized theory be the desired physical theory to deal with from the very beginning?
This post has been migrated from (A51.SE)
@Vladimir, we all know where your leading questions are leading. Your contributions are appreciated (for example, I think your previous comment here is correct), but I think you will have a more valuable and pleasant experience here if you stop trying to hijack threads and lead them to your "reformulation" issue. You will not be able to reach that particular destination here, there are not going to be any discussions of alternatives to established physics here.
This post has been migrated from (A51.SE) |
proofpile-shard-0030-86 | {
"provenance": "003.jsonl.gz:87"
} |
# Homework Help: Physical Science
Posted by James on Thursday, October 18, 2012 at 4:07pm.
A block of ice at 0 °C is dropped from a height that causes it to completely melt upon impact. Assume that there is no air resistance and that all the energy goes into melting the ice. What is the height necessary for this to occur?
• Physical Science - Elena, Thursday, October 18, 2012 at 4:33pm
Q=λm
E=mgh
Q=E
λm= mgh
h=λ/g = 335000/9.8 = 34184 m |
proofpile-shard-0030-87 | {
"provenance": "003.jsonl.gz:88"
} | # Archive — hermiene.net
"Any road followed precisely to its end leads precisely nowhere. Climb the mountain just a little bit to test that it's a mountain. From the top of the mountain, you cannot see the mountain."
### February 13, 2005
Up until very recently, I have been using mIRC for all my IRC needs, but I've come to realize that XChat is very good. Not in the same way as having CSS3 readily available and supported everywhere would be very good, but in the same way as having CSS2.1 half-decently supported everywhere would be very good. The only complaints I have about the program is that there doesn't seem to be an option to turn angle brackets around nicks on (you have to hack them in yourself), and that there doesn't seem to be a way to remove the redundant space it appends to tab-completed nicks. And now for the good stuff. Something I've always wanted is for the logs to have a different timestamp format than the channel windows (or the private message windows, for that matter). XChat does that. Another pretty genius thing it does is grey out the nicks who are /away. And the logs are in UTF-8! Hooray! And finally, a client where marking text doesn't automatically copy it to the clipboard.
I finally managed to write some PHP code that extracts all occurrences of PHP code in a string (in particular, a string from a database), returns the evaluated code, and plugs it in at the right places. I love it when stuff works, especially when you've spent a lot of time on it and finally reach an epiphany where you understand the whole thing. Would you like to see it? I knew you would. :-)
function return_eval($code) {
    ob_start();
    eval($code);
    return ob_get_clean();
}

function return_php_output($arg) {
    $pattern = "/<\?php ([^?]+) \?>/";
    preg_match($pattern, $arg, $matches);
    while ($matches) {
        $arg = preg_replace($pattern, return_eval($matches[1]), $arg, 1);
        preg_match($pattern, $arg, $matches);
    }
    return $arg;
}
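A quick usage sketch (not from the original post; the sample string is invented here):
// The embedded snippet is evaluated and its output spliced back in,
// so this prints "Hello 4 world".
$page = 'Hello <?php echo 2 + 2; ?> world';
echo return_php_output($page);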
Here, have a go at this puzzle. Just read the instructions, click around, and experiment. Don't worry. You'll figure it out eventually.
I have managed to muster seven Wilburers. Keep them coming.
I got a hold of another Douglas Adams omnibus, The Ultimate Hitchhiker's Guide, which, to my delight, contains Mostly Harmless, as well as all the others that are already in The More Than Complete Hitchhiker's Guide. If this book-reading thing turns into an obsession, I might make a book reviews page. Hmmm... |
proofpile-shard-0030-88 | {
"provenance": "003.jsonl.gz:89"
} | # Equivalence of Definitions of Artinian Module
## Theorem
The following definitions of the concept of Artinian Module are equivalent:
### Definition 1
$M$ is an Artinian module if and only if:
$M$ satisfies the descending chain condition.
### Definition 2
$M$ is an Artinian module if and only if:
$M$ satisfies the minimal condition.
## Proof
### Definition 1 iff Definition 2
Let $D$ be the set of all submodules of $M$.
We shall show that:
descending chain condition
minimal condition
with respect to $\struct {D, \supseteq}$ are equivalent.
This is nothing but:
ascending chain condition
maximal condition
with respect to $\struct {D, \subseteq}$ are equivalent.
The latter follows from Increasing Sequence in Ordered Set Terminates iff Maximal Element.
$\blacksquare$ |
proofpile-shard-0030-89 | {
"provenance": "003.jsonl.gz:90"
} | ## USCAR Argues for Continued US Funding of Hydrogen Fuel Cell Vehicle Research
##### 30 July 2009
Projected hydrogen fuel cell system costs.
The United States Council for Automotive Research (USCAR) recently published a whitepaper on the importance of continued research of hydrogen as a low-carbon transportation solution, in the context of the proposed cutting of hydrogen fuel cell vehicle research in the Department of Energy FY2010 budget. (Earlier post.) The whitepaper is available for download on the USCAR website.
A separate interim report by the National Research Council (NRC) assessing the strategy and structure of the Department of Energy’s FreedomCAR and Fuel Partnership, also published in July, concluded that although the Obama Administration’s focus on nearer-term vehicle technologies to reduce petroleum fuel consumption and greenhouse gas emissions is on the right track, there remains a need for continued investment in longer-term, higher-risk, higher-payoff vehicle technologies that could be “highly transformational ” with regard to those twin concerns. In addition to advanced batteries, such technologies include systems for hydrogen storage and hydrogen fuel cells, the review panel said. (Earlier post.)
USCAR was founded in 1992 with the goal of strengthening the technology base of the US auto industry through cooperative research and development. USCAR is governed by the three-member USCAR Council, whose membership includes the R&D vice presidents from GM, Ford and Chrysler.
Use of electricity as an environmentally-friendly transportation ‘fuel’ is dependent on progress in on board energy storage (batteries and ultracapacitors) and improved electrical generation and distribution infrastructure. Even with complete success in meeting the USABC long-term goals for battery energy capacity, electric vehicles cannot compete with hydrogen-fueled vehicles for general usage in terms of range and ‘refill’ time. Use of hydrogen as a transportation fuel as on-board storage for useful range and refill time is already available (if not optimal), for use in both highly-efficient, dedicated internal combustion engines or Fuel Cells Vehicles (FCVs).
Because profitable high-volume deployment of FCVs depends on significant progress in multiple technologies both on and off the vehicle, the USCAR OEMs have made deployment of hybrid, plug-in hybrid and various forms of electric vehicles a near term focus. Most of the core technologies (battery, electric-drive systems, system controls) of these ‘electrified’ products will flow directly to fuel cell vehicles. Similarly, the DOE support for ‘grid-connected’ vehicles will indirectly support the ultimate commercialization of FCVs.
Regardless of their individual strategies, the USCAR members are firm in their belief that hydrogen-FCVs will be an important powertrain option in our future of sustainable transportation. Given the long-term nature of this investment and the many uncertainties surrounding the rebuilding of our national energy infrastructure, it is not prudent to pick de facto winning technologies by ending all support for research and development of FCVs.
—Hydrogen Research for Transportation: The USCAR Perspective
The whitepaper recaps some of the recent developments and successes in four areas fundamental to hydrogen fuel cell vehicles: fuel cells; hydrogen storage; hydrogen source pathways; and infrastructure.
While progress in fuel cells has tracked DOE and industry research projections for efficiency, cost reduction and durability improvement, there are still gaps to levels that would make fuel cell technology competitive with advanced combustion engines, USCAR notes.
Capacities of different H2 storage systems.
On the storage front, compressed hydrogen is adequate for many near and mid-term applications, though energy density and cost are still issues, USCAR says. Considerable progress has been made in the last 5 years to improve the storage density of hydrogen in vehicles; DOE and industry research has achieved roughly a doubling of stored capacity in advanced systems over the last 7 years.
The members recognize that continued research on material based storage systems is required in order to achieve performance and cost targets for the full range of U.S fleet model mix. The OEMs support the DOE approach to maintaining a research budget balanced across multiple material groups (metal hydrides, chemical hydrides and sorbents). A sustained effort utilizing DOE’s key technical resources such as the National Labs is required to ensure these new technologies reach commercial viability.
In terms of hydrogen production, the USCAR whitepaper says that while not the ultimate solution, steam methane reforming (SMR) can serve as the first of many future hydrogen production pathways.
Other pathways for producing hydrogen will exploit increasing availability of clean electricity, renewable feedstocks and carbon sequestration to drive down the carbon footprint of road transportation even as mass deployment of fuel cell vehicles begins. Just as for core fuel cell technologies, the research foundation for large-scale availability of clean hydrogen must be laid today, and DOE plays a central role in driving that research.
Given the longer-term nature of fuel cell vehicle commercializations, the OEMs do not consider a large, immediate investment in fueling infrastructure a high priority at this time, according to USCAR. However, some analyses suggest that the investment required to keep hydrogen availability well ahead of vehicle deployment so as to foster rapid adoption is modest.
A network of just 12,000 hydrogen stations would put hydrogen within two miles of 70% of the U.S. population (those living in the 100 largest metropolitan areas) and connect the major US metro areas with a hydrogen refueling station every 25 miles.
Continued government support of development over the next few years is very important to maintain stability of critical capabilities, maintain momentum and assure constant evolution of transportation fuel-cell technologies...developments relevant to stationary applications alone are far less likely to be applicable to vehicles.
Since it takes decades to “turn over” the light duty vehicle fleet, critical technologies must approach the point of commercialization in the next ten to fifteen years if they are to play a role in meeting our 2050 greenhouse gas reduction goals. DOE’s removal of support for transportation fuel-cell programs will dramatically diminish the US development of one of the truly zero-emission alternatives. Therefore, DOE should be encouraged to balance technology development priorities to include fuel cell vehicle technologies to assure that the current pace of development continues.
Resources
So, according to the chart, I should be able to go and buy a fuel cell that can pump out 100 kW for about $7,000. Anyone know where I can get one of those for that price and how many kWh it holds?

Well, if you're willing to buy in bulk it should be easy. That figure is the mass-production cost for 500k a year.. and note in 2010 they expect it to be 4500 bucks. As for kWh capacity.. they have 8 and 10 kg storage systems for SUVs, so that's around 128-160 kWh right now, and if the fuel cells reach 75% as expected and the storage reaches its goals as expected they should be able to more than double that to near 400 kWh max in an SUV-sized truck.

How can I put this charitably.... These guys are idiots, at least with respect to hydrogen fuel. I find it amusing that they cite Steam Methane Reforming (SMR) as an interim source of hydrogen. Hmm, how about instead just using the methane (or natural gas, same thing) as the fuel itself? That would solve the storage problems AND the infrastructure problems. Most of the improved efficiency is best accomplished with PHEV technology. And methane can be synthesized from renewable sources (solar, wind) by electrolysis and the Sabatier reactor, so we're covered on that front as well. Maybe these people have just been staring at this problem a little too long. We just need a practical way to displace oil. PHEVs and methane (from NG, or biomass, or synthetic) are something we can do today with today's technology and will likely ALWAYS be cheaper than any hydrogen solution for vehicles.

"Highly transformational" my sphincter! Even if by some miracle fool-cells were as economical as an ICE, there's still the HUGE infrastructure cost. And the fact that fool-cells are no greener than the energy expended to make H2. Plus fool-cells do not eliminate the need for batteries: a fool-cell car is a PHEV. A fool-cell is a constant-current device, not constant-voltage like a battery. The response time for a fool-cell to a changing current requirement is measured in minutes. It is battery research that's going to be "highly transformational".

@Jim, You're partially right. The Pickens plan using wind energy to reserve NG for transportation is a very important step right now. However, eventually, H2 derived from wind and solar energy will be important for countries without NG nor sufficient waste biomass, such as Japan and the Middle East. Look at how Putin was puttin' the squeeze on Ukraine and Western Europe by cutting off the NG flow!

@dursun, Have you driven a Honda FCX Clarity, lately? "Just one look is all it took" to fall in love with the concept! For just $600/month you can lease it...if Honda thinks that you'd qualify...But after reading your insult after insult on H2, I'd bet they are gonna shake their heads.
I tend toward Jim's view about FC and NG.
Some FCs can operate with NG. That would avoid the cost of a national H infrastructure. NG is widely available. And we continue to discover large amounts domestically.
ICEs can use NG too, and companies will make the adaptation if there is a market.
The downside of beginning with NG is CO2. If CO2 is your top concern then the NGFC doesn't look as good. But neither does getting H from the SMR process.
So there will either be a cost for CO2 capture, or no CO2 capture, or the H will come from water (and be much more expensive).
You can reform CH4 on the car to H2 for PEM fuel cells. Daimler Chrysler reformed methanol to H2 on the NECAR series and it worked fine.
Methane (CH4) is easier to transport by pipeline and to store. H2 at 10,000 psi scares the heck out of me, no matter how many reassuring words are spoken.
I can understand their position on H2 cars. It has been their grant money and careers for many years.
The recent cutbacks did not stop research; they just require doing the same thing with less money. That amounts to efficiency after many years of plenty. If they have done their up-front work for 8 years, it should be easy from here on.
The thing about H2 vs NG is that with converting to H2, the COMPANY making the H2 has to deal with the CO2, NOT YOU. This makes it tons better for the car maker and for you.
Also, after we hit required limits of about 70 g/km we are dealing with predominantly electric-drive cars. It's FAR better to run NG to H2 to power than it is to run NG to a power plant and so on, or to run NG through an ICE engine and genset.
Also, in places where NG isn't cheap but, say, wet ethanol is, or coal or biomass or used spandex tutus... you can swap to other LOCAL feedstocks.
Besides, in the end we will be dealing with a choice between a 40 hp carbon-fiber minicar powered by NG with thousands in emissions controls and a 10-speed transmission and so on, or a 2-grand 150 hp fuel cell stack and whatever car you want.
For automobiles, diesel engines can be as efficient as fuel cells. The exhaust can be cleaned up without much loss of energy.
The production of diesel or DME from CO2 and nuclear energy eliminates the CO2 release from the equation.
Lead batteries are good enough for plug-in hybrid electric cars right now, if even a small range extender is built in. They were good enough for a TZERO ten years ago.
If capital and fuel-making costs are considered, combined-cycle cogeneration for plug-in-hybrid charging will always be superior to any hydrogen fuel cell.
The energy-saving uses of the INNAS NOAX free-piston engine have been ignored too long, but how are you going to sell a car to a guy if it only has a single cylinder?
In car combined cycle operation is also possible and very efficient.
..HG..
The issue is not whether NG is better than H2; it is a matter of priorities. If we want to a) become less dependent on oil and b) cut greenhouse gases, the fastest approach is to implement current, economically viable technologies -- NG. Long term, H2 will work, but we have to decide where to put our money. Today, I would buy an NG vehicle. Ten years from now, maybe not.
Why put up with the complication of reforming CH4 onboard the car when a simple adsorptive H2 tank at a not-too-high pressure, with 70 g/liter capacity, is all that will be needed? Why use CH4 at all, when H2 can be made from solar and wind energy and waste biomass in one easy step?
Henry Gibson posted: "For automobiles, diesel engines can be as efficient as fuel cells. The exhaust can be cleaned up without much loss of energy."
I don't see how the 42% efficiency of a small diesel can compare with the 60-70% efficiency of a PEM FC. Diesel exhaust emission clean-up is expensive, yet cannot even compare with a gasoline engine, let alone with the ZEV standard of FCVs.
H2 is the simplest and most efficient way to make fuel out of renewable energy, and FC is the most efficient way to use that fuel. It's Clean, Pure and Simple!
Hydrogen fuel cell technology has too many limitations, inefficiencies, inapplicabilities and impracticalities. It's a technology that requires an inordinate amount of supervisory influence; that is, it will require professional supervision and maintenance to store and distribute the hydrogen. This, I suspect, is why GM led the way in its fuel cell R&D: planned obsolescence.
Ask why GM is dropping its Saturn hybrid models and, with pompous fanfare, promoting the dubiously named "range-extender" Chevy Volt. Does GM disapprove of consumers hearing the term 'plug-in'? Why is the Volt nearly double the cost, and why is it a sports car? Since the automotive NiMH battery has been perfected to reliably last 100,000-125,000 miles, why is it not made available for plug-in hybrid production models? Again, the answer to these questions is 'planned obsolescence'.
GM does NOT wish to make plug-in hybrids because they will last too long and thus need replacement less frequently. GM doesn't care about the lives that will be saved because they're safer cars, and frankly, neither does our Democratic-party-led Congress.
"It's a technology that requires an inordinate amount of supervisory influence, that is, it will require professional supervision, maintenance and to store and distribute the hydrogen."
And the other technologies don't????
"GM doesn't care about the lives that will be saved because they're safer cars and frankly, neither does our democratic party led Congress."
And a republican led Congress would????
With respect to Carbon emissions and NG vs. H2:
It would be MUCH cheaper to simply pay for carbon capture and sequestration from a stationary site than to make the fuel on a vehicle carbon-free. With respect to near-term H2 use, the only additive cost would be the capture, as the H2 supply would (supposedly) also have to deal with sequestration.
In the case of biomass or synthetically derived methane, the CO2 output would be carbon-neutral - no net emissions.
Roger P: Even countries without NG would be better off using methane instead of H2, even if derived from electrolysis. Anyone can pull CO2 out of the air (Lackner, Keith) if they want to. There seems to be plenty of it around.....
@Roger Pham
have you taken your Meds lately?
I would have no problem with "high energy, rechargeable battery technology" that is not specific to hydrogen fuel cells. But to specify only one battery chemistry is neither good science nor good engineering. Just look at the comments as there is no consensus.
We just got out of eight years of politically directed 'engineering' that declared fuel-cells to be the winning technology. It killed the hybrids and existing battery technology electric vehicles and gave the hybrid electric market to Toyota and Honda. Only Ford had the good sense to carry on.
Well, we had an election and threw out the 'politically correct' and directed nonsense. It is time to return to science and engineering and let the 'Lysenko' support for hydrogen fool-cells compete in the natural world.
Bob Wilson
Um, Bob... we lost that race in the '90s, well before Bush.
Well well. Look how our democratic party-led congress so quickly ponied up $2 billion to subsidize new car sales. I wonder how many hybrids were purchased on the first installment of$1 billion? Roughly 250,000 cars sold, say 1%, maybe 2,500, a liberal estimate.
Plug-in hybrid vehicles are simpler to maintain than hydrogen fuel cell vehicles and support infrastructure, ai vin. There are too many nerds giving the automobile-related industry all the cover it needs to prevent progress. Lemmings today are humans driving their new cars over a cliff, one after another.
The sales distribution for July should be available by August 5 if past practices are followed.
As for the hydrogen fool-cell programs, they were funded by defunding PNGV. There is not a whole lot of difference between the GM Precept and what someday will be the Chevy Volt. Although not a diesel hybrid, the Ford Prodigy is pretty much the R&D for their Escape and now the Ford Fusion.
USCAR killed PNGV; only Ford, Toyota and Honda didn't get the memo. There just aren't any serious hybrids but for these three manufacturers, and two others, GM and Chrysler, declared bankruptcy. Hybrid skepticism by GM and Chrysler is just a symptom of the same thinking, namely that a fool-cell program would stave off the inevitable. They've already lost credibility.
We do need a domestic, high-capacity battery capability and better battery technologies. But hydrogen fuel cells need to be considered as just one of many options.
Bob Wilson
@Sirkulat
You don't have to tell me that plug-in hybrid vehicles are simpler to maintain than hydrogen fuel cell vehicles and their support infrastructure. My comment was directed at your singling out the Democratic party when it was a Republican-led White House that favored the fool-cell program over such simpler solutions in the first place.
http://www.wired.com/science/discoveries/news/2002/01/49834
@Bob Wilson,
USCAR does not have to kill PNGV. Both types of vehicles will be needed to get us off oil dependency. HEV's are for right now, and FCV's are for a little bit into the future, but, the sooner the better.
The money spent on this vital research is just peanuts in comparison to the trillion dollars spent on Iraq (Operation Iraqi Liberation, or OIL, for short!).
More money will need to be spent on enhancing America's technological edge and trade competitiveness instead of wasting our precious federal budget on pork-barrel projects. Patriotism over local self-interest!
Actually, USCAR was mostly about the car, not fuel cells.
And on H2, most of the money spent was on pipelines and storage and on making H2 cheaper, which to be blunt we need to do anyway, as it costs a bloody hell of a lot of energy to make the H2 we make right now, and we are making more every year.
What little was spent on actual fuel cells was mostly partial DoD projects and forklifts and APUs and buses, NOT cars. And that is needed to keep the US companies ahead of the game so we don't lose yet another important industry.
Or would you rather Bush hadn't pushed that work, and we were looking at a bloody bad end to everything we make that depends on H2 as peak NG comes along?
Actually, no, I don't have a problem with Bush pushing fuel cell research. What I have a problem with is that he used it as a smokescreen to kill the PNGV. Sure it had its faults, but PNGV was a seed that was bearing fruit. Junior chopped down that tree and planted one that wouldn't produce anything for 20 years. [Really, what did you expect from a White House full of 'oil' people?]
The PNGV met the goal that was set: produce a prototype for a 5-seat family car that gets 3X the current MPG [3 X ~27 mpg ≈ 80 mpg]. That the Big Three didn't think they could SELL a car like that wasn't the fault of the program; they could have expanded the goal of '3X the current MPG' to vehicles they could sell.
On another note: the PNGV goal was met through aerodynamics, lightweighting and downsizing of the power plant - techniques that will have to be used to even greater effect on any fuel cell car the USCAR program does end up producing. Why? Well, because hydrogen has a lower energy density *per unit volume* than gasoline or diesel, meaning a car using H2 can't carry a lot and will have to make the most of what it can to get the range we want. In other words, the car will have to be smaller, lighter and sleeker. And if the Big Three couldn't sell a car like that with a hybrid drivetrain, what makes us think they could sell a car like that with a more costly fuel cell power source?
Why would they need to, when USCAR is already doing all that? Again you assume USCAR is all about the fuel cell and thus is all 20 years out, yet most of it is about tech used today or near-term in today's cars.
As for the H2 car... they already have 500-plus-mile-range FCEVs. Many people see old articles on 100-mile-range FCEVs and assume that's today's test cars.
What's really holding up H2 fuel cell cars is that they want them to be BETTER than gasoline cars before they go forward. Creating a usable car is one thing; creating a better car is a whole new ballgame.
You are talking about the Toyota FCHV-adv, right? It gets that by using a 10,000 PSI tank carrying 156 litres of H2.
The comments to this entry are closed. |
proofpile-shard-0030-90 | {
"provenance": "003.jsonl.gz:91"
} | # The partial derivative
1. Feb 22, 2015
### Calpalned
1. The problem statement, all variables and given/known data
Find the partial derivatives of $w = xe^{\frac{y}{z}}$.
2. Relevant equations
N/A
3. The attempt at a solution
$\frac{∂f}{∂x} = e^y/z$
$\frac{∂f}{∂y} = \frac{xe^y/z}{z}$
$\frac{∂f}{∂z} = (-yz^-2)(xe^yz^-1)$
Are these correct? Thanks everyone.
Last edited by a moderator: Feb 22, 2015
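For anyone who wants a machine check of the three answers above, here is a minimal SymPy sketch (the positivity assumptions on the symbols are only there to keep the output tidy):

```python
# Verify the partial derivatives of w = x*exp(y/z) symbolically.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
w = x * sp.exp(y / z)

print(sp.diff(w, x))                # exp(y/z)
print(sp.diff(w, y))                # x*exp(y/z)/z
print(sp.simplify(sp.diff(w, z)))   # -x*y*exp(y/z)/z**2
```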
2. Feb 22, 2015
### HallsofIvy
Staff Emeritus
Yes, they are - though poorly written. In LaTeX, to get more than one character in an exponent (or denominator or numerator, etc.), put them in "curly brackets" - { }.
"e^{y/z}" gives $e^{y/z}$.
Even better would be "e^{\frac{y}{z}}" which gives $e^{\frac{y}{z}}$
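Applied to the three answers above, the fully braced versions come out as
$$\frac{\partial w}{\partial x} = e^{\frac{y}{z}}, \qquad \frac{\partial w}{\partial y} = \frac{x}{z}\,e^{\frac{y}{z}}, \qquad \frac{\partial w}{\partial z} = -\frac{xy}{z^{2}}\,e^{\frac{y}{z}}.$$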
3. Feb 23, 2015
thanks |
proofpile-shard-0030-91 | {
"provenance": "003.jsonl.gz:92"
} | # What are the advanced electric and magnetic fields for an arbitrarily moving charge?
The retarded fields for a moving charge are:
$$\mathbf{E}(\mathbf{r}, t) = \frac{1}{4 \pi \varepsilon_0} \left(\frac{q(\mathbf{n} - \boldsymbol{\beta})}{\gamma^2 (1 - \mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|^2} + \frac{q \mathbf{n} \times \big((\mathbf{n} - \boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}}\big)}{c(1 - \mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|} \right)_{t_r}$$
and
$$\mathbf{B}(\mathbf{r}, t) = \frac{\mu_0}{4 \pi} \left(\frac{q c(\boldsymbol{\beta} \times \mathbf{n})}{\gamma^2 (1-\mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|^2} + \frac{q \mathbf{n} \times \Big(\mathbf{n} \times \big((\mathbf{n} - \boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}}\big) \Big)}{(1 - \mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|} \right)_{t_r} = \frac{\mathbf{n}(t_r)}{c} \times \mathbf{E}(\mathbf{r}, t)$$
What are the corresponding advanced fields?
• I've got to imagine that there is some information on this on the internet, or in standard E&M textbooks. Did you try checking any resources? Or did you make any attempt at a calculation? Mar 29 '18 at 22:11
• @DavidZ the advanced solution is mentioned, but generally dismissed as non-causal and not pursued further so only the retarded potentials are calculated. I've had a go myself, but I doubt it's correct: the directions of the fields opposite to the retarded case and quantities evaluated at advanced rather than retarded time. Mar 29 '18 at 22:17
• It would still be helpful to mention that you've checked a few sources, and which ones. And if you can include an overview of the calculation you tried (without making the question excessively long), even better. That helps avoid duplication of effort, and it might point to a conceptual misunderstanding or something that would really help future readers. Mar 29 '18 at 22:29
The 4-potential of a moving charge is given by$^1$
$$A^{\alpha}(x) = \frac {eV^{\alpha}(\tau)}{V\cdot[x - r(\tau)]}|_{\tau = \tau_0}$$
where $\tau_0$ is defined by the light-cone condition $[x - r(\tau_0)]^2 = 0$, together with the choice of root: $x_0 > r_0(\tau_0)$ for the retarded solution, or $x_0 < r_0(\tau_0)$ for the advanced one.
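In three-vector notation these two roots are the familiar retarded and advanced times of the source point $\mathbf{r}_s$,
$$t_r = t - \frac{|\mathbf{r} - \mathbf{r}_s(t_r)|}{c}, \qquad t_a = t + \frac{|\mathbf{r} - \mathbf{r}_s(t_a)|}{c}.$$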
Therefore the expressions for the retarded and advanced electric and magnetic fields only differ in either the retarded or advanced point being selected on the world-line of the moving charge, which is just replacing $t_r$ with $t_a$:
$$\mathbf{E}(\mathbf{r}, t) = \frac{1}{4 \pi \varepsilon_0} \left(\frac{q(\mathbf{n} - \boldsymbol{\beta})}{\gamma^2 (1 - \mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|^2} + \frac{q \mathbf{n} \times \big((\mathbf{n} - \boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}}\big)}{c(1 - \mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|} \right)_{t_a}$$
$$\mathbf{B}(\mathbf{r}, t) = \frac{\mu_0}{4 \pi} \left(\frac{q c(\boldsymbol{\beta} \times \mathbf{n})}{\gamma^2 (1-\mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|^2} + \frac{q \mathbf{n} \times \Big(\mathbf{n} \times \big((\mathbf{n} - \boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}}\big) \Big)}{(1 - \mathbf{n} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|} \right)_{t_a} = \frac{\mathbf{n}(t_a)}{c} \times \mathbf{E}(\mathbf{r}, t)$$
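Numerically, the only difference from evaluating the retarded fields is which root of the light-cone condition you solve for before plugging $\boldsymbol{\beta}$, $\dot{\boldsymbol{\beta}}$, $\mathbf{n}$ and $|\mathbf{r}-\mathbf{r}_s|$ into the bracketed expressions. A minimal sketch (the circular trajectory, the bracketing window and the helper names are illustrative assumptions, not anything prescribed by Jackson):

```python
# Sketch: solve the light-cone condition for the retarded and advanced times
# of a prescribed source trajectory r_s(t'), working in units where c = 1.
import numpy as np
from scipy.optimize import brentq

c = 1.0

def r_s(tp):
    """Illustrative source trajectory: a small circle in the x-y plane (speed 0.3c)."""
    return np.array([0.3 * np.cos(tp), 0.3 * np.sin(tp), 0.0])

def retarded_time(r, t, window=100.0):
    # root of c*(t - t') = |r - r_s(t')| with t' < t
    f = lambda tp: c * (t - tp) - np.linalg.norm(r - r_s(tp))
    return brentq(f, t - window, t)

def advanced_time(r, t, window=100.0):
    # root of c*(t' - t) = |r - r_s(t')| with t' > t
    g = lambda tp: c * (tp - t) - np.linalg.norm(r - r_s(tp))
    return brentq(g, t, t + window)

r_obs, t_obs = np.array([2.0, 0.0, 0.0]), 0.0
print(retarded_time(r_obs, t_obs), advanced_time(r_obs, t_obs))
# beta, beta-dot, n and |r - r_s| are then evaluated at t_r or t_a respectively.
```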
1 Classical Electrodynamics, Jackson, page 662 |
proofpile-shard-0030-92 | {
"provenance": "003.jsonl.gz:93"
} | # WordSmith Tools Manual
## Transcript
WordSmith Tools Manual, Version 6.0
© 2015 Mike Scott
Lexical Analysis Software Ltd., Stroud, Gloucestershire, UK

WordSmith Tools Manual version 6.0, by Mike Scott, 2015

© 2015 Mike Scott. All rights reserved. But most parts of this work may be reproduced in any form or by any means - graphic, electronic, or mechanical, including photocopying, recording, taping, or information storage and retrieval systems - usually without the written permission of the publisher. See http://www.lexically.net/publications/copyright_permission_for_screenshots.htm
Products that are referred to in this document may be either trademarks and/or registered trademarks of the respective owners. The publisher and the author make no claim to these trademarks.
While every precaution has been taken in the preparation of this document, the publisher and the author assume no responsibility for errors or omissions, or for damages resulting from the use of information contained in this document or from the use of programs and source code that may accompany it. In no event shall the publisher and the author be liable for any loss of profit or any other commercial damage caused or alleged to have been caused directly or indirectly by this document.
Produced: December 2015
Publisher: Lexical Analysis Software Ltd.
Special thanks to: All the people who contributed to this document by testing WordSmith Tools in its various incarnations. Especially those who reported problems and sent me suggestions.
Table of Contents

Foreword
Part I: WordSmith Tools
Part II: Overview (Requirements; What's new in version 6; Controller; Concord; KeyWords; WordList; Utilities: Character Profiler, CharGrams, Choose Languages, Corpus Corruption Detector, File Utilities, Splitter, File Viewer, Minimal Pairs, Text Converter, Version Checker, Viewer and Aligner, Webgetter, WSConcGram)
Part III: Getting Started (getting started with Concord; getting started with KeyWords; getting started with WordList)
Part IV: Installation and Updating (installing WordSmith Tools; what your licence allows; site licence defaults; version checking)
Part V: Controller
Part VI: Tags and Markup
Part VII: Concord
Part VIII: KeyWords
Part IX: WordList
Part X: Utility Programs
Part XI: Reference
Part XII: Troubleshooting
Part XIII: Error Messages
Index
Section I: WordSmith Tools
1 WordSmith Tools
WordSmith Tools is an integrated suite of programs for looking at how words behave in texts. You will be able to use the tools to find out how words are used in your own texts, or those of others. The WordList tool lets you see a list of all the words or word-clusters in a text, set out in alphabetical or frequency order. The concordancer, Concord, gives you a chance to see any word or phrase in context -- so that you can see what sort of company it keeps. With KeyWords you can find the key words in a text. The tools have been used by Oxford University Press for their own lexicographic work in preparing dictionaries, by language teachers and students, and by researchers investigating language patterns in lots of different languages in many countries world-wide.
Getting Help
Online step-by-step screenshots showing what WordSmith does. Most of the menus and dialogue boxes have help options. You can often get help just by pressing F1, or by choosing Help (at the right hand side of most menus). Within a help file (like this one) you may find it easiest to click the Search button and examine the index offered, or else just browse through the help screens.
See also: getting started straight away with WordList, Concord, or KeyWords.
Version: 6.0. © 2015 Mike Scott, December 2015.
Section II: Overview
2 Overview
2.1 Requirements
WordSmith Tools requires:
1. a reasonably up-to-date computer
2. running Windows XP or later
3. your own collection of text in plain text format or converted to plain text
2.2 What's new in version 6
WordSmith is organic software! Version 5.0 was started in June 2007, three years after version 4.0, and has continued this organic policy of growth ever since... now in 2015 we are at version 6.0 with improvements and new features.
New features:
· Move files to sub-folders
· Skins
· Word Clouds
· Date handling & Time-lines
· .docx files
· Scripting
· Colour categories
· Phrase frames
· Collocate following
· Chargrams
2.3 Controller
This program controls the Tools. It is the one which shows and alters current defaults, handles the choosing of text files, and calls up the different Tools. It will appear at the top left corner of your screen. You can minimise it, if you feel the screen is getting cluttered.
For a step-by-step view with screenshots, click here to visit the WordSmith website.
2.4 Concord
Concord is a program which makes a concordance using plain text or web text files. To use it you will specify a search word, which Concord will seek in all the text files you have chosen. It will then present a concordance display, and give you access to information about collocates of the search word.
2.7.1 Character Profiler
A tool to help find out which characters or chargrams are most frequent in a text or a set of texts. The purpose could be to check out which characters or character sequences are most frequent (e.g. in normal English text the letter E will be most frequent, followed by T, and THE and ARE will be high-frequency 3-chargrams), or it could be to check whether your text collection contains any oddities, such as accented characters or curly apostrophes you weren't expecting.
See also: Character Profiling
2.7.2 CharGrams
A tool to help find out which chargrams (sequences of characters) are most frequent in a text or a set of texts. The purpose could be to check out which chargrams are most frequent e.g. in word-initial position, in the middle of a word, or at the end.
See also: Chargrams Tool
2.7.3 Choose Languages
A tool for selecting Languages which you want to process. You will probably only need to do this once, when you first use WordSmith Tools.
See also: Choose Language Tool
2.7.4 Corpus Corruption Detector
A tool to go through your corpus and seek out any text files which may have become corrupted. Works in any language.
See also: detecting corpus corruption
2.7.5 File Utilities
Utilities to
· compare two files
· cut large files into chunks
· find duplicate files
· rename multiple files
· split large files into their component texts
· join up a lot of small text files into merged text files
· find holes in your text files
2.7.5.1 Splitter
Splitter is a component of File Utilities which splits large files into small ones for text analysis purposes. You can specify a symbol to represent the end of a text (e.g. ) and Splitter will go through a large file copying the text; each time it finds the symbol it will start a new text file.
See also: Splitter Help Contents Page
2.7.6 File Viewer
A tool for viewing how your text files are formatted in great detail, character by character.
See also: File Viewer Index
2.7.7 Minimal Pairs
a program to find typos and minimally-differing pairs of words.
2.7.9 Version Checker
The various components of WordSmith are listed in the top window and the current version is compared with your present situation. If they are different, all the files in the relevant zip file will be starred (*) in the left margin. By default you will download to wherever WordSmith is already (the program in a program folder and settings etc. in a Documents folder) but you're free to choose somewhere else. Press Download if you wish to get the updated files.
After the download, the various .zip files are checked (bottom right window) if downloaded successfully, and the Install button is now available for use. Install unzips all those which are checked.
2.7.10 Viewer and Aligner
Viewer & Aligner is a utility which enables you to examine your files in various formats. It is called on by other Tools whenever you wish to see the source text. Viewer & Aligner can also be used simply to produce a copy of a text file with numbered sentences or paragraphs, or for aligning two or more versions of a text, showing alternate paragraphs or sentences of each.
See also: Viewer & Aligner Help Contents Page
2.7.11 Webgetter
A tool to gather text from the Internet.
The point of it...
The idea is to build up your own corpus of texts, by downloading web pages with the help of a search engine.
See also: A fuller overview, Settings, Display, Limitations
2.7.12 WSConcGram
a tool for generating concgrams.
See also: Aims of WSConcGram, Running WSConcGram
Section III: Getting Started
3 Getting Started
3.1 getting started with Concord
For a step-by-step view with screenshots, visit the WordSmith website.
In the main WordSmith Tools window (the one with WordSmith Tools Controller in its title bar), choose the Tools option, and once that's opened up, you'll see the Concord button. Click and the Concord tool will start up. Choose File | New.
You should now see a dialogue box which lets you choose your texts or change your choice, and make a new concordance, looking somewhat like this: (If you only see the window with Concord in its caption, choose File | New and the Getting Started window will open up.)
If you have never used WordSmith before you will find a text has been selected for you automatically to help you get started.
You will need to specify a Search-Word or phrase and then press OK.
While Concord is working, you may see a progress indicator like this. Here, we have 552 entries so far, and the last one in shows the context for worse, our search-word.
If you want to alter other settings, press Advanced, but you can probably leave the default settings as they are.
Concord now searches through your text(s) looking for the search word or Tag.
Don't forget to save the results (press Ctrl+F2) if you want to keep the concordance for another time.
See also: Concord Help Contents.
3.2 getting started with KeyWords
For a step-by-step view with screenshots, visit the WordSmith website.
In the main WordSmith Tools window (the one with WordSmith Tools Controller in its title bar), choose the Tools option, and once that's opened up, you'll see KeyWords. Click and KeyWords will open up. Choose File | New.
You see a dialogue box which lets you choose your word-lists.
You'll need to choose two word lists to make a key words list from: one based on a single text (or single corpus), and another one based on a corpus of texts, enough to make up a good reference corpus for comparison. You will see two lists of the word list files in your current word-list folder. If there aren't any there, go back to the WordList tool and make some word lists. Choose one small word list above, and a reference corpus list below to compare it with. With your texts selected, you're ready to do a key words analysis. Click on make a keyword list now.
You'll find that KeyWords starts processing your file and a progress window in the main Controller shows a bar indicating how it's getting on. After KeyWords has finished, it will show you a list of the key words. The ones at the top are more "key" than those further down.
Don't forget to save the results (press Ctrl+F2) if you want to keep the keyword list for another time.
See also: KeyWords Help Contents, What's it for?
3.3 getting started with WordList
For a step-by-step view with screenshots, visit the WordSmith website.
I suggest you start by trying the WordList program. In the main WordSmith Tools window (the one with WordSmith Tools Controller in its title bar), choose the Tools option, and once that's opened up, you'll see WordList. Click and WordList will open up. Choose File | New.
You will see a dialogue box which lets you choose your texts or change your choice, and make a new word list.
If you have never used WordSmith before you will find a text has been selected for you automatically to help you get started.
There are other settings which can be altered via the menu, but usually you can just go straight ahead and make a new word list, individually or as a Batch.
You'll find that WordList starts processing your file(s) and a progress window in the main Controller shows a bar indicating how it's getting on. After WordList has finished making the list, you will see some windows showing the words from your text file in alphabetical order and in frequency order, statistics, filenames, notes.
Don't forget to save the results (press Ctrl+F2) if you want to keep the word list for another time.
See also: WordList Help Contents.
Section IV: Installation and Updating
See also: version information, version updating.
Section V: Controller
5 Controller
The main WordSmith Controller is a window which holds all the numerous settings and behind the scenes tells each Tool what to do. You can start up only one Controller -- though you can start up numerous Concord windows and WordList windows etc. It is best to leave the Controller in one default position on your screen -- there is no advantage in maximizing its size.
43 28 Controller 5.1 characters and letters 5.1.1 accents and other characters This window shows a set of the characters available using Unicode. and below, the official name of the character selected. Selecting a character puts it into the clipboard ready to paste. 420 See also: Copying a character into Concord wildcards 5.1.2 Many WordSmith functions allow you a choice of wildcards: symbol meaning examples tele* * disregard the end of the word, *ness disregard a whole word *happi* December, 2015. Page 28
44 29 WordSmith Tools Manual book * hotel Engl??? ? any single character (including ?50.00 punctuation) will match here Engl^^^ ^ a single letter $# # any sequence of numbers, 0 to 9 £#.00 (To represent a genuine #,^,? or * , put each one in double quotes, eg. "?" "#" "^" "*" .) add notes 5.2 As WordSmith generates data, it will state the current relevant settings in the Notes tab and these 211 with your data. In this sample case the original work was done in 2008. In 2009, are saved mutual information was computed on that data, with certain specific settings. You may add to these notes, of course. For example, if you have done a concordance and sorted it 168 carefully using your own user-defined categories , you will probably want to list these and save the information for later use. If you need access to these notes outside WordSmith Tools, select the text using Shift and the 422 cursor arrows or the mouse, then copy it to the clipboard using Ctrl+C and paste into a word processor such as notepad. 5.3 adjust settings 4 . You will see tabs accessing them at There are a number of Settings windows in the Controller the left in he main Controller window. December, 2015. Page 29 45 30 Controller 113 Choose and save settings concerning: 78 · font 60 colours · 431 folders · 131 · tags 80 · general settings 92 match-lists · 120 · stop lists 270 · lemma lists 124 text and language settings · 222 · Concord Settings 254 KeyWords settings · 311 WordList settings · 31 advanced user specific settings · 276 · index file settings December, 2015. Page 30 46 31 WordSmith Tools Manual 5.4 advanced settings These are reached by clicking the Advanced Settings button visible in the Main settings page: and open up a whole new set of options 131 Tags & Markup 306 Lists 310 Index 106 Scripts December, 2015. Page 31 47 32 Controller Help, logging Help system access On a network, it is commonly the case that Microsoft protects users to such an extent that the usual .CHM help files show only their table of contents but no details. Here you can set the at the WordSmith URL. WordSmith help to access the local CHM file or the online Help Logging Logging is useful if you are getting strange results and wish to see details of how they were obtained. If this is enabled, WordSmith will save some idea of how your results are progressing in Advanced Settings | Help | Logging the log-file, which you see in the section in the Controller. Here you can optionally switch on or off logging and choose an appropriate file-name. If you switch it on at any time you will get a chance to clear the previous log-file. This log shows WordSmith working with the Aligner, at the stage where various languages are being loaded up. And here in a Concord process we see some details of the text files being read and processed, December, 2015. Page 32 48 33 WordSmith Tools Manual horrible seeking the search-word : The most straightforward way to use logging is 1. Find logging in the Help tab of Advanced settings. 2. Click the Activated box. You'll be asked whether you want any previous log cleared. 3. Carry on using WordSmith as desired, changing settings or using Concord or any other tool. From time to time or after WordSmith finishes, press the Refresh button visible above and read the output. It is a text file so it can be opened using any word processing software. If you have had trouble, looking at the last few lines may help by showing where processing stopped. 
If you want to log as WordSmith starts up, start in from the command line with the parameter / : log Start | Run | Cmd < > > | cd\wsmith6 < Enter > | wordsmith6 /log < Enter Enter (or wordsmith6 /log C:\temp\WSLog.txt to force use of C:\temp\WSLog.txt . If you do that, make sure the folder exists first.) December, 2015. Page 33 49 34 Controller 417 . See also: emailed error reports Text Dates Text dates can be set to varying levels of delicacy, depending on the range of text file dates chosen. 126 See also: using text dates Advanced section (menus, clipboard, deadkeys etc.) Customising menus 441 which are used in You can re-assign new shortcuts (such as Alt+F3, Ctrl+O) to the menu items the various Tools. And all grids of data have a "popup menu" which appears when you click the right button of your mouse. To customise this, in the main WordSmith Controller program, choose Main Settings | Advanced | Menus . December, 2015. Page 34 50 35 WordSmith Tools Manual You will see a list of menu options at the left, and can add to (or remove from) the list on the right by selecting one on the left and pressing the buttons in the middle, or by dragging it to the right. To re-order the choices, press the up or down arrow. In the screenshot I've added "Concordance" as I usually want to generate concordances from word-lists and key word lists. 80 Whatever is in your popup menu will also appear in the Toolbar . Below, you see a list of Shortcuts, with Ctrl+M selected. To change a shortcut, drag it up to the Customised menu list or the popup menus and toolbars list. The Restore defaults button puts all shortkeys back to factory settings. To save the choices 113 permanently, see Saving Defaults . Other December, 2015. Page 35 51 36 Controller Here you may press a button to restore all factory defaults, useful if your settings are giving trouble. prompt to save (in general) : reminds you to save every time new data results are computed or re- organised. : (default=false) prompt after WordList or prompt to save concordances computed from other Tools KeyWords or WSConcGram gets a concordance computed. require precomposed characters : some languages have a lot of cases where two characters get merged in the display into one, e.g. . WordSmith will automatically check e with appearing as è for such pairs when processing languages such as Amharic, Arabic, Bengali, Farsi, Gujarati, Hindi, Kannada, Khmer, Lao, Malayalam, Nepali, Oriya, Thai, Tibetan, Telegu, Tamil, Yoruba. If you want to force WordSmith to carry out a test for such pairs when processing all languages, however, check this box. Clipboard Here you may choose defaults for copying. December, 2015. Page 36 52 37 WordSmith Tools Manual 422 The number of characters only applies when copying as editable text. See also: clipboard User .dll If you have a DLL which you want to use to intercept WordSmith's results, you can choose it here. The one this user is choosing, WordSmithCustomDLL.dll , is supplied with your installation and can be used when you wish. If "Filter in Concord" is checked, this .dll will append all concordance lines found in plain text to a file called Concord_user_dll_concordance_lines.txt in your \wsmith6 folder, if there is space on the hard disk. Language Input Deadk eys are used to help type accented characters with some keyboards. The language input tab lets you alter the deadkeys to suit your keyboard and if necessary force WordSmith to use the keyboard layout of your choice whenever WordSmith starts up. December, 2015. 
Page 37 53 38 Controller Here the user's Windows has four keyboard layouts installed. To type in Maori, you might choose to select Maori, and change a couple of deadkeys. At present, as the list shows, pressing then . A gives À , but users of Maori usually prefer that combination to give A To change these settings, 1. select the line 2. edit the box below: (you can drag the character you need from the 28 character window ) then press Change. When you've changed all the characters you want, press Save. If you want WordSmith to force the keyboard to Maori too every time it starts (this will probably be necessary if it is not a New Zealand computer) then check the always use selected k eyboard box. December, 2015. Page 38 54 39 WordSmith Tools Manual Text Conversion If your text files happen to contain UTF-8 text files, WordSmith will notice and may offer to convert them on the spot using the options below. 441 See also : menu and button options . 5.5 batch processing The point of it... Batch processing is used when you want to make separate lists, but you don't want the trouble of doing it one by one, manually selecting each text file, making the word list or concordance, saving it, and so on. If you have selected more than one text file you can ask WordList, Concord and KeyWords to process as a batch. December, 2015. Page 39 55 40 Controller Folder where they end up 425 The name suggested is today's date . Edit it if you like. Whatever you choose will get created when the batch process starts. The results will be stored in folders stemming from the folder name. That is, if you start making word lists in c:\wsmith\wordlist\05_07_19_12_00, they will end up like this: c:\wsmith\wordlist\05_07_19_12_00\0\fred1.lst c:\wsmith\wordlist\05_07_19_12_00\0\jim2.lst .. c:\wsmith\wordlist\05_07_19_12_00\0\mary512.lst then c:\wsmith\wordlist\05_07_19_12_00\1\joanna513.lst etc. Filenames will be the source text filename with the standard extension (.lst, .cnc, .kws) . Zip them .zip file. You can extract them using If checked, the results are physically stored in a standard your standard zipping tool such as Winzip, or you can let WordSmith do it for you. The files within are exactly the same as the uncompressed versions but save disk space -- and the disk system will also be less unhappy than if there are many hundreds of files in the same folder. If you zip them, you will get c:\wsmith\wordlist\05_07_19_12_00\batch.zip and all the sub-files will get deleted unless you check "keep both .zip and results". One file / One file per folder? The first alternative (default) makes one .zip file with all your individual word-lists in it. Each word-list or concordance or keywords list is for one source text. But what if your text files are structured like this: \..\BNC \..\BNC\written December, 2015. Page 40 56 41 WordSmith Tools Manual \..\BNC\written\humanities \..\BNC\written\medicine \..\BNC\written\science \..\BNC\spoken etc. One file per folder, individual zipfiles The makes a separate .zip of each separate folderful of textfiles (eg. one for humanities, another for medicine, etc.), with one list for each source text. The One file per folder, amalgamated zipfiles makes a separate .zip of each folderful, but makes one word-list or concordance from that whole folderful of texts. Batch Processing and Excel These options may also offer a chance for data to be copied automatically to an Excel file. 
Faster (Minimal) Processing This checkbox is only enabled if you are about to start a process where more than one kind of result can be computed simultaneously. For example, if you are computing a concordance, by default 207 191 179 will be computed when each concordance is , patterns collocates and dispersion plots 247 251 , link calculations etc. which will done. In KeyWords, likewise, there will be dispersion plots be computed as the KWs are calculated. If checked, only the minimal computation will be done (KWs in KeyWords processing, concordance in ). This will be faster, and you can always get the plots computed later as long as the Concord 430 source texts don't get moved or deleted. : you're making word lists and have chosen 1,200 text files which are from a magazine Example called "The Elephant". You specify C:\WSMITH\WORDLIST\ELEPHANT as your folder name. , you will be asked for permission C:\WSMITH\WORDLIST\ELEPHANT If you already have a folder called December, 2015. Page 41 57 42 Controller to erase it and all sub-folders of it! After you press OK, trunk.LST, tail.LST .. digestive system.LST . They 1,200 new word-lists are created, called are all in numbered sub-folders of a folder called . C:\WSMITH\WORDLIST\ELEPHANT C:\WSMITH\WORDLIST If you did not check "zip them into 1 .zip file", you will find them under \ELEPHANT\0 . If you did check "zip them into 1 .zip file", there is now a C:\WSMITH\WORDLIST\ELEPHANT.ZIP file which contains all your results. (The 1,200 .LST files created will have been erased but the .ZIP file contains all your lists.) .zip file is that it takes up much less disk space and is easy to email to others. The advantage of a WordSmith can access the results from within a .zip file, letting you choose which word list, concordance etc. you want to see. Getting at the results in WordSmith Ch oose File | Open as usual, then change the file-type to "Batch file *.zip". When you choose a .zip file, you will see a window listing its contents. Double-click on any one to open it. Note: of course Concord will only succeed in opening a concordance and KeyWords a key word list file. If you choose a .zip file which contains something else, it will give an error message. 106 See also: batch scripts choosing texts 5.6 This chapter explains how to select texts, save a selection and even attach a date going back as far as 4000BC to each text file. 42 You need text in a suitable format . 5.6.1 text formats 419 444 In WordSmith you need plain text files as Plain , such as you get if you save a Word .doc .txt ). The text format should be ASCII or ANSI or Unicode (UTF16). Text ( files will look crossed out and should not be used: convert them to .txt or .doc Any Word .docx 444 first . 50 display but Files available can be used; they will be coloured red in the Text files within .zip files WordSmith can read them and get the texts you select within them. Why not .PDF files? Don't choose .pdfs either, they have a very special format. Essentially a PDF is a set of December, 2015. Page 42 58 43 WordSmith Tools Manual instructions telling a printer or browser where to place coloured dots. The plain text is usually hard to extract even if you use Adobe Acrobat (and Adobe invented the format). Why not .DOC files? is rather unsuitable even if it does contain the text: this is what a A .DOC containing only the .DOC word hello looks like in Word, then opened up in Notepad, then the .PDF of the same. December, 2015. 
Page 43 59 44 Controller Check the format is OK In the file-choose window you can test the format of the texts you've chosen with the Test File 46 Format ( ) button. 5.6.2 the file-choose window How to get here 4 This function is accessed from the File menu in the Controller and the Settings menu or New menu item ( ) in the various Tools. December, 2015. Page 44 60 45 WordSmith Tools Manual The two main areas at left and right are · files to choose from (at left) · files already selected (at right) The button which the red arrow points at is what you press to move any you have selected at the left to your "files selected" at the right. Or just drag them from the left to the right. The list on the right shows full file details (name, date, size, number of words (above shown with ?? as WordSmith doesn't yet know, though it will after you have concordanced or made a word list) and whether the text is in Unicode (? for the same reason). To the right of Unicode is a column stating 137 . whether each text file meets your requirements If you have never used WordSmith before (more precisely if you have not yet saved any concordances, word lists etc.) you will find that a chapter from Charles Dickens' Tale of 2 Cities has been selected for you. To stop this happening, make sure that you do save at least one word list or 97 . concordance! See also -- previous lists This puts the current file selection into store. All files of the type you've specified in any sub-folders will also get selected if the "Sub-folders too" checkbox is checked. You can check on which ones have been selected under All Current Settings. December, 2015. Page 45 61 46 Controller Clear As its name suggests, this allows you to change your mind and start afresh. If any selected filenames are highlighted, only these will be cleared. More details File Types The default file specification is *.* (i.e. any file) but this can be altered in the box or set 113 permanently in wordsmith6.ini . Tool In the screenshot above you can see -- we are choosing texts for Concord. There are alternatives available (WordList, KeyWords etc.). Select All Selects all the files in the current folder. Drives and Folders Double-click on a folder to enter it. You can re-visit a folder if its name is in the folder window history list, and easily go back with the standard Windows "back" button . Or click on the button to choose a new drive or folder. Sub-Folders If checked, when you select a whole driveful or a whole folderful of texts at the left, you will select it plus any files in any sub-folders of that drive or folder. Sorting By clicking on the column headers ( Folder, Filename, Size, Type, Words, Unicode, Date etc.) you can re-sort the listing. Test text format This button checks the format of any files selected. In the screenshot above, no tests have been done so the display shows ? for each file. If the text file is in Unicode, the display shows U , if UB , if plain ASCII or Ansi text it will show A , if it's a Word .doc file, Unicode big-endian it'll show D . If it is in UTF-8, 8 . If you get inconsistency you'll be invited to convert them all to Unicode. 50 Favourites 50 Two buttons on the right ( and ) allow you to save or get a previous file selection , saving December, 2015. Page 46 62 47 WordSmith Tools Manual you the trouble of making and remembering a complex set of choices. Type of text files 444 419 In WordSmith you need plain text files , such as you get if you save a Word .doc as .txt ). 
Any Word .doc files or .pdf s will look crossed out and should not be Plain Text ( 444 . used: convert them to .txt first is greyed out because it has the hidden attribute) ( 10words.TXT Setting text file dates You can edit the textual date to be attached to any text file within any date range from 4000BC upwards. (On first reading from disk the date will be set to the date that text file was last edited.) How to do it December, 2015. Page 47 63 48 Controller Press the button circled in this screen-shot: A window opens up letting you set text file dates and times. Here below you will see Shakespeare plays with their dates being edited. Delicacy offers a choice of various time ranges (centuries, years, etc.) which will help ignore excessive detail. If years are chosen as above, month, day and hour of editing are no longer relevant and default to 1st July at 12:00. If you choose a suitable text file and press the Auto-date button, each of your chosen text files will be updated if its file-name and a suitable date are found in the list. The format of the list is filenamedate (formatted YYYY or YYYY/MM/DD for year, month and day) Examples: A0X 1991 B03 1992/04/17 Here we see BNC text files sorted by date. The ones at the top had no date, then the first (a spoken sermon) dated as 1901, which is when the header says the dated was KNA.XML December, 2015. Page 48 64 49 WordSmith Tools Manual tape-recording was made(!). Your %USER_FOLDER% folder includes an auto-date file for the BNC ( BNC dates ) and another for the Shakespeare corpus ( Shakespeare plays dated plain ). 352 There is also a utility in the File Utilities which can parse text files to generate dates using your own syntax, preparing a text file like this to read in. 50 You can save the dates and files as favourites so as to re-use this information as often as you like. 126 See also: using text dates Advanced Opens a toolbar showing some further buttons: The buttons at the top left let you see the files available as icons, as a list, or with full details (the default) instead. Random This re-orders the files (on both sides) in random order. View in Notepad Lets you see the text contents in the standard Windows simple word-processor for text files, Notepad. Get from Internet 12 so as to download text from the Internet. Allows you to access WebGetter Check Checks whether the files selected are available to read (e.g. after loading up Favourites). December, 2015. Page 49 65 50 Controller Save List Lets you save any already stored text files as a plain text list (e.g for adding date information). Zip files If checked, when loading up a whole folder of text files, WordSmith will automatically include ones from .zip files. 453 Whether checked or not, if you double-click on a zip file you can enter that as if it were a folder and see the contents. Zip files will be coloured red. In this screen, the historical plays of Shakespeare within a zip file ( plays.zip ) have been selected. 430 116 , Finding source texts . See also : Step-by-step online example , Viewing source texts 5.6.3 favourite texts save favourites Used to save your current selection of texts. Useful if it's complex, e.g. involving several different folders. Essential if you've attached a date to your text files. Saves a list of text files whose status is either unknown or known to meet your requirements when 137 , ignoring any which do not. selecting files by their contents get favourites Used to read a previously-saved selection from disk. 
By default the file name will be the name of the tool you're choosing texts for plus recent_chosen_text_files.dat , in your main WordSmith folder. December, 2015. Page 50 66 51 WordSmith Tools Manual ) a set of choices you have edited using Notepad, but You may use a plain text file for loading ( note that each file needed must be fully specified: wildcards are not used and a full drive:\folder path is needed. You may date the text file if you like by appending to the file-name a character followed by the date (any date after 1000BC) in the format yyyy/mm/dd e.g. -399/07/01 c:\text\socrates.txt c:\text\hamlet.txt 1600/07/01 c:\text\second world war.txt 1943/05/22 48 44 See also: Choosing Texts , file dates choosing files from standard dialogue box 5.7 44 ; it also allows you to The dialogue box here is very similar to the one used for choosing text files 453 choose from a zip file . 379 to examine a file: this makes no sense in the case of a word list, You can use Viewer & Aligner key word list, or concordance, but may be useful if you need to examine a related text file, e.g. a readme.txt in the same zip file as your concordance or word lists. To choose more than one file, hold the Control key down as you click with your mouse, to select as files as you want. Or hold down the Shift key to select a whole range of them. many separate 5.8 class or session instructions When WordSmith is run in a training session, you may want to make certain instructions available to your trainees. teacher.rtf in your main To do this, all you need to do is ensure there is a file called folder where the WordSmith programs are or in the "instructions folder" explained under \wsmith6 23 . If one is found, it will be shown automatically when WordSmith starts up. site licence defaults To stop it being shown, just rename it! You edit the file using any Rich Text Format word processor, such as MS Word™, saving as an file. .rtf 23 See also: Site licence defaults 5.9 colour categories The point of it ... With a concordance or word list on your screen it can be hard for example to know how many of the thousands of entries met certain criteria. For example which ones derived from only a few texts? mytext.txt ? How many of the concordance lines came both from Which ones ended in -NESS and from the first 40 words in the sentence, and which ones are they? 315 your existing data by your own criteria. (Since last millennium The idea is to let you re-sort 168 column WordSmith lists have been sortable by standard criteria, and there has long been a Set for your own classification, but this feature makes it possible to have multiple and complex sorts.) December, 2015. Page 51 67 52 Controller How to do it The menu option Compute | Colour categories will be found if the data have a Set column. The menu option brings up a window where you specify your search criteria. Here is an example: December, 2015. Page 52 68 53 WordSmith Tools Manual Complete the form by choosing a data column (above the user chose the File column) and a below which will mean 'search the file column seeking any condition (here X ends with Y and .txt where the File ends in .txt'). Then choose a colour (here colour 67 was chosen) and then press Add a search . Finally, press Find . As you can just see, the Set column in the concordance has some items coloured. A more complex example: December, 2015. Page 53 69 54 Controller where the user wants to process the Word column of data, looking for a condition where the word starts with UN occurs at least 5 times. 
For any word which meets this condition, the Set and column will show the colour selected. When you have specified the criteria, press the Find button. December, 2015. Page 54 70 55 WordSmith Tools Manual The top of the Colour Categories window shows the percentage results. In the example below, the user has decided to omit their first search and to carry out another on the same word-list which found 188 words ending in NESS which were present in more than 40 BNC texts. But not This option lets you have a negative condition. December, 2015. Page 55 71 56 Controller Where are they in the list? To locate the items which colour categorising has found, simply sort the Set column. (If it's a Freq. list you may have to go to the Alphabetical tab first.) The categorised items float to the top. Here, BE the 6 words between with frequency above 5 are coloured green at the top of the word and BF list, with the 13 NESS items with frequency less than or equal to 5 coloured blue. December, 2015. Page 56 72 57 WordSmith Tools Manual Once sorted, the data can be saved as before. What if I already have a Set classification? Here is a concordance where the exclamation O or Ah had already been identified and marked in the Set column. December, 2015. Page 57 73 58 Controller As the Set column is already in use, classifying further by colour will take second priority to the existing forms typed in. So in this case: where 58 cases were found where the exclamations came in the first 49% of the text, we see that line 10 goes green (11 did not go green because the criterion was less than 50 and it had exactly 50%) but clicking the Set column gives priority to the exclamation typed in. December, 2015. Page 58 74 59 WordSmith Tools Manual s follow the uncoloured ones. In this case the O s follow the Ah s, and the coloured Ah Removing the colours? Use the Clear colours button. What if more than one condition is met? If you colour words ending NESS blue, and also colour words starting UN yellow, any word meeting both conditions will get a mixture of the two colours as shown here: December, 2015. Page 59 75 60 Controller 171 168 , colour categories for concordances See also : setting categories by typing colours 5.10 4 . Enables you to Found in main Settings menu in all Tools and Main Settings in the Controller choose your default colours for all the Tools. Available colours can be set for plain text this is the default colour as above when selected highlighted text 198 mark-up tags 159 concordance search word; words in (key) word lists search word 315 main sort word indicates first sort preference; used for % data in (key) word lists second sort word indicates first tie-breaker sort colour context word context word any line of deleted data deleted words any line which has not been user-sorted not numbered line concordance search word when selected search word highlighted main sort word first sort when selected highlighted first tie-breaker sort when selected second sort word highlighted context word context word when selected highlighted most frequent p value most frequent collocate or detailed consistency word, collocate 235 in keywords 11 viewing texts in the text viewer lemma colour colour of lemmas shown in lemma window word-cloud shape see word clouds section below word-cloud window word-cloud word December, 2015. Page 60 76 61 WordSmith Tools Manual Overall colour scheme This allows a range of colour scheme choices, which will affect the colours of all WordSmith windows. 
List colours To alter colours, first click on the wording you wish to change (you'll see a difference in the left margin: here search word has been chosen), then click on a colour in the colour box. The radio buttons determine whether you're changing foreground or Foreground and Back ground background colours. You can press the Reset button if you want to revert to standard defaults. The same colours, or equivalent shades of grey, will appear in printouts, or you can set the printer 80 to black and white, in which case any column not using "plain text" colour will appear in italics (or bold or underlined if you have already set the column to italics). The Reset button lets you restore colours to factory defaults. Ruler This opens another dialogue window, in which you can set colours and plot divisions for the ruler: December, 2015. Page 61 77 62 Controller Word Clouds These settings allow to to choose how each word will be displayed, e.g. within rectangles or circles. The colours of the words and the word cloud window are set in the List colours section above. 87 See also: Column Layout for changing the individual colours of each column of data, Colour 51 128 . Categories , Word Clouds 5.11 column totals The point of it... This function allows you to see a total and basic statistics on each column of data, if the data are numerical. How to do it With a word-list, concordance or key-words list visible, choose the menu item View | Column Totals to switch column totals on or off. December, 2015. Page 62 78 63 WordSmith Tools Manual Here we see column totals on a detailed consistency list based on Shakespeare's plays. The list itself is sorted by the Texts column: the top items are found in all 35 of the plays used for the list. In the case of Anthony and Cleopatra, A represents 1.28% of the words in that column, that is 1.28% this is the highest percentage in of the words of the play Anthony and Cleopatra. In the case of ACT its row (this word is used more in percentage terms in that play than in the others). 102 See also: save as Excel 5.12 compute new column of data The point of it... This function brings up a calculator, where you can choose functions to calculate values which interest you. For example, a word list routinely provides the frequency of each type, and that frequency as a percentage of the overall text tokens. You might want to insert a further column showing the frequency as a percentage of the number of word types, or a column showing the frequency as a percentage of the number of text files from which the word list was created. December, 2015. Page 63 79 64 Controller This word-list has a column which computes the cumulative scores (running total of the % column). How to do it and create your own formula. You'll see standard calculator Just press Compute | New Column buttons with the numbers 0 to 9, decimal point, brackets, 4 basic functions. To the right there's a list of standard mathematical functions to use (pi, square root etc.): to access these, double-click on them. Below that you will see access to your own data in the current list, listing any number-based column-headings. You can drag or double-click them too. December, 2015. Page 64 80 65 WordSmith Tools Manual Absolute and Relative Your own data can be accessed in two ways. A relative access (the default) means that as in a spreadsheet you want the new column to access data from another column but in the same row. Absolute access means accessing a fixed column and row. 
Examples you type Result -- for each row in your data, the new column will contain: Rel(2) ÷ 5 the data from column 2 of the same row, divided by 5 the data from column 2 of the same row, added to a running RelC(2) total the data from column 2 of the same row, divided by 5, added to Rel(3) + (Rel(2) ÷ 5) the data from column 3 of the same row Abs(2;1) ÷ 5 the data from column 2 of row 1, divided by 5. (This example is just to illustrate; it would be silly as it would give the exact same result in every row.) Rel(2) ÷ Abs(2;1) × 100 the data from column 2 of the same row divided by column 2 of row 1 and multiplied by 100. This would give column 3 as a percentage of the top result in column 2. For the first row it'd give 100%, but as the frequencies declined so would their December, 2015. Page 65 81 66 Controller percentage of the most frequent item. 87 this way: see layout You can format (or even delete) any variables computed in . 62 66 51 See also: count data frequencies , colour categories , column totals 5.13 copy your results The quickest and easiest method of copying your data e.g. into your word processor is to select 422 (click to see with the cursor arrows and then press Ctrl+C. This puts it into the clipboard examples showing how to copy into Word etc.). you get various choices: If you choose File | Save As 102 saving as a text file or XML or spreadsheet 101 as data (not the same as saving as text: this is saving so you can access your data again save another day) 97 101 422 , clipboard , printing See also: saving count data frequencies 5.14 In various Tools you may wish to further analyse your data. For example with a concordance you may want to know how many of the lines contain a prefix like un- or how many items in a word-list end in - ly . To do this, choose Summary Statistics in the Compute menu. Load This allows you to load into the searches window any plain text file which you have prepared previously. For complex searching this can save much typing. An example might be a list of suffixes or prefixes to check against a word list. Search Column This lets you choose which column of data to count in. It will default to the last column clicked for your data. Breakdown by If activated this lets you break down results, for example by text file. See the example from Concord 215 . Cumulative Column December, 2015. Page 66 82 67 WordSmith Tools Manual 304 This adds up values from another column of data. See the example from WordList . 215 See also: distinguishing consequence from consequences , frequencies of suffixes in a word list 63 304 , compute new column of data . custom processing 5.15 416 , is not for those without a tame programmer to help -- is found This feature -- which, like API . under Main Settings | Advanced The point of it... I cannot know which criteria you have in processing your texts, other than the criteria already set up (the choice of texts, of search-word, etc.) You might need to do some specialised checks or formats. For example, you might need to WordSmith alteration of data before it enters the lemmatise a word according to the special requirements of your language. This function makes that possible. If for example you have chosen to filter concordances, as Concord processes your text files, every time it finds a match for your search-word, it will call your .dll file. It'll tell your own .dll what it has found, and give it a chance to alter the result or tell Concord to ignore this one. How to do it... 
.dll Choose your file (it can have any filename you've chosen for it) and check one or more of the options in the Advanced page. You will need to call standard functions and need to know their names and formats. It is up to you to write your own .dll program which can do the job you want. This can be written in any programming language (C++, Java, Pascal, etc.). An example for lemmatising a word in WordList The following DLL is supplied with your installation, compiled & ready to run. Your .dll needs to contain a function with the following specifications function WordlistChangeWord( original : pointer; language_identifier : DWORD; is_Unicode : WordBool) : pointer; stdcall; The language_identifier is a number corresponding to the language you're working with. See List of . Locale ID (LCID) Values as Assigned by Microsoft So the "original" (sent by WordSmith) can be a PCHAR (7 or 8-bit) or a PWIDECHAR (16-bit Unicode) and the result which your .dll supplies can point to a) nil (if you simply do not want the original word in your list) b) the same PCHAR/PWIDECHAR if it is not to be changed at all December, 2015. Page 67 83 68 Controller c) a replacement form Here's an example where the source text was Today is Easter Day. Source code The source code for the .dll in Delphi is this library WS5WordSmithCustomDLL; uses Windows, SysUtils; { This example uses a very straightforward Windows routine for comparing strings, CompareStringA and CompareStringW which are in a Windows .dll. The function does a case-insensitive comparison because NORM_IGNORECASE (=1) is used. If it was replaced by 0, the comparison would be case-sensitive. In this example, EASTER gets changed to CHRISTMAS. } function WordlistChangeWord( original : pointer; language_identifier : DWORD; is_Unicode : WordBool) : pointer; stdcall; begin Result := original; if is_Unicode then begin if CompareStringW( language_identifier, NORM_IGNORECASE, PWideChar(original), -1, December, 2015. 
Page 68 84 69 WordSmith Tools Manual PWideChar(widestring('EASTER')), -1) - 2 = 0 then Result := pwidechar(widestring('CHRISTMAS')); end else begin if CompareStringA( language_identifier, NORM_IGNORECASE, PAnsiChar(original), -1, PAnsiChar('EASTER'), -1) - 2 = 0 then Result := pAnsichar('CHRISTMAS'); end; end; function ConcordChangeWord( original : pointer; language_identifier : DWORD; is_Unicode : WordBool) : pointer; stdcall; begin Result := WordlistChangeWord(original,language_identifier,is_unicode); end; function KeyWordsChangeWord( original : pointer; language_identifier : DWORD; is_Unicode : WordBool) : pointer; stdcall; begin Result := WordlistChangeWord(original,language_identifier,is_unicode); end; { This routine exports each concordance line together with the filename it was found in a number stating how many bytes into the source text file the entry was found its hit position in that text file counted in characters (not bytes) and the length of the hit-word (so if the search was on HAPP* and the hit was HAPPINESS this would be 9) This information is saved in Unicode appended to your results_filename } function HandleConcordanceLine (source_line : pointer; hit_pos_in_characters, hit_length : integer; byte_position_in_file, language_id : DWORD; is_Unicode : WordBool; source_text_filename, results_filename : pwidechar) : pointer; stdcall; function extrasA : ansistring; begin Result := #9+ ansistring(widestring(pwidechar(source_text_filename)))+ #9+ ansistring(IntToStr(byte_position_in_file))+ #9+ ansistring(IntToStr(hit_pos_in_characters))+ #9+ ansistring(IntToStr(hit_length)); end; function extrasW : widestring; begin December, 2015. Page 69 85 70 Controller Result := #9+ widestring(pwidechar(source_text_filename))+ #9+ IntToStr(byte_position_in_file)+ #9+ IntToStr(hit_pos_in_characters)+ #9+ IntToStr(hit_length); end; const bm: char = widechar($FEFF); var f : File of widechar; output_string : widestring; begin Result := source_line; if length(results_filename)>0 then try AssignFile(f,results_filename); if FileExists(results_filename) then begin Reset(f); Seek(f, FileSize(f)); end else begin Rewrite(f); Write(f, bm); end; if is_Unicode then output_string := pwidechar(source_line)+extrasW else output_string := pAnsichar(source_line)+widestring(extrasA); if length(output_string) > 0 then BlockWrite(f, output_string[1], length(output_string)); CloseFile(f); except end; end; exports ConcordChangeWord, KeyWordsChangeWord, WordlistChangeWord, HandleConcordanceLine; begin end. 416 133 See also : API , custom settings editing 5.16 5.16.1 reduce data to n entries With a very large word-list, concordance etc., you may wish to reduce it randomly (eg. for sampling). This menu option ( Edit | Deleting | Reduce to N ) allows you to specify how many entries 129 until there you want to have in the list. If you reduce the data, entries will be randomly zapped are only the number you want. The procedure is irreversible. That is, nothing gets altered on disk, but if you change your mind you will have to re-compute or else go back to an earlier saved version. December, 2015. Page 70
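Reduce to N does this for you on the open list. If instead you want to sample from results you have already saved as plain text (say, one concordance line per line of the file), the same idea takes only a few lines of script. This is just an illustration with a hypothetical filename and an assumed encoding; like the menu option the selection is random, but here the original file is left untouched.

# randomly reduce a saved plain-text list to N lines (the source file is not modified)
import random

N = 500                                            # how many entries to keep
with open("concordance_export.txt", encoding="utf-16") as f:   # assumed filename and encoding
    lines = [line.rstrip("\n") for line in f if line.strip()]

sample = random.sample(lines, min(N, len(lines)))  # random selection without replacement
with open("concordance_sample.txt", "w", encoding="utf-16") as f:
    f.write("\n".join(sample))
print("kept", len(sample), "of", len(lines), "entries")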
and if you double-click any of these you may edit it to change the column header as in this (absurd) example:

If you now save your word-list, the new column heading gets saved along with the data. Other new word-lists, though, will have the default WordSmith headings. If you want all future word-lists to have the same headings, you should press the Save button in the layout window. (If you had been silly enough to call the word column "Ulan Bator" and to have saved this for all subsequent word-lists, you could remedy the problem by deleting Documents\wsmith6\wordlist list customised.dat.)

5.16.6 editing a list of data

With a word list on screen, you might see something like this.
In the status bar at the bottom, the number in the first cell is the number of words in the current word list, and AA in the third cell is the word selected. At the moment, when the user types anything, WordList will try to find what is typed in the list. If you right-click the second cell you will see and can change the options for this list to Set (to classify your words, eg. as adjectives v. nouns) or Edit, to alter them.

Note that some of the data is calculated using other data and therefore cannot be edited. For example, frequency percentage data is based on a word's frequency and the total number of running words. You can edit the word frequency but not the word frequency percentage.

Choose Edit. Now, in the column which you want to edit, press any letter. This will show the toolbar (if it wasn't visible before) so you can alter the form of the word or its frequency. If you spell the word so that it matches another existing word in the list, the list will be altered to reflect your changes. In this case we want to correct AACUTE, which should be Á.
If you now type Á, you will immediately see the result in the window:

Clicking the downward arrow at the right of the edit combobox, you will see that the original word is there just in case you decide to retain it.

After editing you may want to re-sort, and if you have changed a word such as AAAAAGH to a pre-existing word such as AAGH, to join the two entries.

See also: joining entries, finding source files
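Joining entries inside WordSmith is done with the lemmatisation functions; but if you have already saved a word list as plain text and simply want to merge a few variant spellings before processing it elsewhere, a small script can do the same kind of merge. This is only a sketch: the tab-separated layout, the column positions, the filename and the encoding are assumptions which you would adjust to your own saved file.

# merge the frequencies of chosen variant forms in a word list saved as text
# assumptions: tab-separated lines, word in column 2, frequency in column 3
import csv

VARIANTS = {"AAAAAGH": "AAGH"}   # hypothetical variant -> headword mapping

merged = {}
with open("wordlist_export.txt", encoding="utf-16", newline="") as f:   # assumed filename/encoding
    for row in csv.reader(f, delimiter="\t"):
        if len(row) < 3 or not row[2].isdigit():
            continue                        # skip headers or malformed lines
        word, freq = row[1].upper(), int(row[2])
        word = VARIANTS.get(word, word)     # fold variants into their headword
        merged[word] = merged.get(word, 0) + freq

for word, freq in list(merged.items())[:20]:
    print(word, freq)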
5.17 find relevant files

The point of it...

Suppose you have identified muscle, fibre, protein as key words in a specific text. You might want to find out whether there are any more texts in your corpus which use these words.

How to do it

This function can be reached in any window of data which contains the menu option, e.g. a word-list or a key words list. It enables you to seek out all text files which contain mention of at least one of the words you have marked or selected. Before you click, choose the set of texts you want to peruse. (If you haven't, the function will let you use the text(s) the current key words or word-list entries are based on.)
Here we have a keywords list based on a Chinese folk tale with two items chosen by marking. The text files to examine in this case are all the Shakespeare tragedies...

What you get

A display based on all the words you marked, showing which text files they were found in and how many of each word were found. If you double-click as shown
you'll get to see the source text and can examine each of the words, in this case the four tokens of the type dream.
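If you ever need the same kind of check in a pipeline of your own, outside WordSmith, the idea is easy to reproduce: scan a folder of plain-text files and count how often each chosen word appears in each file. The sketch below is an illustration only; the folder path, encoding and word list are hypothetical, and it uses a crude whole-word regular expression rather than WordSmith's own tokenising.

# rough equivalent of "find relevant files": which texts mention which words?
import re
from pathlib import Path

WORDS = ["muscle", "fibre", "protein"]            # hypothetical marked words
CORPUS = Path(r"C:\corpus\utf16")                 # hypothetical folder of .txt files

patterns = {w: re.compile(rf"\b{re.escape(w)}\b", re.IGNORECASE) for w in WORDS}
for txt in sorted(CORPUS.glob("*.txt")):
    text = txt.read_text(encoding="utf-16", errors="ignore")   # assumed encoding
    counts = {w: len(p.findall(text)) for w, p in patterns.items()}
    if any(counts.values()):
        print(txt.name, counts)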
5.18 folder settings

These are found in the main Controller. The settings folder will by default be a sub-folder of your My Documents folder but it can be set elsewhere if preferred.

5.19 fonts

Found by choosing Settings | Font in all Tools or via Language Settings in the Controller. Enables you to choose a preferred Windows font and point size for the display windows and printing in all the WordSmith Tools suite. Note that each language can have its own different default font.
94 79 WordSmith Tools Manual If you have data visible in any Tool, the font will automatically change; if you don't want any specific windows of data to change, because you want different font sizes or different character sets in different windows, minimise these first. 87 To set a column of data to bold, italics, underline etc., use the layout option . 81 WordSmith Tools will offer fonts to suit the language chosen in the top left box. Each language may require a special set of fonts. Language choice settings once saved can be seen (and altered, with care) in Documents\wsmith6\language_choices.ini . December, 2015. Page 79
5.20 main settings

Found in Main settings in the WordSmith Tools Controller.

Startup

Restore last work will bring back the last word-list, concordance or key-words list when you start WordSmith. Show Help file will call up the Help file automatically when you start WordSmith. Associate/clear file extensions will teach Windows to use (or not to use) Concord, WordList, KeyWords etc. to open the relevant files made by WordSmith.

Check for updates
WordSmith can be set to check for updated versions weekly, monthly or not at all. You may freely update your version within the version purchased (e.g. 6.0 allows you to update any 6.x version until 7.0 is issued).

Toolbar & Status bar

Each Tool has a status bar at the bottom and a toolbar with buttons at the top. By default the toolbar is hidden to reduce screen clutter.

System

The first box gives a chance to force the boxes which appear for choosing a file to show the files in various different ways. For example "details" will show listings with column headers so with one click you can order them by date and pick the most recent one even if you cannot remember the exact filename.

The Associate/clear file extensions button will teach Windows to use (or not to use) Concord, WordList, KeyWords etc. to open the relevant files made by WordSmith.

5.21 language

The point of it ...

1. Different languages sometimes require specific fonts.
2. Languages vary considerably in their preferences regarding sorting order. Spanish, for example, uses this order: A,B,C,CH,D,E,F,G,H,I,J,K,L,LL,M,N,Ñ,O,P,Q,R,S,T,U,V,W,X,Y,Z. And accented characters are by default treated as equivalent to their unaccented counterparts in some languages (so, in French we get donne, donné, données, donner, donnez, etc.) but in other languages accented characters are not considered to be related to the unaccented form in this way (in Czech we get cesta .. cas .. hre .. chodník ..).

Sorting is handled using Microsoft routines. If you process texts in a language which Microsoft haven't got right, you should still see word-lists in a consistent order. Note that case-sensitive means that Mother will come after mother (not before apple or after zebra).

It is important to understand that a comparison of two word-lists (e.g. in KeyWords) relies on sort order to get satisfactory results -- you will get strange results if you are comparing 2 word-lists which have been declared to be in different languages.

Settings
Choose the language for the text you're analysing in the Controller under Language Settings. The language and character set must be compatible, e.g. English is compatible with Windows Western (1252), DOS Multilingual (850). WordSmith Tools handles a good range of languages, ranging from Albanian to Zulu. Chinese, Japanese, Arabic etc. are handled in Unicode. You can view word lists, concordances, etc. in different languages at the same time.

Characters within word, Hyphens separate words, Numbers, Font, Text Format

How more languages are added

Press the Edit Languages button.

See also: Choosing Accents & Symbols, Processing text in Chinese, Accented characters etc., Changing language, Text Format
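The effect of sort order is easy to see outside WordSmith too. The sketch below is only an illustration and not part of WordSmith: it uses Python's locale module to compare a plain code-point sort with a locale-aware sort. The locale name is an assumption and varies from system to system (on Windows it may look like "French_France.1252" rather than "fr_FR.UTF-8").

# how sort order depends on the language chosen: plain vs locale-aware sorting
import locale

words = ["donne", "donné", "données", "donner", "donnez", "cote", "côté", "zèbre"]

print(sorted(words))                         # plain code-point order: é, ô etc. count as coming after z
try:
    locale.setlocale(locale.LC_COLLATE, "fr_FR.UTF-8")   # assumed locale name
    print(sorted(words, key=locale.strxfrm))             # accented forms now sort next to their unaccented base
except locale.Error:
    print("locale not available on this system")

This is also why two word-lists being compared should be declared to be in the same language: a list sorted one way cannot be matched reliably against a list sorted the other way.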
98 83 WordSmith Tools Manual 5.21.1 Overview You will probably only need to do this once, when you first use WordSmith Tools. How to get here The Language Chooser is accessed from the main WordSmith Controller menu: Settings | Main Settings | Text and Languages | Other Languages . What you will see may look like this: December, 2015. Page 83
99 84 Controller 9 languages have been chosen already. At the bottom you will see what the fonts on your system are for the current language selected. 86 86 86 87 84 , , saving your choices , Sort Order , Font See also : Language , Other Languages 419 Changing the language of saved data Language Chooser 5.21.2 How to get here The Language Chooser is accessed from the main WordSmith Controller: December, 2015. Page 84
100 85 WordSmith Tools Manual What it does The list of languages on the left shows all those which are supported by the PC you're using. If any of them are greyed, that's because although they are "supported" by your version of Windows, they haven't been installed in your copy of Windows. (To install more multilingual support, you will need your original Windows cdrom or may be able to find help on the Internet.) On the right, there are the currently chosen languages for use with WordSmith. The default December, 2015. Page 85
103 88 Controller Layout or Add data? The Layout Add a column of data lets you tab gives you a chance to format the layout of your data. 63 compute a new variable . 71 by double-clicking and typing in your own preferred heading. You can edit the headings "Frequency in the text" is too long but serves to illustrate. Move Click on the arrows to move a column up or down so as to display it in an alternative order. Alignment Allows a choice of left-aligned, centred, right-aligned, and decimal aligned text in each column, as appropriate. Typeface Normal, bold, italic and/or underlined text. If none are checked, the typeface will be normal. Screen Width in your preferred units (cm. or inches). Here 3 of the headings have been activated (by clicking) so that settings can be changed so as to get them all the same width. December, 2015. Page 88
104 89 WordSmith Tools Manual Case lower case, UPPER CASE, Title Case or source: as it came originally in the text file. The default for most data is upper case. Decimals the number of decimal places for numerical data, where applicable. For example, suppose you have this list of the key words of Midsummer Night's Dream in view but want to show the numbers in the column above 0.02, corresponding to WALL, FAIRY etc., select the column(s) you want to affect, and set the decimals eg. like this December, 2015. Page 89
105 90 Controller where the top number is the decimal places (2, unchanged from the default for percentage data) and the bottom is the threshold below which the data are not shown. In this case, any date smaller than 0.0001 won't be shown (the space will be blank). As soon as you make the change, you should immediately see the result. Visibility show or hide, or show only if greater than a certain number. (If this shows ***, then this option is not applicable to the data in the currently selected column.) Colours The bottom left window shows the available colours for the foreground & background. Click on a colour to change the display for the currently selected column of information. Restore Restores settings to the state they were in before. Offers a chance to delete any custom saved layout for the current type of data (see Save). Save The point of this Save option is to set all future lists of the same type as the one you're working on to a preferred layout. Suppose you have a concordance open. If you change the layout as you like 101 and save the concordance in the usual way it will remember your settings anyway. But the next time you make a concordance, you'll get the WordSmith default layout. If you choose this Save, the next time you make a concordance, it will look like the current one. And a custom saved layout will be found in your Documents\wsmith6 folder, eg. Concordance list customised.dat. ( The only way of removing such settings would be to rename or delete that file.) Alternatively you can choose always to show or hide certain columns of data with settings. For example, in the Controller's Concord settings the What you see tab offers these options, December, 2015. Page 90
106 91 WordSmith Tools Manual which can be saved permanently with . Freeze the columns If you have a lot of detailed consistency files and wish to freeze the word column so as to see the words for every column of numbers, choose View | Freeze columns... This allows you to set the number of fixed columns for example to 2, and the display will look as it does here: where the N and Word columns are both frozen (and cannot be re-sorted) allowing you to look at the 679th and 680th text file data. Similarly a statistics list allows the text file-names column to be frozen: December, 2015. Page 91
See also: setting & saving defaults, setting colour choices in WordSmith Tools Controller.

5.23 match words in list

The point of it...

This function helps you filter your listing. You may choose to relate the entries in a concordance or list of words (word-list, collocate list, etc.) with a set of specific words which interest you. For example, to mark all those words in your list which are function words, or all those which end in -ing. Those which match are marked with a tilde (~). With the entries marked, you can then choose to delete all the marked entries (or all the unmarked ones), or sort them according to whether they're marked or not.

How to do it: WordList example

With a word-list loaded up using WordList, click in the column whose data you want to match up. This will usually be one showing words, not numbers. Then choose Compute | Matches. If you have no suitable match-list settings, you may get this:

The main Controller settings dialogue box appears.
The circled areas show some of the main choices: make sure you are choosing for the right Tool, and if matching words from a text file, browse to find it and then press Load to load its words. You must of course decide what is to be done with any matching entries.

Text File or Template

Choose now whether you want to filter by using a text file which contains all the words you're interested in (e.g. a plain text file of function words [not supplied]) or a template filter such as *ing (which checks every entry to see whether it contains a word ending in ing).

To use a match list in a file, you first prepare a file, using Notepad or any plain text word processor, which specifies all the words you wish to match up. Separate each word using commas, or else place each one on a new line. You can use capital letters or lower-case as you prefer. You can use a semi-colon for comment lines. There is no limit to the number of words.

Example

; Match list for test purposes.
THE,THIS,IS
IT
WILL
*ING

If you choose a file, the Controller will then read it and inform you as to how many words there are in it.
(There is no limit to the number of words but only the first 50 will be shown in the Controller.)

Action

The current Tool then checks every entry in the selected column in your current list to see whether it matches either the template or one of the words in your plain text file. Those which do match are marked or deleted as appropriate for the Action requested (as in the example below where five matching entries were found, the action selected was delete entries which match, and the match list included THE, IS and IT). I answered No so you could see this result:
In the screenshot below, the action was find matches & mark them, and the match-list contained archaic forms like thou, thee, thy.
The marking can be removed using a menu option or by re-running the match-list function with remove match marking as the action.

You can obtain statistics of the matches, using the Summary Statistics menu option.

See also: Comparing Word-lists, Comparing Versions, Lemma Matching, Stop Lists
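If you want to apply the same kind of match list outside WordSmith, for example to a word list you have already saved as plain text, the logic is easy to reproduce. The sketch below is only an illustration of the file format described above (commas or new lines, semi-colon comment lines, * as a wildcard); the filenames are hypothetical and the matching is deliberately simple, not a copy of WordSmith's own routine.

# apply a WordSmith-style match list (commas/new lines, ';' comments, * wildcard) to some words
import fnmatch

def load_match_list(path):
    items = []
    with open(path, encoding="utf-8") as f:          # assumed encoding
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):     # skip blanks and comment lines
                continue
            items.extend(p.strip().upper() for p in line.split(",") if p.strip())
    return items

def matches(word, patterns):
    word = word.upper()
    return any(fnmatch.fnmatch(word, p) for p in patterns)   # fnmatch treats * as a wildcard

patterns = load_match_list("my_match_list.txt")               # hypothetical file
for w in ["the", "thinking", "book", "it"]:
    print(w, "->", "match" if matches(w, patterns) else "no match")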
5.24 previous lists

These windows show the lists of results you have obtained in previous uses of WordSmith. To see any of these, simply select it and double-click -- the appropriate Tool will be called up and the data shown in it.

The popup menu for the window is accessed by a right-click on your mouse. To delete an entry, select it and then press Del. To re-sort your entries click the header or choose Resort in the popup menu.

5.25 print and print preview

Print settings are in the main Controller:
113 98 Controller Print Settings If you set printing to monochrome, your printer will use italics or bold type for any columns using 60 other than the current "plain text" colour . Otherwise it will print in colour on a colour printer, or in shades of grey if the printer can do grey shading. You can also change the units, adjust orientation (portrait or landscape ) and margins and default header and footer. When you choose a print or print preview menu item in a Tool, you'll be taken by default to a print preview, which shows you what the current page of data looks like, and from which you can print. December, 2015. Page 98
114 99 WordSmith Tools Manual Bigger and Smaller Zoom to 100% ( ) or fit to page ( ), or choose a view in the list. The display here works in exactly the same way as the printing to paper. Any slight differences between what you see and what you get are due to font differences. You can also pull the whole print preview window larger or smaller. Next ( ) & Last ( ) Page Takes you forward or back a page. ) or Landscape ( )? Portrait ( Sets printing to the page shape you want. Header, Footer, Margins You can type a header & footer to appear on each page. Press Show if you want them included. If you include this will put today's date and does the numbering. Margins are altered by clicking the numbers -- you will see the effect in the print previews space at the right. ) Print ( December, 2015. Page 99
115 100 Controller This calls up the standard Windows printer page and by default sets it to print the current page. You can choose other pages in this standard dialogue box if you want. Some columns of data not shown A case like this showing nothing but the line numbers is because you have pulled the concordance data too wide for the paper. WordSmith 87 prints only any columns of data which are going to fit. Shrink the column, hide any unwanted ones, or else set the print surface to landscape. December, 2015. Page 100
118 103 WordSmith Tools Manual The words are visible from row 18 onwards; above them we get some summary data. The 1/8, 2/8 etc. section splits the data into eighths; thus 100% of the Texts data (column E) is in the 8th section, whereas nearly all the data (98.8%) are in the smallest section in terms of word frequency, because so many words come once only. You'll be asked whether to compute this summary data if you choose to save as Excel. In the case of a concordance line, saving as text will save as many "characters in 'save as text'" as 222 you have set (adjustable in the Controller Concord Settings ). The reason for this is that you will probably want a fixed number of characters, so that when using a non proportional font the search- December, 2015. Page 103
119 104 Controller 211 words line up nicely. See also: Concord save and print . Each worksheet can only handle up to 65,000 rows and 256 columns. If necessary there will be continuation sheets. If your data contains a plot you will also get another worksheet in the Excel file, looking like this. 441 The plot data are divided into the number of segments set for the ruler (here they are eighths), and the percentage of each get put into the appropriate columns. That is, cell B3 means that 23.7% of the cep.txt data come in the first eighth of the text file. Set the format correctly as percentages in Excel, and you will see something like this: At the top you get the raw data, which you can use Excel to create a graphic with. December, 2015. Page 104
If you want access to the details of the plot, choose save as text. The results will look like this:

and you can then process those numbers in another program of your choice.

In the case of XML text ( ), you get a little .HTM file and a large .XML file. Click on the .HTM file and you can see your data a page at a time, with buttons to jump forwards or back a page, as well as to the first and last pages of data. This accesses your .XML file to read the data itself.

See also: Excel Files in batch processing
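As noted above, numbers saved as text can be processed in another program of your choice. Purely as an illustration -- the real column layout depends on what you saved, so the tab-separated assumption, the filename and the encoding are all placeholders -- a few lines of script are enough to pull the numeric columns back in and total them.

# read a "save as text" export (assumed tab-separated) and total its numeric columns
import csv
from collections import defaultdict

totals = defaultdict(float)
with open("plot_export.txt", encoding="utf-16", newline="") as f:    # assumed filename and encoding
    for row in csv.reader(f, delimiter="\t"):
        for i, cell in enumerate(row):
            try:
                totals[i] += float(cell.replace(",", "."))           # tolerate comma decimals
            except ValueError:
                pass                                                 # skip words and header cells
for col, total in sorted(totals.items()):
    print("column", col, "total", round(total, 2))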
5.28 scripting

Scripts

This option allows you to run a pre-prepared script. In the case below, sample_script.txt has requested two concordance operations, a word list, and a keywords analysis. The whole process happened without any intervention from the user, using the defaults in operation.

The syntax is as suggested in the EXAMPLES visible above. (There is a sample_script.txt file in your Documents\wsmith6 folder.) First the tool required, then the necessary parameters, each surrounded by double quotes, in any order.

concord corpus="x:\text\dickens\hard_times.txt" node="hard" output="c:\temp\hard.cnc"

made a concordance of the hard_times.txt text file looking for the search-word hard and saved results in c:\temp\hard.cnc

concord corpus="x:\text\dickens\hard_times.txt" node="c:\temp\sws.txt" output="c:\temp\outputs.txt" 1_at_a_time="true"

made a concordance of the same text file looking for each search-word in the sws.txt file,
counted the number of hits and saved results in c:\temp\outputs.txt.

wordlist corpus="x:\text\shakes\oll\txt\tragedies\*.txt" output="c:\temp\shakespeare.lst"

made a word list of all the .txt text files in a folder of Shakespeare tragedies (not including sub-folders) and saved it.

keywords refcorpus="j:\temp\BNC.lst" wordlist="c:\temp\shakespeare.lst" output="j:\temp\shakespeare.kws"

made a key words list of that word list compared with a BNC word list and saved it.

Two additional optional parameters not visible there are: TXT_format="true" and 1_at_a_time="true".

If TXT_format is true, a Concord file will contain only the concordance lines, a KeyWords file only the key words and their frequencies, and a WordList file only the words and their frequencies. If 1_at_a_time is true, a word-list will export separate results text file by text file.

If 1_at_a_time is true, Concord will read search words from a text file and save summary results:

concord corpus="x:\text\dickens\hard_times.txt" node="c:\temp\sws.txt" output="c:\temp\outputs.txt" 1_at_a_time="true"

produced this in c:\temp\outputs.txt:

x:\text\dickens\hard_times.txt
hard 50
soft 3
mean 54
empty 9
fred 0
book 13
north* 4
south* 2

collocate scripts

It is also possible to run a script requesting the collocates of each word in a word-list. This syntax

wordlist collocates of "c:\temp\shakespeare.lst" output="c:\temp\shakespeare\collocates"

tells WordSmith to compute the collocates of each word in the shakespeare.lst word-list, and save results as plain text files, one per word, in the c:\temp\shakespeare\collocates folder. The texts to be processed are the same text files used when the word list was created (and must still be present on disk to work, of course).

Settings affecting the process are shown below. The first 6 have to do with the words from the word-list, and the min. in collocate-list refers to how many collocates of each word-list word are needed (here 10) for processing to be reported. Min. total column refers to the number in the total column of a collocation display.
Results look like this:

Here they're incomplete because I pressed the Stop button. Each of these lists has the collocates output much as in a collocates display, but with the relationships also computed. The process only saves results where the settings shown above are met and where the relationships also meet the requirements as in the WSConcgram settings.
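Because the script format is plain text, a long script can be generated automatically instead of typed by hand. The sketch below is only an illustration: it writes one wordlist line per sub-folder of a corpus using the syntax documented above, and the corpus root, output folder and script filename are hypothetical names you would adjust before use.

# generate a WordSmith script with one "wordlist" line per corpus sub-folder
from pathlib import Path

CORPUS_ROOT = Path(r"x:\text\shakes\oll\txt")      # hypothetical corpus root, one sub-folder per set of texts
OUTPUT_DIR = r"c:\temp\wordlists"                  # hypothetical folder for the .lst results

lines = []
for folder in sorted(p for p in CORPUS_ROOT.iterdir() if p.is_dir()):
    corpus = str(folder) + r"\*.txt"
    output = OUTPUT_DIR + "\\" + folder.name + ".lst"
    lines.append('wordlist corpus="' + corpus + '" output="' + output + '"')

Path("my_script.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")   # assumed script filename
print("wrote", len(lines), "script lines to my_script.txt")

The resulting my_script.txt can then be chosen in the Scripts option and run without further intervention.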
124 109 WordSmith Tools Manual 427 See also : drag and drop 5.29 searching 5.29.1 search for word or part of word All lists allow you to search for a word or part of one, or a number. The search operates on the of data, though you can change the choice as in this screenshot, where current column Concordance is selected. The syntax is as in Concord. In the case of a concordance line, the search operates over the whole context so far saved or retrieved. So although is visible in the context kept wondering (highlighted to show you where) the search has found the phrase state schools tested about 80 words before the search word wondering . To search again, press OK again... Whole word – or bung in an asterisk The syntax is as in Concord, so by default a whole word search. To search for a suffix or prefix, use the asterisk. Thus *ed will find any entry ending in ed ; un* will find any entry starting with un. *book* book in it ( book, textbook, booked. ) will find any entry with 109 110 . , Search & Replace See also: Searching by Typing search by typing 5.29.2 Whenever a column of display is organised alphabetically, you can quickly find a word by typing. As WordSmith you type, WordSmith has will get nearer. If you've typed in the first five letters and December, 2015. Page 109
125 110 Controller found a match, there'll be a beep, and the edit window will close. You should be able to see the word you want by now. 428 109 110 , Searching for a word or part of one , Search & Replace , See also: Edit v. Type-in mode 72 315 Editing , WordList sorting search & replace 5.29.3 113 , allow for searching and replacing. Some lists, such as lists of filenames The point of it If your text data has been moved from one PC to another, or one drive to another, it will be necessary to edit all the filenames if WordSmith ever needs to get at the source texts, such as 234 when computing a concordance from a word list .) Search & Replace for filenames If you are replacing a filename you will see something like this. We distinguish between the path C:\texts\BNC\spoken\s conv\KC2.txt and the file's individual name, so that for a case like and the path to it is C:\texts\BNC\spoken\s conv . KC2.txt the filename is To correct the path to the file, e.g. if you've moved your BNC texts to drive Q:\my_moved_texts you might simply replace as shown here Q:\my_moved_texts C:\texts will get and all the filenames which contain e.g. c:\texts \BNC\spoken\s conv\KC2.txt will become Q:\my_moved_texts\BNC\spoken\s conv \KC2.txt. To rename a filename only, change the radio buttons in the middle of the window and the search and replace operation will ignore the path but replace within the filename only. Search & Replace for other data Viewer and Text Aligner , In this case the search & replace isn't of filenames but in the case below in December, 2015. Page 110
126 111 WordSmith Tools Manual 109 of the actual text. Like a current column of data. search operation, the search operates on the The context line shows what has been found. The line below shows what will happen if you agree to the change. : make 1 change (the highlighted one), then search for the next one Yes Sk ip : leave this one unchanged, search for the next one : change without any check Yes All : stop searching... Sk ip All Whole word – or bung in an asterisk The syntax is as in Concord, so by default a whole word search. To search for a suffix or prefix, use the asterisk. Thus *ed will find any entry ending in ; un* will find any entry starting with un. ed *book* will find any entry with book in it ( book, textbook, booked. ) 315 Word lists can be sorted by suffix: see WordList sorting . 420 109 109 , Accented Characters & Symbols See also: Searching by Typing . , Searching with F12 5.30 selecting or marking entries 5.30.1 selecting multiple entries 270 (lemmatisation). It can be necessary to select non-adjacent entries e.g. for Joining How to do it To select more than one entry in a word-list, concordance, key word list etc, hold down Control first, and click in the number column at the left edge. Ctrl To select various entries in this detailed consistency list, I held down the key and clicked at the December, 2015. Page 111
127 112 Controller numbers 8, 9, 10 and 16. 112 Alternatively, to mark entries you can choose Edit | in the menu. Mark (F5) 5.30.2 marking entries Non-adjacent entries can be marked by clicking the word and pressing Alt+F5. The first one marked will get a green mark in the margin and subsequent ones will get white marks. December, 2015. Page 112
128 113 WordSmith Tools Manual After marking these, I added by selecting two more words then pressing Alt+F5 scholarship(s) again. To undo a specific entry, press Alt+F5 again. To un-mark all entries, use Shift+Alt+F5. To lemmatise all marked entries, press F4 after marking. 5.31 filenames tab This tab shows the text file name(s) from which your current data comes. You can edit these names if necessary (e.g. if the text files have been moved or renamed.) To do so, choose Replace ( ). 101 Afterwards, if you save the results , the information will be permanently recorded. 430 See also: finding source files . 5.32 settings 5.32.1 save defaults 4 in the WordSmith Tools Controller . Settings can be altered by choosing Colour Settings Any setting menu item in any Tool gives you access to these: December, 2015. Page 113
129 114 Controller General, Folders, Colours, Languages, Tags, Lists, Concord, KeyWords, WordList, Index, Advanced, WSConcgram These tabs allow you to choose settings which affect one or more of the Tools. 60 customise the default colours colours 431 set WordSmith so it "knows" which folders you usually use folders 124 419 435 character set languages & numbers, , treatment of hyphens default file extension 447 80 , printing restore last file general 132 tags to ignore, tag file, tag file autoloading, custom tagsets tags 133 120 for Concord, KeyWords and Wordlist stop lists 92 270 to match up, or lemma files files matching to mark lemmas in a word list, etc. 180 Concord number of entries, sort system, collocation horizons 245 235 , database & associate , max. p value procedure KeyWords 447 minimum frequencies, reference corpus filename 448 303 WordList word length & frequencies, type/token # , cluster settings 276 making a word-list index Index 31 Advanced advanced settings 12 utility WSConcgram for the concgram permanent settings and wordsmith6.ini file You can save your settings with a button at the top of the Controller file, installed when you installed WordSmith Tools. This Or by editing the wordsmith6.ini specifies all the settings which you regularly use for all the suite of programs, such as your text and 78 60 431 87 results folders , screen colours , fonts , the default columns to be shown in a concordance, etc. 115 You can restore the defaults . show help file I n the general tab of Main Settings you will see a checkbox called "show help file". If checked, this will always show this help file every time WordSmith starts up. The point of this is for users who only use the software occasionally, e.g. in a site licence installation. sayings Using Notepad, you can edit Documents\wsmith6\sayings.txt , which holds sayings that 4 appear in the main Controller window, if you don't like the sayings or want to add some more. December, 2015. Page 114
130 115 WordSmith Tools Manual site licence and CD-ROM defaults f you're running WordSmith straight from a CD-ROM, your defaults cannot be saved on it as it's I read-only; Windows will find a suitable place for wordsmith6.ini , usually off the root folder of My Documents . 124 431 The first time you use WordSmith, you will be prompted to choose appropriate Folders , Text 132 Save All Settings details etc. and Characteristics, Tag for future use. You can change settings and save them as often as you like. Similarly, on a network you will usually not be allowed to change defaults permanently, as this would affect other users. Your network administrator should have installed the program so that you , where it may be both read and altered. If WordSmith wordsmith6.ini have your own copy of wordsmith6.ini in that folder it will be able to use your personal Tools finds a copy of preferences. 5.32.2 restoring settings How to find it Main settings | Advanced Settings | Settings can be restored to default settings by choosing Advanced | Restore . The point of it... You may have changed settings and cannot recall how to undo them... December, 2015. Page 115
131 116 Controller Factory Defaults Restores your wordsmith6.ini file to factory condition. Re-starts WordSmith with all relevant boxes filled in accordingly. Warning Messages Removes any of the messages you have received and where you've ticked the "never show this again" box. Customised Layouts 87 Removes any of the layouts you have saved for the various type of data WordSmith shows. Colours 60 Re-sets colours to the factory defaults. 5.33 source text(s) The point of it... The aim is to be able to see the whole text file your data came from, with some relevant words highlighted. December, 2015. Page 116
132 117 WordSmith Tools Manual How to do it The Concord and KeyWords tools both have areas which can show the source texts which your data was produced from, visible by choosing the source texts tab, if your texts are still where they were when the data analysis was done. (If they have been moved you can try editing the Filenames 110 data to correct this.) In Concord, you need to double-click the relevant concordance line to get the source text to show. In each case the relevant key or search words will be highlighted if possible. In KeyWords you'll see the source text in the source texts tab space, or if there are various source texts listed in a special window (shown below). Menu options (right-click to see these) Copy, Print, Save As their names suggest these menu items let you copy, save or print any text you've selected or the whole text. For saving you will get a chance to decide whether as plain text or as Rich Text Format (.RTF) preserving font and colour information. Next, Previous These jump you through the text one highlighted word at a time. You should see how many highlighted there are in the status bar. Grey markup, Clear markup, Restore markup Grey mark up lets you grey out all < > sections. December, 2015. Page 117
Clear mark up simply cuts the tags out. Once you have cut out markup, the clear mark up option changes to restore mark up (needed if Concord is to jump to the correct location).
Greying out mark-up is quite slow if the text is extensive. This shot shows its progress: double-clicking the status bar gives you a chance to stop the process.

KeyWords List of Source Texts
If you right-click this window, you get a chance to see which texts contain which key words
by clicking Frequencies, giving results like this: and if you double-click a highlighted word (THINK in the example above), you will be shown the source text (A01.txt) with that word highlighted. If you simply click the file-name
you get to see the text with all its key words highlighted.

5.34 stop lists

Stop lists are lists of words which you don't want to include in analysis. For example you might want to make a word list or analyse key words excluding common function words like the, of, was, is, it.
To use stop lists, you first prepare a file, using Notepad or any plain text word processor, which specifies all the words you wish to ignore. Separate each word using commas, or else place each one on a new line. You can use capital letters or lower-case as you prefer. You can use a semi-colon for comment lines. There is no limit to the number of words. Stop lists do not use wildcards (match-lists may).
There is a file called stop_wl.stp (in your \wsmith6 folder) which you could use as a basis and save under a new name. You'll also find basic_English_stoplist.stp there, based on top frequency items in the BNC. Or just make your own in Notepad and save it with .stp as the file extension. If that is difficult, rename the .txt as .stp.

Example
; My stop list for test purposes.
THE,THIS,IS
IT
WILL

Then select Stop List in the menu to specify the stop list(s) you wish to use. Separate stop lists can be used for the WordList, Concord and KeyWords programs. If the stop list is activated, it is in effect: that is, the words in it will be stopped from being included in a word list. If you wish always
to use the same stop list(s) you can specify them as defaults in wordsmith6.ini.
To choose your stop list, click the small yellow button in the screenshot, find the stop list file, then press Load. You will see how many entries were correctly found and be shown the first few of them. With a stop list thus loaded, start a new word list. The words in your stop list should now not appear in the word list.

continuous
Normally, every word is read in while making the word list and stored in the computer's memory without checking whether it's in the stop list. Eventually the set of words is checked against your stop list and omitted if it is present. That is much quicker. However, it means that for the most part, any statistics are computed on the whole text, disregarding your stop list.
If you choose continuous, the processing will slow down dramatically, since as every word is read in while making the word list, it will be checked against the stop list and ignored if found. In other
words, every single case of THE, OF and IS etc. will be looked at as the texts are read in and sought in your stop list. The effect will be to give you detailed statistics which ignore the words in the stop lists.

subtract wordlengths in statistics
If you have not chosen continuous processing as explained above, you may want the statistics of your word list to attempt to deal in part with the stop list work done. With this choice, after the word list is computed, all the statistics concerning the number of types and tokens and 3-letter, 4-letter words etc. will be adjusted for the overall column (but not for the column for each single text) in your statistics.

See Match List for a more detailed explanation, with screenshots.

Another method of making a stop list file is to use WordList on a large corpus of text, setting a high minimum frequency if you want only the high-frequency words. Then save it as a text file. Next, use the Text Converter to format it, using stoplist.cod as the Conversion file.

stop lists in Concord
In the case of Concord a stop list can do two jobs: first, it will cut the stop list words out as collocates. Additionally it can cut out any stop list words as search-word hits: for example if you concordance beaut* and beautiful is in your stop list, any concordance lines containing beautiful will get cut out (those containing beauty will remain). For this to be activated, make sure you check the search-word box in the settings.

Stop lists are accessed via an Advanced Settings button in the Controller.
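What a stop list does can be pictured with a short Python sketch. This is only an illustration of the idea, not WordSmith's own code; it follows the file format described above (words separated by commas or new lines, a semi-colon starting a comment line, case not distinguished) and re-uses the example stop list given earlier. The frequency figures are invented purely for the demonstration.

# Minimal sketch (not WordSmith's implementation) of filtering a frequency
# list with a stop list in the .stp format described above.
def parse_stop_list(lines):
    stops = set()
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";"):
            continue  # blank or comment line
        stops.update(w.strip().upper() for w in line.split(",") if w.strip())
    return stops

example_stp = """; My stop list for test purposes.
THE,THIS,IS
IT
WILL"""

stops = parse_stop_list(example_stp.splitlines())
frequencies = {"THE": 5021, "IT": 1804, "BOOK": 87, "HOTEL": 41}
print({w: n for w, n in frequencies.items() if w.upper() not in stops})
# -> {'BOOK': 87, 'HOTEL': 41}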
See also: Making a Tag File, Lemmatisation, Match List.

5.35 suspend processing

As WordSmith works its way through text files, or re-sorting data, you will see a progress window in the Controller with horizontal bars showing progress. If appropriate there'll be a Suspend button, too. Pressing this offers 4 choices:

carry on
... as if you had not interrupted anything

stop after this file
Finishing the file means that you can keep track of what has been done and what there wasn't time
for. (How? By examining the filenames in the word list, concordance or whatever you have just been creating.)

stop as soon as possible
... useful if you're ploughing through massive CD-ROM files. WordSmith will stop processing the current file in the middle, but will retain any data it has got so far.

panic stop
The whole Tool (Concord or WordList, or whatever) will close down and some system resources and memory may be wasted. The Controller will not be closed down.

Press Suspend again to effect your choice.

5.36 text and languages

These settings affect how WordSmith will handle your texts. At the top, you see boxes allowing you to choose the language family (e.g. English) and sub-type (UK, Australia etc.). These choices are determined by the preferences you have previously set. That is, the expectation is that you only work with a few preferred languages, and you can set these preferences once and then forget about them. You do this by pressing the Edit Languages button.
140 125 WordSmith Tools Manual The choices below may differ for each language: hyphens and numbers You can also specify whether hyphens are to count as word separators. If the hyphen box is will be treated as two words. checked [X], self-access Should numbers be included in a word-list as if they were ordinary words? If you leave this checkbox blank, words like $300, 50.3M or 10th will be ignored in word lists, key words, concordances etc. and replaced by a #. If you switch it on, they will be included. SI numbers : the International System of Units (SI) stipulates that "Spaces should be used as a thousands separator (1 000 000) in contrast to commas or periods (1,000,000 or 1.000.000) to reduce confusion resulting from the variation between these forms in different countries." So numbers like 1,234,567.89 would be written 1 234 567.89. If you wish WordSmith to recognise such forms as one number each, leave this box checked, otherwise such a form in text would be counted as three successive numbers (1, 234, and 567.89). characters within word WordSmith automatically includes as valid alphabetical symbols all those determined by the operating system as alphabetical for the language chosen. So, for English, A to Z and common é . For Arabic or Japanese, whatever characters Microsoft have determined count accents such as as alphabetic. But you may wish to allow certain additional characters within a word. For example, in English, the father's apostrophe in is best included as a valid character as it will allow processing to deal with the whole word instead of cutting it off short. (If you change language to French you might not want apostrophes to be counted as acceptable mid-word characters.) Examples: ' (only apostrophes allowed in the middle of a word) (both apostrophes and percent symbols allowed in the middle of a word) '% (both apostrophes and underscore characters allowed in the middle of a word) '_ You can include up to 10. If you want to allow fathers' too, check the allow to end of word box. If this is checked, any of these symbols will be allowed at either end of a word as long as the character isn't all by itself (as ). in " ' " Plain Text/HTML/SGML 131 in HTML, SGML or XML our texts may be Plain Text in format: the default. If they are tagged Y 435 you should choose one of the options here. That way, the Tools can make optimum use of sentence, paragraph and heading mark-up. (start & end of) headings For the Tools to count headings, they need to know how to recognise the start and end of one. If 131 and , type and in here. (# stands for any your text is tagged e.g. with # is not the same as digit, ## for two, etc.) Whatever you type is case sensitive: . (If 435 and sometimes , text which is not consistent, using sometimes you have HTML 9 to make your texts consistent). then use Text Converter sections (start & end of) and , the Tools will treat identify sections. Again, If these boxes contain eg. whatever you type is case sensitive. December, 2015. Page 125 141 126 Controller (start & end of) sentences 425 auto , the Tools will treat sentences as defined If this space contains the word (ending with a full stop, question mark or exclamation mark, and followed by a capital letter), but if your text is tagged 131 and e.g. with , type those in here. Again, whatever you type is case sensitive. (start & end of) paragraphs For the Tools to recognise paragraphs, they need to know what constitutes a paragraph start and/or end, e.g. 
a sequence of two s (where the original author pressed Enter twice) or an 131 followed by a . For that you would type e.g. with . If your text is tagged and , you can type the tag in here. Case sensitive, too. In many cases you may consider that defining a paragraph end will suffice (considering everything up to it to be part of the preceding one). Much HTML text does not consistently distinguish between paragraph starts and ends. instead of , but you can leave here as Note that spoken texts in the BNC use instead if the text has no in it. WordSmith will use 7 120 131 . Processing text in Chinese , Choosing a new language , Stop Lists See also: Tagged Text . etc text dates and time-lines 5.37 The point of it ... -- that is, studying change through time. diachronically The idea is to be able to treat your text files You might want a concordance, for example, to be ordered by the text date. Or you might be interested in knowing when a certain word first appeared in your corpus and whether it gained web popularity in succeeding years. Or whether the collocates of a word like changed before 1990 and after. This screenshot shows a time-line based on concordancing energy/emissions/carbon in about 30 million words of UK newspaper text dealing with climate change, 2000-2010. December, 2015. Page 126 142 127 WordSmith Tools Manual The first line shows overall data where all results on three search-terms are merged. Concordance hits are represented as a graph with green lines and little red blobs for each time period. The grey rectangles and the grey graph line both represent the same background information, namely the amount of word-data searched. The difference is merely that the grey line is twice as high as the rectangles below it. The number of hits in each year is mostly roughly proportional to the amount of text being examined, though in 2006 and 2009 for the term it seems that the hit rate was slightly emissions higher. In the first half of the decade carbon was rather under-mentioned in proportion to the amount of climate-change data being studied. 48 See also choosing text files: setting file dates 5.38 window management 4 will be at the top left corner of your screen, half the screen The main WordSmith Tools Controller width and half the screen height in size. Other Tools will appear in the middle. Each Tool main window will come just below any previous ones. Make use of the Taskbar (or Alt-tab, which helps you to switch easily from one window to the next). "Start another Concord window"? to start another concordance. You will see this if you already have a window of data and press New windows open for each Tool, each with different data . You can have any number of minimising, moving and resizing windows windows can be stretched or shrunk by putting the mouse cursor at one edge and pulling. They All can be moved most easily by grabbing the top bar, where the caption is, and pulling, using the mouse. You can minimise a window: it becomes an icon which you restore by clicking on it. If you maximise it, it will fill the entire screen of the Tool concerned. These are standard Windows 4 window Controller functions. It's okay to minimise the main when using individual Tools. tile and cascade or Cascade the Tools from the main WordSmith Tools program. You can Tile restore last file A convenience feature: the last file you saved or retrieved will by default be restored when you re- enter WordSmith Tools. I've kept it to one only to avoid screen clutter! 
This feature can be turned off (in yo ur Documents temporarily via a settings option or permanently in wordsmith6.ini folder). You can also generally access your last saved result in any Tool by right-clicking \wsmith6 and choosing last file: December, 2015. Page 127 143 128 Controller 5.39 word clouds The point of it... Many of the lists in WordSmith offer a word cloud feature similar to those you have probably seen on 450 . the web. The idea is to promote pattern-noticing How to get here This function is accessed from the Compute menu, sub menu-item Word Cloud ( ) in the various Tools. Examples 189 189 you can get a word cloud based on any column of data. In this case of collocates of cash With Concord clusters based on , this example was computed: cash In the case of key words, you can get something like this: December, 2015. Page 128 144 129 WordSmith Tools Manual In this case the word cloud was based on the key words of a novel, Bleak House (Charles Dickens). The highlighted word Guppy is the name of one of the characters and details of this word are shown to the right. What you can see and do The Copy and Print buttons do what their names suggest. The Refresh button recalculates the cloud, e.g. after you have deleted items in the original data. As your mouse hovers over a word in the cloud you get details of that individual word. You can change the word cloud settings in the main Colours setting in the Controller. The font sizes range from a minimum of 8 to a maximum of 40 depending on the range of values in 78 is the one you may choose for any of your standard displays. your data. The font 5.40 zap unwanted lines To restore the correct order to your data after editing it a lot or marking lines for deletion, press the Zap button ( or Ctrl+Z). This will permanently cut out all lines of data which you have deleted (by pressing Del) unless you've restored them from deletion (Ins). In the case of a word list, it will also re-order the whole file in correct frequency order. Any deleted s may still be entries are lost at this stage. Any which have been assigned as lemmas of head word viewed, before or after saving. However, after zapping, lemmas can no longer be undone. In the case of a concordance, you may wish the list of filenames to be re-computed to reflect only the files still referred to in your concordance. To do that, choose Compute | Filenames . 70 . See also : reduce data to N entries December, 2015. Page 129 145 WordSmith Tools Manual Tags and Markup Section VI 146 131 WordSmith Tools Manual Tags and Markup 6 6.1 overview What is markup for? Marked up text is text which has extra information built into it with tags, e.g. "We like spaghetti.". You may wish to concordance words or tags... You may wish to see this additional information or ignore it, so that you just see the plain text ("We like spaghetti."). WordSmith has been designed so that you can choose what to ignore and what to see. 435 tags or entity references: if your text has É You may want to translate HTML or SGML É you probably want to see . You may wish to select within text files, e.g. cutting out a header or getting only the conclusions, instead of using the whole text. And you might want to get WordSmith to choose only files meeting certain criteria, e.g. having " " in a text file header section, where the speaker is a woman. sex=f You can see the effect of choosing tags if you select the Choose Texts option, then press the View 390 button. Any retained tags will be visible, and ignored tags replaced by spaces. 
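To make the difference between marked-up and plain text concrete, here is a minimal Python sketch of what ignoring angle-bracketed tags amounts to. It is an illustration only, not WordSmith's own processing (which, as the following sections explain, lets you retain chosen tags, translate entity references and select parts of files): the pattern simply treats everything from a < up to the next > as mark-up and replaces it with a space.

import re

marked_up = 'We <w POS="PRON">like</w> <w POS="NOUN">spaghetti</w>.'
# Replace each <...> stretch with a space, roughly what "ignore all tags" does.
plain = re.sub(r"<[^>]*>", " ", marked_up)
print(" ".join(plain.split()))  # -> We like spaghetti .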
Tags and Markup Settings ... are accessed via an Advanced Settings button in the Controller 141 132 , Handling Tags , Showing Nearest See also: Guide to handling the BNC , Making a Tag File 145 199 212 198 , , Types of Tag , Tag Concordancing , Concord Sound and Video Tags in Concord 153 311 134 390 , Tags in WordList , Using Tags as Text Selectors Viewing the Tags , XML text December, 2015. Page 131 147 132 Tags and Markup 6.2 choices in handling tags ignore all tags Specify all the opening and closing symbols in Main Settings | Advanced | Tags |Mark -up to ignore and such tags will be simply left out of word lists and concordances, as if they weren't in the original text files. example : <*> < symbol and ending at This will cut out all wording starting at each > symbol (up to 200 characters apart). (You can put more the next than one pair of brackets, e.g. <*>;[*] if you like.) ignore some tags and retain others 141 If you want to ignore some but retain others, you will need to prepare a tag file which lists all those you want to keep. These will then appear in your word lists and concordances. You get WordSmith Tools to read this text file in by choosing the Tag File menu option under Settings. Such tags will then be incorporated into your word lists, concordances, etc. as if they were ordinary words or suffixes. <*> , <body> and example: supposing you've set as "tags to ignore", but listed as tags to retain in your tag file, WordSmith will keep any instances of <title> <conclusion> , <body> or <conclusion> in your data but will ignore <introduction>, <Ulan Bator>, <threat> , etc. Tags to retain will only be active if there's a file name visible and you have pressed the Load or Clear button. If you press Load , you will see which tags have been read in from the tag file. translate entity references into other characters 435 tagged text, you may want to translate symbols. For example, If you use XML, SGML or HTML 141 SGML, XML, HTML use instead of a long dash. To do this, first prepare a Tag File — which contains the strings you want to translate. Then choose Main Settings | Advanced | Tags & and choose your entity file. WordSmith will then Mark up | Entity File (entities to be translated) translate any entity references in this file into the corresponding characters. to load up these tag files automatically 133 . See Custom Settings 141 131 , Overview of Tags , Making a Tag File , Showing Nearest See also: Guide to handling the BNC 390 199 198 145 Tags in Concord , Using Tags as , Viewing the Tags , Types of Tag , Tag Concordancing 315 134 , Tags in WordList Text Selectors December, 2015. Page 132</p> <p><span class="badge badge-info text-white mr-2">148</span> 133 WordSmith Tools Manual 6.3 custom settings Custom Tagsets In the main Settings | Tags & Mark up window, you may see custom settings choices like this. The point of it... The point of this choice is to change a whole series of settings according to the type of corpus you wish to process. When you change the setting above, any valid data as explained below will get loaded into your defaults. How to do it Press the Edit button to create or edit the custom settings (the file is called custom_tag_settings.xml and it'll get saved in your Documents\wsmith6 folder). December, 2015. Page 133</p> <p><span class="badge badge-info text-white mr-2">149</span> 134 Tags and Markup Add Shakespeare for processing the To start a new set, press and give a suitable name (such as Shakespeare corpus ). Fill in the boxes and press Save. 
All boxes will have leading and trailing spaces removed. · · Use auto for automatic processing e.g. of sentence ends. box means that this set gets chosen by default and any tag or entity files · Checking the default will get automatically loaded for you. 134 See also : Tags as text selectors 6.4 tags as selectors Defaults 44 all sections of all texts selected in Choose Texts but cut out all angle- The defaults are: select bracketed tags. Custom settings December, 2015. Page 134</p> <p><span class="badge badge-info text-white mr-2">150</span> 135 WordSmith Tools Manual There are various alternatives in this box which help your choices with the boxes below. Choosing British National Corpus World Edition (as in the screenshot) will for example automatically put </ 133 into the Document header ends box below. You can also edit the options and their teiHeader> effects. Markup to ignore 435 files, leave something like [ ] or If you want to cut out unwanted tags eg. in HTML or < >; < > in . The "search-span" means how far should WordSmith look for a closing [ ] Mark up to ignore > after it finds a starting symbol such as symbol such as . (The reason is that these symbols < might also be used in mathematics.) Markup to INclude or EXclude December, 2015. Page 135</p> <p><span class="badge badge-info text-white mr-2">151</span> 136 Tags and Markup 141 Making a Tag File . See Entity file 141 Making a Tag File . See Text Files and Mark-up WordSmith to use tags to select one section of a text and ignore the rest. However, you can get texts: that is, get WordSmith to look This is "selecting within texts". You can also select between within the start of each text to see whether it meets certain criteria. 137 Main Settings | Advanced | Tags | Only If Containing These functions are available from or Only 138 . Part of File Document Header When you process a set of texts usually containing a standard header (e.g. a copyright notice) you may wish to remove it automatically. Ensure that some suitable tag is specified as above in the </teiHeader> example. (If you choose Custom Settings above, you will get suitable choices automatically.) The process cuts by looking for the Document header ends mark-up and deleting all text to that point. (If you have a header repeated in the same text file, WordSmith will need to be told what mark-up is used for Document 138 to get such headers removed.) too , and you will need to choose Only Part of File header starts 137 For more complex searches, you might want to choose the Only If Containing or Only Part of 138 buttons visible above. File The order in which these choices are handled If you choose either to select either between or within texts, WordSmith will check that each text file meets your requirements, before doing your concordance, word list, etc. It will 137 to check whether it contains the words you've specified; 1. Select between files 138 "; 2. Cut out any section specified as a "section to cut 138 ", cut out everything which is not within them; 3. If there are "sections to keep December, 2015. Page 136</p> <p><span class="badge badge-info text-white mr-2">152</span> 137 WordSmith Tools Manual 138 4. Cut start of each line , if applicable; 132 ; 5. Process any entity references you want to translate 132 any tags not to be retained (see the "Mark-up to ignore" section of the screenshot 6. Ignore above). 
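Selecting within texts, that is keeping only certain tagged sections and discarding the rest, can be pictured with a short Python sketch. This is a simplified, hypothetical illustration of the "sections to keep" idea (case sensitive, like the real function), not the program's own code; the play text and speaker tags are invented for the example.

def keep_sections(text, start_tag, end_tag):
    # Collect every stretch from start_tag up to the next end_tag.
    kept, pos = [], 0
    while True:
        s = text.find(start_tag, pos)
        if s == -1:
            break
        e = text.find(end_tag, s + len(start_tag))
        if e == -1:
            break
        kept.append(text[s + len(start_tag):e])
        pos = e + len(end_tag)
    return kept

play = "<Peter>Good morning.</Peter><Mary>Hello!</Mary><Peter>How are you?</Peter>"
print(keep_sections(play, "<Peter>", "</Peter>"))
# -> ['Good morning.', 'How are you?']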
198 131 141 132 , , Tag Concordancing , Tag Handling , Making a Tag File See also: Overview of Tags 199 145 390 , Types of Tag , Viewing the Tags Showing Nearest Tags in Concord , Guide to handling the 153 , XML text BNC 6.5 only if containing... The point of it You might want to process only the speech of elderly men, or only advertising material, or only classroom dialogues. This function allows WordSmith to search through each text, e.g. in text headers, ensuring that you get the right text files and skip any irrelevant ones. Suppose you have a large collection of texts (e.g. the British National Corpus) and you cannot remember which of them contain English spoken by elderly men. sex=m for males, age=5 for speakers aged Knowing that the BNC uses stext> for spoken texts, 60 or more, you can get WordSmith to filter your text selection. It will search through the whole of 437 every text file (not just the tags or header sections, in fact the first 2 megabytes of the file) to check that it meets your requirements. You can specify up to 15 tags, each up to 80 characters in length. They will be case sensitive (i.e. by mistake). Age=5 you will get nothing if you type Horizontally, the options represent combinations linked by "or". Vertically, the combinations are "and" links. The bottom set represents "but not" combinations. After your text files have been processed, you will be able to see which met your requirements in 44 50 the Text File choose window . and can save the list for later use as favourites Examples: December, 2015. Page 137</p> <p><span class="badge badge-info text-white mr-2">153</span> 138 Tags and Markup roses violets or seeds , and flowers must be You only want text files which contain either or garden and spade present too, so must lime juice to be present in the But you do not want text. If you want book or hotel but only if they're not in a text file containing publish or Booker hotel Prize : write book into the first box, in in the box beside it, and publish* and Booker * the first two boxes in the bottom row. 134 138 359 , Selecting within texts , Extracting text sections , Filtering See also: Tags as Selectors 354 360 , Guide to handling the BNC using Text Converter your text files part of file:selecting within texts 6.6 The point of it The aim is to let you get WordSmith to process only specific parts of your text files, getting rid of chunks you're not interested in. Cut out or Keep? Keep tab to choose to cut out certain sections, and/or only to use certain sections. or Cut Press the December, 2015. Page 138</p> <p><span class="badge badge-info text-white mr-2">154</span> 139 WordSmith Tools Manual Sections to Cut Note: if you only want to remove a document header such as </header> , it is easier to do that in 134 the general tag settings , section Document Header. For more complex choices, you may here specify what is to be cut, where it starts (for example <introduction> ) and where you want to cut to (e.g. </introduction> ). You can choose to cut out up to 7 different and separate sections ( <HEAD> to </HEAD> ). This <BODY> to or </BODY> function is case-sensitive and cuts out any section located as many times as it is found within the whole text. Cut start of each line/paragraph The point of this is that some corpora (e.g. LOB) have a fixed number of line-detail codings at the start of each line. Here you want to cut them out (that is, after every <Enter>). Choose the number of characters to cut, up to 100; the default is 0. 
Use -1 if you want to cut everything up to the first alphabetical character at the start of each line, and -2 to cut everything up to the first tab. Sections to Keep (contexts) December, 2015. Page 139</p> <p><span class="badge badge-info text-white mr-2">155</span> 140 Tags and Markup You want to select just one or two sections of the text and cut out the rest. Specify one tag to define the desired start, and one to specify the end, e.g. Body> <Intro> to < Mary> </Mary> (these would get all of Mary's (these would analyse only text introductions), or < to contributions in the discourse but nothing else). <Peter> to Here we have chosen to use 2 different sections, to get the sections </Peter> spoken by Peter and to </Hong Kong> to get the sections marked up as referring <Hong Kong> to Hong Kong as well. < or > symbol to define each Naturally you must be sure that there is something unique like a <PETER> section. This function is case sensitive (so it would not find ). 435 If you used to </H1> with this function in HTML text you'd get all the major headings in <H1> your texts, however many, but nothing else. <INTRO> The "off" switch doesn't have to look like the "on" switch -- you could keep, for example, to </BODY> and thereby cut out the conclusion if that comes after the </BODY> . Ignore text files not containing choices If this is checked, your text files will be examined to ensure they contain the mark-up for sections to <Peter> and <Hong Kong>) . keep (here OK Once you've pressed OK, you will see that WordSmith knows you want only certain parts of each file because the Only Part of File button goes red (as will the Only if Containing button if there were December, 2015. Page 140</p> <p><span class="badge badge-info text-white mr-2">156</span> 141 WordSmith Tools Manual sections to keep and the Ignore text files not containing choices box was checked). 134 137 , Guide to handling the BNC , Only if containing <x> See also: Tags as Selectors . 6.7 making a tag file Tag Syntax Each tag is case sensitive. and end with > but the first & last characters of the tag can be any Tags conventionally begin with < symbol. You can use * to mean any sequence of characters; to mean any one character; ? # to mean any numerical digit. [ to insert comments in a tag file, since [ is useful as a potential tag symbol. You can Don't use # to represent a number (e.g. <h#> will pick up <h5>, <h1> , etc.). And use ? to represent use <?> <s>, <p> , etc.), or * to represent any number of characters any single character ( will pick up (e.g. <u*> will pick up <u who=Fred>, <u who=Mariana> , etc.). Otherwise, prepare your tag 120 . Stop Lists list file in the same way as for or any other plain text editor, to create a new .tag file. Write one entry on each Use notepad . line Any number of pre-defined tags can be stored. But the more you use, the more work WordSmith has to do, of course and it will take time & memory ... Mark-up to EXclude December, 2015. Page 141</p> <p><span class="badge badge-info text-white mr-2">157</span> 142 Tags and Markup <SCENE>A public library in London. A A tag file for stretches of mark-up like this bald-headed man is sitting reading the News of the World.</SCENE> where you want to exclude the whole stretch above from your concordance or word list, e.g. because you're processing a play and want only the actors' words. Mark-up to exclude will cut out the whole string from the opening to the closing tag inclusive. For the Shakespeare corpus , a set of tags to EXclude might be used. 
(The idea is not to process any stage directions when processing the Shakespeare corpus.) The syntax requires ></ or >*</ to be present. Legal syntax examples would be: <SCENE></SCENE> <SCENE>*</SCENE> <SCENE #>*</SCENE> <HELLO?? #>*</GOODBYE> is followed by 2 characters, a space and a number then (In this last example it'll cut only if <HELLO > , and if is found beyond that.) </GOODBYE> <SCENE>* </SCENE> won't work, because both parts of the tag must be on the same line. <SCENE>*<\SCENE> won't work, because the slash must be / . With your installation you will find ( Documents\wsmith6\sample_lemma_exclude_tag.tag ) in cluded, which cuts out lemmas if constructed on the pattern <lemma tag="*>*</lemma> , i.e. with the word tag , an equals sign and a double-quote symbol, regardless of what is in the double- quotes. Mark-up to INclude A tag file for tags to retain contains a simple list of all the tags you want to retain. Sample tag list December, 2015. Page 142</p> <p><span class="badge badge-info text-white mr-2">158</span> 143 WordSmith Tools Manual files for BNC handling (e.g bnc world.tag ) are included with your installation (in your Documents\wsmith6 folder): you could make a new tag file by reading one of them in, altering it, and saving it under a new name. 60 colour Tags will by default be displayed in a standard tag (default=grey) but you can specify the foreground & background for tags which you want to be displayed differently by putting /colour="foreground on background" e.g. <noun> /colour="yellow on red" Available colours: 'Black','White','Cream', 'Red','Maroon', 'Yellow', 'Navy','Blue','Light Blue','Sky Blue', 'Green','Olive','Dollar Green','Grey-Green','Lime', 'Purple','Light Purple', 'Grey','Silver','Light Grey','Dark Grey','Medium Grey'. The colour names are not case sensitive (though the tags are). Note UK spelling of "grey" and "colour". Also, you can put "/play media" if you wish a given tag, when found in your text files, to be able to 147 attempt to play a sound or video file . For example, with a tag like <sound *> /colour="blue on yellow" /play media and a text occurrence like <sound c:\windows\Beethoven's 5th Symphony.wav> or <sound http://www.political_speeches.com/Mao_Ze_Dung.mp3> 212 . you will be able to choose to hear the .wav or .mp3 file Finally, you can put in a descriptive label, using /description "label" like this: <w NN*> /description "noun" /colour="Cream on Purple" <ABSTRACT> /description "section" <INTRODUCTION> /description "section" <SECTION 1> /description "section" Tagstring_only tags You can also define two tags as ones you want to use to mark the beginnings and ends of what will be shown in a concordance using /tagstring_only as a signal. For example, if concordancing and , you may want to see only the title text containing titles marked out with text. You'd include in the tag file <title> /tagstring_only /tagstring_only in Concord's To get Concord to show only the text between these two, choose View | Tag string only menu. Section tag In the examples using "section", Concord's "Nearest Tag" will find the section however remote in the text file it may be. December, 2015. Page 143 159 144 Tags and Markup This is particularly useful e.g. if you want to identify the speech of all characters in a play, and have a list of the characters, and they are marked up appropriately in the text file. /description "section" /description "section" /description "section" Here is an example of what you see after selecting a tag file and pressing "Load". 
The first tag is a "play media" tag, as is shown by the icon. You can see the cream on purple colour for nouns too. The tag file ( BNC World.tag ) is included in your installation. Entity File (entities to be translated) If you load it you might see something like this: December, 2015. Page 144 160 145 WordSmith Tools Manual A tag file for translation of one entity reference into another uses the following syntax: entity reference to be found + space + replacement. Examples: É É é é the sample tag file for translation ( Documents\wsmith6 In the screenshot above, ) which is in cluded with your installation has been loaded. You could make a new \sgmltrns.tag one by reading it in, altering it, and saving it under a new name. 132 199 131 Handling Tags See also: Showing Nearest Tags in Concord , Tag , , Overview of Tags 198 145 134 390 Using Tags as Text Selectors , Guide Concordancing , Types of Tag , Viewing the Tags , to handling the BNC . 6.8 tag-types You will need to specify how each tag type starts and ends, and you should be consistent in usage. Restrict yourself to symbols which otherwise do not appear in your texts. eight special markers Eight kinds of marker may be marked as significant for word lists: those which represent starts and 147 147 147 147 ends of headings . Type these in the appropriate and paragraphs , sentences , sections 124 . spaces when selecting Text Characteristics December, 2015. Page 145 161 146 Tags and Markup 427 tags within 2 separators These tags are often used to signal the part of speech of each word; they're also widely used in 435 # to switch HTML, XML, SGML to switch on Heading 1 style and for "switches", e.g. it off again. You should use the same opening and closing symbols, usually some kind of brackets, 435 markup): for all your tags (as the British National Corpus does using SGML or XML . ,, entity references 435 HTML, XML and SGML use so-called entity references for symbols which are outside the été which represents standard alphabet, e.g. . été Specify these two types of markup by choosing Settings/Tag Lists, or Settings/Text Characteristics/ Tags. You will then see a dialogue box offering Text to Ignore and a Browse button. 132 option allows you to specify tags which you do not want to see in the The Tags to Ignore concordance or word list results. 141 The Tags to be INcluded option allows you to specify a tag file, containing tags which you do want to see in the concordance or word list results. 141 The Tags to be EXcluded option allows you to specify a different tag file, containing stretches of tags which you want to find and remove in the concordance or word list results. 132 The Tags to be Translated option allows you to specify entity references which you want to é . convert on the fly, such as multimedia markers Text files can be tagged for reference to sound or video files which you can hear or see. For example, a text might contain something like this: blah blah blah ... blah blah etc. A concordance on blah blah 147 . could pick up the tag so you can hear the source mp3 file. See defining multimedia tags 131 132 141 , Showing Nearest Tags in See also: Overview of Tags , Handling Tags , Making a Tag File 390 198 199 134 , Viewing the Tags Concord , Tag Concordancing , Using Tags as Text Selectors , 212 Concord Sound and Video , Guide to handling the BNC . (A particular sub-variety of tags within 2 separators sometimes used is tags with underscores at the left and space at the right like this He_PRONOUN entered_VERB the_DET room_NOUN . 
125 To process these, you will need to declare the underscore a valid character , or else convert your 368 corpus to a format like. He entered the room .) 6.9 start and end of text segments WordSmith attempts to recognise 4 types of text segment: sentences, paragraphs, headings, sections. Processing is case sensitive. You can use and as strings representing is another option. an end of paragraph or a tab in your texts. For sentence ends, auto December, 2015. Page 146 162 147 WordSmith Tools Manual 81 . Define these in your language settings Sentences the end. If you leave the For example, might represent the beginning of a sentence and 426 choice as auto , ends of sentences are determined by according to the definition of a sentence 390 of handling sentence recognition.) which gives a approximation. (There is no 100% accurate way Paragraphs might represent the beginning of a paragraph and For example, or the end. Headings the end. Note that the British For example, might represent the beginning and National Corpus marks sentences within headings. Eg. Introduction HXL . It seems odd for the one word Introduction to count as a sentence, so WordSmith in text does not use sentence-tags within headings. Sections the end. For example, might represent the beginning and etc. is encountered. , Each of these is counted preferably when its closing tag such as If there are no closing tags in the entire text then paragraphs will be counted each time the opening paragraph tag is found. 199 132 131 See also: Overview of Tags , Handling Tags , Showing Nearest Tags in Concord , Tag 145 198 134 390 , Using Tags as Text Selectors Concordancing , Types of Tag , Viewing the Tags , Guide . to handling the BNC 6.10 multimedia tags In this screenshot you see an example of how to define your multimedia tags. This is accessed from Main Settings | Advanced | Tags | Media Tags . December, 2015. Page 147 165 150 Tags and Markup User-defined categories 168 For example, suppose you have marked your concordance lines' Set column like this: where the first line with miracle pre-modifies the noun cure and is marked a djectival but the second is an ordinary noun, and wish to save this in your original source text files. How to do it Choose Compute | Modify Source Texts . and if you want to save the Set choices, choose OK here: December, 2015. Page 150 166 151 WordSmith Tools Manual and the set choices will be marked as in this example: (seen by double-clicking the concordance line to show the source text). Multi-word unit search phrase Alternatively if you choose the search-phrase option: and December, 2015. Page 151 167 152 Tags and Markup then any search word containing a space will have underscores (or whatever other character you choose above) in it to establish multi-word units: Here, the search word or phrase was Rio de Janeiro , and the result of modifying the source texts was this: Add Time & Date stamp option This keeps a log of all your changes, enabling the changes to be undone later. Initials option Adds your initials to the changes. Leave empty if not wanted. The tag above means a user whose initials were MS made this change and it was the 3rd change. December, 2015. Page 152 168 153 WordSmith Tools Manual To undo previous changes If you have used the "time and date stamp" option shown above, you will be able to undo the modifications. The undo window shows all your log. You can choose all those done on a certain day, or by the person whose initials are visible at the right. 
Here we see the 4 modifications changing Rio de Janeiro into Rio_de_Janeiro . 168 See also: user-defined categories 6.12 XML text What is XML? XML text has angle-bracketed mark-up which provides additional information. For example the British National Corpus has text which is structured like this: I mean , where do eating disorders come from ? ... signals a sentence signals that the next word is a pronoun (coded PNP ), head-word is "i", signals that the next word is a plural noun belonging to the head-word " disorder" and it's a substantive. c5="NN2" is another attribute. There can be attribute of the is an start-tag, hw="disorder" 169 154 Tags and Markup WordSmith's handling of XML By default, WordSmith simply ignores all the mark-up so a word list will only get the words in black inserted in it, a concordance will only see those words ( I mean, where do eating disorders come from? ). Searching using Attributes If you want to search for all instances of NN2 forms (plural nouns), you'd need to type * as your search-word and answer yes to the question as to whether you're concordancing on tags. You would get results like this: Hide the mark-up If you prefer not to see all that the mark-up in grey, choose to hide the undefined mark-up December, 2015. Page 154 170 155 WordSmith Tools Manual There is a button in the main tool which can show or hide mark-up, too. Asterisks in your search-word In the example above, we search on * because each start-tag where NN2 forms are found starts with and another asterisk because the word which follows will be right next to the > our corpus. For two successive parts of speech, * * looks for any article (the/a/an) followed by any singular count noun. A search on * where we are allowing NN1 or NN2 and requiring the hw to be player ,gets results like this: Another example Searching Italian .XML containing text like this: December, 2015. Page 155 171 156 Tags and Markup and wishing to find all cases of the ARTPRE part of speech, with the search-word specified like this and answering yes to this: we get a considerable concordance with entries like this: (I have no idea why there are % symbols in the source .XML, by the way.) See also : Handling the BNC December, 2015. Page 156 172 WordSmith Tools Manual Concord Section VII 173 158 Concord 7 Concord 7.1 purpose 159 a program which makes a concordance using plain text or web text files. Concord is 159 seek in all the text files To use it you will specify a search word or phrase , which Concord will you have chosen. It will then present a concordance display, and give you access to information about collocates of the search word, dispersion plots showing where the search word came in each file, cluster analyses showing repeated clusters of words (phrases) etc. The point of it... The point of a concordance is to be able to see lots of examples of a word or phrase, in their contexts. You get a much better idea of the use of a word by seeing lots of examples of it, and it's by seeing or hearing new words in context lots of times that you come to grasp the meaning of most of the words in your native language. It's by seeing the contexts that you get a better idea about how to use the new word yourself. A dictionary can tell you the meanings but it's not much good at showing you how to use the word. Language students can use a concordancer to find out how to use a word or phrase, or to find out example, it's through using a which other words belong with a word they want to use. 
For can describe , concordancer that you could find out that in academic writing, a , or show , paper claim believe or want (* this paper wants to prove that ...). though it doesn't Language teachers can use the concordancer to find similar patterns so as to help their students. They can also use Concord to help produce vocabulary exercises, by choosing two or three search- 168 97 . them out, then printing words, blanking through a database of hospital Researchers can use a concordancer, for example when searching , grease accident records, to see whether ladder . Or to examine fracture is associated with fall , . land ownership historical documents to find all the references to Online step-by-step guide showing how index 7.2 Explanations 455 What to do if it doesn't do what I want... 158 What is Concord and what's it for? 179 Collocation 181 Collocation Display 191 Plots December, 2015. Page 158 174 159 WordSmith Tools Manual 175 Clusters 207 Patterns Settings 44 Choosing texts 180 Collocate horizons 187 Collocate settings 163 Concordance settings 202 Context word 222 Main Controller Concordance Settings 199 Nearest Tag 159 Search word or phrase 198 Tag Concordancing 131 Tagged Texts 124 Text settings Procedures 165 What you can See and Do 220 Altering the View 168 Blanking Out a Concordance 208 Re-sorting a Concordance 207 Removing Duplicate lines 189 Re-sorting Collocates 168 User-defined categories 206 Editing Concordances 262 Merging Concordances 212 Sound and Video in Concord 2 see also : WordSmith Main Index 7.3 what is a concordance? might look a set of examples of a given word or phrase, showing the context. A concordance of give like this: ... could not give me the time ... ... Rosemary, give me another ... ... would not give much for that ... A concordancer searches through a text or a group of texts and then shows the concordance as output. This can be saved, printed, etc. 7.4 search-word or phrase search word syntax 7.4.1 By default, Concord does a whole-word non-case-sensitive search. Basic Examples finds search word book book Book BoOk or or December, 2015. Page 159 175 160 Concord book* book , book s, book ing, book ed *book textbook (but not textbook s ) b* banana, baby, brown etc. *ed walk ed, wanted, pick ed etc. bo* in book in, book s in, book ing in (but not book into ) book * hotel book a hotel, book the hotel, book my hotel bo* in* book in, book s in, book ing in, book into book? book , book s, book ; book . book^ book , book s b^^k book , back , bank , etc. ==book== book (but not BOOK or Book ) book/paperback book or paperback symbol meaning examples tele* * disregard the end of the word, *ness disregard a whole word *happi* book * hotel Engl??? ? any single character (including ?50.00 punctuation) will match here$# any sequence of numbers, 0 to # £#.00 9 Fr^nc^ ^ any single letter of the alphabet will match here ==French== case sensitive == ==Fr*== c:\text\frd.txt :\ means use a file for lots of search- words (see file-based 161 search_words ) may/can/will / separates alternative search- words. You can specify alternatives within an 80- character overall limit <> beginning & end of tags Advanced Search-word Syntax If you want to use *, ? , == , #, ^ , :\, >, < or / as a character in your search word, put it in double quotes. Examples: "*" Why"?" and"/"or ":\" "<" December, 2015. Page 160
176 161 WordSmith Tools Manual Don't forget that question-marks come at the end of words (in English anyway) so you might need *"?" If you need to type in a symbol using a code, they can be symbolised like this: {CHR(xxx)} where is the decimal number of the code. Examples: {CHR(13)} is a carriage-return, xxx {CHR(9)} {CHR(10)} which comes at the is a line-feed, is a tab. To represent {CHR(13)} end of paragraphs and sometimes at the end of each line, you'd type {CHR(10)} which is carriage-return followed immediately by line-feed. {CHR(34)} refers to double inverted commas. {CHR(46)} is a full stop. There is a list of codes at http://www.asciitable.com/ #x9 #x22 for double inverted commas. You can also use hex format for numbers, e.g. for tab, Tags You can also specify tags in your search-word if your text is tagged. Examples: meaning examples symbol * single common noun (BNC World) book, chair, elephant book, chair, * single common noun (BNC XML edition) elephant * book, chairs singular or plural common noun T or t table, teacher t* any single noun beginning with campaign * * two single common nouns in sequence manager 153 for XML formats see XML text handling 198 202 149 225 , , Modify source texts , Ignore punctuation See also: Tag Concordancing , Context Word 28 Wildcards file-based search-words 7.4.2 The point of it... To save time typing in complex searches. You may want to do a standard search repeatedly on different sub-corpora. Or as Concord allows an almost unlimited number of entries, you may wish to do a concordance 159 . involving many search-words or phrases The space for typing in multiple search-words is limited to 80 characters (including / etc.). If your preferred search-words will exceed this limit or you wish to use a standardised search, you can prepare a file containing all the search-words. How to do it... December, 2015. Page 161
177 162 Concord Documents\wsmith6\concordance_search_words.txt A sample ( ) is included with the distribution files. Use a Windows editor (e.g. Notepad) to prepare your own. Each one must be on a separate line of your file. No comment lines can be included, though blank lines may be inserted for readability. context:= as in this example: If you want to require a context for a given word, put book context:=hotel (which seeks book and only shows results if hotel comes in the context horizons). Then, instead of typing in each word or phrase in the Search Word dialogue box, just browse for the file. to read the entries (or Then press Load if you change your mind). Clear Lemmas and file-based concordancing 438 from WordList, and the highlighted word in the word Note that where Concord has been called up 270 , a temporary file will be created, listing the whole set of list is the head entry with lemmas lemmas, and Concord will use this file-based search-word procedure to compute the concordance. The temporary file will be stored in your Documents\wsmith6 folder unless you're running on a \windows\temp . It's up to you to network in which case it'll be in Windows' temporary folder, e.g. delete the temporary file. Automated file-based concordances If you want Concord to process a whole lot of different search-words, saving each result as it goes along so you can get a lot of work done with WordSmith unattended, choose SW Batch under December, 2015. Page 162
178 163 WordSmith Tools Manual 163 . Concordance Settings search-word and other settings 7.4.3 Search Word or Phrase and/or Tag 159 Type the word or phrase Concord will search for when making the concordance, or (below) the 161 435 name of a file of search words . You may also choose from a history list of your previous 159 or the set of examples shown in search words. For details of syntax, see Search Word Syntax this screenshot: 161 If you want to do many concordances in a file-based search , first prepare a small text file containing the search words, e.g. containing this that the other ==Major*== Press the file button to locate your text file, the press the Load button. This will then change its name to something like , where 4 means as in the example above that there are 4 different Clear 4 search-words to be concordanced. See "Batch" below for details on saving each one under a separate filename, otherwise all the searches will be combined into the same concordance. Advanced searches December, 2015. Page 163
179 164 Concord lemma list search 274 . If the lemma file you've loaded This option requires you to have chosen and loaded a lemma file speak -> speaks, spoke, spoken then if your search-word is speak , specifies for example the concordance will contain examples of all four forms. Context word(s) and search horizons You may wish to find a word or phrase depending on the context. In that case you can specify context word(s) which you want, or which you do not want (and if found will mean that entry is not used). For example, if the search word is and the context word is hotel , you'll get book, book* , but only if hotel is found within your Context books, booked, booking, bookable 202 . Or if the search word is book* and the exclude if box has hotel , you'll get Search Horizons , as long as book, books, booked, booking, bookable not found within your hotel is and the exclusion specifies fish *ish context search horizons. Or if the search word is , you'll get yellowish, greenish , etc. but not fish . book with a context word < ADJ>* in You may type tag mark-up in here too, e.g. search for position up to L3 will find book with a preceding adjective if your text has that sort of mark-up and if you've defined a tag file including . In the screenshot above you see that "stop at sentence break" has been selected, meaning that a collocation search will only go left or right of the search-word up to a sentence-end. This is further 188 explained here . December, 2015. Page 164
180 165 WordSmith Tools Manual Batch Suppose you're concordancing book* in 20 text files: you might want One concordance based on 39 which can be all 20 files (the default), or instead 20 separate concordances in a zipped batch 161 viewed separately (Text Batch). If you have multiple search-words in a file-based search as explained above, you may want each result saved separately (SW Batch ). Other settings affecting a concordance are available too: 420 222 ; Typing characters , see WordSmith Controller Concordance Settings 81 419 202 Accented characters ; Choosing Language , Context Word(s) & Context Search Horizons advice 7.5 You have a listing showing all the concordance lines in a window. You can scroll up and down and left or right with the mouse or with the cursor keys. Sort the lines If you have a lot of lines you should certainly sort them. A concordance is read vertically, not horizontally. You are looking for repeated patterns, such as the presence of lots of the same sorts of words to the right or left of your search-word. Click the bar at the top to start a sort. December, 2015. Page 165
181 166 Concord The Columns These show the details for each entry: the entry number, the concordance line, set, tag, word- position (e.g. 1st word in the text is 1), paragraph and sentence position, source text file name , and how far into the file it comes (as a percentage). See below for an explanation of the purple blobs . The statistics are computed automatically based on the language settings. Set This is where you can classify the entries yourself, using any letter, into user-defined 168 categories . Supposing you want to sort out verb uses from noun uses, you can press V or N. To type more (eg. "Noun"), double-click the entry in the set column and type what you 159 want. If you have more than one search-word , you will find the Set column filled with the search-word for each entry. To clear the current entry, you can type the number 0. To clear the whole Set column, choose Edit | Clear Set column. Tag 199 . This column shows the tag context More context? Stretching the display to see more You can pull the concordance display to widen its column. Just place the mouse cursor on the you can pull bar between one column and another; when the cursor changes shape the whole column. Stretch one line to see more context The same applies to each individual row: place the mouse cursor between one row and another in the grey numbered area, and drag. (F8) to "grow" all the rows, or (Ctrl+F8) to shrink them. Or press Or press the numeric key-pad 8 to grow the current line as shown below. (Use numeric key-pad 2 to shrink it.) December, 2015. Page 166
Viewing the original text-file (if it is still on the disk where it was when the concordance was originally created)
Double-click the concordance column, and the source text window will load the file and highlight the search word. Or double-click the filename column and it will open in Notepad for editing.

Other things you may wonder about

Weird purple marks
In the screenshot you will see purple marks where any column is not wide enough to show all the data. The reason is that numbers are often not fully visible and you might otherwise get the wrong impression. For example in the concordance below, the Word # column shows 4,569 but the true number might be 14,569. Pull the column wider and the purple lines disappear.

Status bar
The status bar panels show:
· the number of entries (1,000 in the "stretch one line" screenshot above);
· whether we're in "Set" or "Edit" mode;
· the current concordance line from its start.

See also: Re-sorting your concordance lines, Follow-up searches, User-defined categories, Altering the View, Blanking out the search-word,
Padding the search-word with spaces (use the search-word padding menu item to put a space on each side of the search-word), Collocation (words in the neighbourhood of the search-word), Plot (plots where the search-word came in the texts), Clusters (groups of words in your concordance), Text segments in Concord, Editing the concordance, Time-lines, Zapping entries, Saving and printing, Window Management.

7.6 blanking
In a concordance, to blank out the search-words with asterisks, just press the spacebar (or choose View | Blanked out). Press it again to restore them.

The point of it...
A blanked-out concordance is useful when you want to create an exercise. This one has put and give mingled:

... could not ********** me the time ...
... Rosemary, ********** me another ...
... would not ********** much for that ...
... could not ********** up with him ...
... so you'll ********** him a present ...
... will soon ********** up smoking ...
... he should ********** it over here ...

Concord will give equal space to the blanks so that the size of the blank doesn't give the game away.

See also: Other main Controller settings for Concord

7.7 Category or Set

7.7.1 set column categories

The point of it...
You may want to classify entries in your own way, e.g. separating adjectival uses from nominal ones, or sorting according to different meanings.
Here the user has used P where may has to do with probability and M if it's a month. In addition, some items have been labelled in more detail. You may wish also to modify your original texts to include this annotation work you've done.

How to do it
If you simply press a letter or number key while the edit v. set v. type-in mode is on Set, you will get the concordance line marked with that letter or number in the Set column.

You can sort the concordance lines using these categories, simply by clicking on the Set header, which will have a small triangle showing it's sorted.

To enter the same value for various rows, first select or mark the rows, then choose Edit | Set column
then type in a suitable value.

Colours
If you want to type something longer and optionally in a specific colour, double-click the Set column and you'll get a chance to type more. Here the word permission has been typed and a colour has been dragged onto the box.

Clearing the Set column
To correct a mistake, press the zero key; that will remove any text and colour from the selected entry. (If you press the spacebar you will get blanking.)

See also: Colour categories, edit v. type-in mode, modify your source texts.

7.7.2 colour categories

The point of it...
The idea is to follow up a large concordance by breaking it down into specific sub-sections, so one can see how many of each sub-type are found in the whole list.

Example
The screen-shot below came from a concordance of beautiful in Charles Dickens. There are 774 lines. Looking through them, it became apparent that Dickens was fond of collocations of beautiful such as the beautiful creature, beautiful face, so beautiful a creature or similar (such as creature in line 1) -- but how many are there, and what proportion of the lines is that?

How to do it
Choose Compute | Colour Categories in the menu, which opens up the colour categories box:
Here we have completed the spaces so as to get cases of beautiful ... creature up to 4 words away to the right, and chosen
to colour yellow any which meet this condition. On pressing OK we find out there are 16, representing just over 2% of the lines, and looking at the concordance the first line is now marked. Where are the other 15? To find them, simply sort on the Set column.
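In case it helps to see the arithmetic behind that figure, here is a minimal sketch of the same kind of count (purely illustrative: the example lines and the simple word-splitting are assumptions, not WordSmith's implementation):

def matches(line, node="beautiful", wanted="creature", horizon=4):
    # True if wanted occurs within `horizon` words to the right of node
    words = line.lower().split()
    hits = [i for i, w in enumerate(words) if w == node]
    return any(wanted in words[i + 1:i + 1 + horizon] for i in hits)

lines = [
    "she was a beautiful creature indeed",
    "a beautiful and most gentle creature",
    "the beautiful house stood empty",
]
n = sum(matches(l) for l in lines)
print(n, "of", len(lines), "lines =", round(100 * n / len(lines), 1), "%")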
This function applies to word lists and other data too, and is explained in more detail in the main colour categories section. The Set column itself can contain characters or words as well as colours, as explained in the set column section.

7.8 clusters

The point of it...
These word clusters help you to see patterns of repeated phraseology in your concordance, especially if you have a concordance with several thousand lines. Naturally, they will usually contain the search-word itself, since they are based on concordance lines. Another feature in Concord which helps you see patterns is Patterns.

How it does it...
Clusters are computed automatically if this is not disabled in the main Controller settings for Concord (Concord Settings), where you will see something like this:
where your usual default settings are controlled. "Minimal processing", if checked, means do not compute collocates, clusters, patterns etc. when computing a concordance. (They can always be computed later if the source text files are still present.)

Clusters are sought within these limits: by default, 5 words left and right of the search word, but up to 25 left and 25 right are allowed. The default is for clusters to be three words in length, and you can choose how many of each must be found for the results to be worth displaying (say 3 as a minimum frequency). Clusters are calculated using the existing concordance lines. That is, any line which has not been deleted or zapped is used for computing clusters. As with WordList index clusters, the idea of "stop at sentence breaks" (there are other alternatives) is that a cluster which spans across two sentences is not likely to make sense.

Re-computing clusters
The default clusters computed may not suit (and you may want to recompute after deleting some lines), so you can also choose Compute | Clusters in the Concord menu, so as to choose how many words a cluster should have (a cluster size of 2 to 4 words is recommended) and alter the other settings.
When you press OK, clusters will be computed. In this case we have asked for 3- to 5-word clusters and get results like this: the clusters have been sorted on the Length column so as to bring the 5-word clusters to the top. At the right there is a set of "Related" clusters, and for most of these lines it is impossible to see all of their entries. To solve this problem, double-click any line in the Related column and another window opens. Here is the window showing which clusters are related to the 3-word cluster the cause of, which is the most frequent cluster in this set:
"Related" clusters are those which overlap to some extent with others, so that the cause of overlaps with devoted to the cause of, etc. The procedure seeks out cases where the whole of a cluster is found within another cluster.

See also: general information on clusters, WordList Clusters, Word Clouds.
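To make the idea concrete, here is a small sketch of how repeated word clusters might be collected from concordance lines (an illustration of the general principle only; the example lines are invented, and WordSmith's own routine, its sentence-break handling and its "related clusters" logic are not reproduced):

from collections import Counter

def clusters(lines, size=3, min_freq=2):
    # count every run of `size` adjacent words across the concordance lines
    counts = Counter()
    for line in lines:
        words = line.lower().split()
        for i in range(len(words) - size + 1):
            counts[" ".join(words[i:i + size])] += 1
    return [(c, f) for c, f in counts.most_common() if f >= min_freq]

lines = [
    "devoted to the cause of liberty",
    "he gave his life to the cause of reform",
    "money raised for the cause of the poor",
]
for cluster, freq in clusters(lines):
    print(freq, cluster)
# only the repeated phraseology ("the cause of", "to the cause")
# survives the minimum-frequency filter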
7.9 Collocation

7.9.1 what is collocation?

What's a "collocate"?
Collocates are the words which occur in the neighbourhood of your search word. Collocates of letter might include post, stamp, envelope, etc. However, very common words like the will also collocate with letter.

and "colligation"?
Linkages between neighbouring words which involve grammatical items are often referred to as colligation. That rely is typically followed by a preposition in English is a colligational fact.

The point of it...
The point of all this is to work out characteristic lexical patterns by finding out which "friends" words typically hang out with. It can be hard to see overall trends in your concordance lines, especially if there are lots of them. By examining collocations in this way you can see common lexical and grammatical patterns of co-occurrence.

Options
You may compute a concordance with or without collocates (minimal processing): without is slightly quicker and will take up less room on your hard disk. The default is to compute with collocates. The number of collocates stored will depend on the collocation horizons. You can re-compute collocates after editing your concordance. If you want to filter your collocate list, use a match list or stop list. You can re-sort a collocate list in a variety of ways. You can see the strength of relationship between the word and the search-word which the concordance was based on. Collocates can be viewed after the concordance has been computed.

Technical Note
The literature on collocation has never distinguished very satisfactorily between collocates which we think of as "associated" with a word (letter - stamp) on the one hand, and on the other, the words which do actually co-occur with the word (letter - my, this, a, etc.). We could call the first type "coherence collocates" and the second "neighbourhood collocates" or "horizon collocates". It has been suggested that to detect coherence collocates is very tricky, as once we start looking beyond a horizon of about 4 or 5 words on either side, we get so many words that there is more noise than signal in the system. KeyWords allows you to study Associates, which are a pointer to "coherence collocates". Concord will supply "neighbourhood collocates". WordList and Concord also allow you to study relationships between words.

See also: collocation display, collocation settings, collocation relationship, relationships between words.
Full lemma processing, case sensitive
These are only relevant if your word list has any lemmatised entries, or it is a case-sensitive word list and you wish processing to respect case-sensitivity.

Relation statistic
Choose which type of relation you wish to compute. The default is Specific Mutual Information, but in the screenshot Z score has been chosen.

Column for relation
The default is "Total". If you choose Total you're computing the relationship across the current set of collocation horizons. If you prefer to examine the relationship at only one position instead, you may:

See also: Collocation, Collocate display, Mutual Information

7.9.4 Display
The collocation display initially shows the collocates in frequency order. Beside each word and the search-word which the concordance was based on, you'll see the
strength of relationship between the two (or 0.000 if it hasn't yet been computed). Then comes the total number of times it co-occurred with the search word in your concordance, and a total for Left and Right of the search-word. Then a detailed break-down, showing how many times it cropped up 5 words to the left, 4 words to the left, and so on up to 5 words to the right. The centre position (where the search word came) is shown with an asterisk. The number of words to left and right depends on the collocation horizons.

The numbers are:
the total number of times the word was found in the neighbourhood of the search word
the total number of times it came to the left of the search-word
the total number of times it came to the right of the search-word
a set of individual frequencies to the left of the search word (5L, i.e. 5 words to the left, 4L .. 1L)
a Centre column, representing the search-word
a set of individual frequencies to the right of the search word (1R, 2R, etc.)

The number of columns will depend on the collocation word horizons. With 5,5 you'll get five columns to the left and 5 to the right of the search word. So you can see exactly how many times each word was found in the general neighbourhood of the search word and how many times it was found exactly 1 word to the left or 4 words to the right, for example. The most frequent will be signalled in the most frequent collocate colour (default = red). In the screenshot below, differences comes 44 times in total but 39 of these are in position L1.

The screenshot above shows collocation results for a concordance of BETWEEN/AMONG sorted by the Relation column, where items like differentiate, difference etc. are found to be most strongly related to between. Further down the listing, some links concerning among (growing, refugees) are to be seen.
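As a rough sketch of what such a break-down involves (illustrative only: the tokenisation and example lines are assumptions, and WordSmith's sentence-break, lemmatisation and relation-statistic options are not reproduced), collocates can be tallied by their position relative to the search-word:

from collections import defaultdict

def collocate_positions(lines, node, horizon=5):
    # table[word][offset] = frequency; offsets -5..-1 = left, 1..5 = right
    table = defaultdict(lambda: defaultdict(int))
    for line in lines:
        words = line.lower().split()
        for i, w in enumerate(words):
            if w != node:
                continue
            for off in range(-horizon, horizon + 1):
                if off == 0 or not 0 <= i + off < len(words):
                    continue
                table[words[i + off]][off] += 1
    return table

lines = [
    "there is little difference between the two maps",
    "the difference between theory and practice",
    "shared between the many refugees",
]
for word, offsets in collocate_positions(lines, "between").items():
    print(word, "total", sum(offsets.values()), dict(offsets))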
The frequency display can be re-sorted, and you can recalculate the collocates if you zap entries from the concordance or change the horizons. You can also highlight any given collocate in your concordance display.

See also: Word Clouds, Collocation, Collocation Relationship, Collocates and Lemmas, Mutual Information

7.9.5 collocates and lemmas
In the following case a lemma list was used and lemma search specified, with a concordance on the word abandon, with these results showing which form of the lemma was used in the Set column.
In the collocate window below, the red line in row 1 indicates that the 140 cases of ABANDON include other forms such as 78 cases of ABANDONED and 19 of ABANDONING (greyed out below). The red mark by BE (row 5) shows that this row gives collocation numbers covering all forms of BE such as WAS, WERE etc. Similarly, HAVE and A are lemmatised in this screenshot. Thus, for your search-word and its variants you can see detailed frequencies, but its collocates, though they do get lemmatised, do not show you the variant forms or any specific frequencies.

7.9.6 collocate follow

The point of it
The idea (from Paul Raper) is to be able to follow up a collocate by requesting a new concordance based on it, in the same text files as selected for the collocate. This aids exploration of related words.

How to do it
Here is an example, where there is a collocate list relating to BEERS. Select a word of interest, such
as KEG, and select Follow collocate in the Compute menu. WordSmith starts up a new Concord window with a search on KEG.
The search is carried out on the most recently selected text files (selected using the file-choose window or by reading in a saved concordance).

7.9.7 collocate highlighting in concordance

The point of it...
The idea is to be able to see a selected collocate highlighted in the concordance. In this example, the texts were Shakespeare plays and the search word was love. One of the collocates is know, occurring a total of 50 times, with the most frequent position 4 words to the left of love. Double-clicking the 14 in the L4 column to the right of know, we see this in the concordance:
We have brought to the top of the concordance those lines which contain know in position L4.

How to do it
In a collocates window or a patterns window, simply double-click the item you wish to highlight. Or select it and choose View | Highlight selected collocate. In the collocates window, what you get depends on where you click:
the Word column or the Total column: all instances of the word
Total Left: those to the left (33 in the case of know above)
Total Right: those to the right (17)
otherwise: those in that column only

To get rid of the highlighting, re-sort in a different way or choose the menu item View | Refresh.

7.9.8 collocate settings
To set collocation horizons and other settings, in the main WordSmith Controller menu at the top, choose Concord Settings. Collocates are computed case-insensitively (so my in the concordance line will be treated like My). If you don't want certain collocates such as THE to be included, use a stop-list. You can lemmatise (join related forms like SPEAK -> SPEAKS, SPOKE, SPOKEN) using a
lemma list file.

Minimum Specifications
The minimum length is 1, and the minimum frequency is 1 (the default is 10). You can specify here how frequently a word must have appeared in the neighbourhood of the Search Word. Words which only come once or twice are less likely to be informative, so specifying 5 will only show a collocate which comes 5 or more times in the neighbouring context. Similarly, you can specify how long a collocate must be for it to be stored in memory, e.g. 3 letters or more would be 3.

Horizons
Here you specify how many words to left and right of the Search Word are to be included in the collocation search: the size of the "neighbourhood" referred to above. The maximum is 25 left and 25 right. Results will later show you these in separate columns so you can examine exactly how many times a given collocate cropped up, say, 3 words to the left of your Search Word. The most frequent will be signalled in the most frequent collocate colour (default = red).

Breaks
These are visible in the Controller Concord Settings, which you will see in the bottom right corner of the screen. When the collocates are computed, if the setting is to stop at sentence breaks, collocates will be counted within the above horizons but taking sentence breaks into account. For example, if a concordance line contains "source, per pointing integration times, respectively. However, when we compared these two maps" and the search-word is however, only "when we compared these two" will be used for collocates, because there is a sentence break to the left of the search word. If the setting is "stop at punctuation", then nothing will come into the collocate list for that line (because there is a break more major than punctuation to the left of it, and no word to the right of the search-word before a punctuation symbol).

stop at end of text: end of text is by default assumed to be the end of the text file.
stop at heading or section: this works by recognising ends of heading or section, which you can specify in the text format box (language settings):
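Purely as an illustration of the "stop at sentence breaks" idea (a sketch under simplifying assumptions; real sentence segmentation and WordSmith's own behaviour are more involved), the collocate window on each side of the search-word is cut back to the sentence containing it:

import re

def collocate_window(line, node, horizon=5):
    # keep only the sentence containing the node, then apply the horizons
    for sentence in re.split(r"(?<=[.!?])\s+", line):
        words = [w.strip(".,!?").lower() for w in sentence.split()]
        if node in words:
            i = words.index(node)
            return words[max(0, i - horizon):i], words[i + 1:i + 1 + horizon]
    return [], []

line = ("source, per pointing integration times, respectively. "
        "However, when we compared these two maps")
print(collocate_window(line, "however"))
# nothing usable on the left (sentence break), five words on the right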
7.9.9 re-sorting: collocates

The point of it...
To find sub-patterns of collocation, so as to understand more fully the company your search-word keeps -- to home in, for example, on the ones in L1 or R1 position. Here the collocates of COULD in some Jane Austen texts show how negatives crop up a lot in R1 position.

How to do it...
Just press the header: the frequency-ordered collocation display can be re-sorted to reveal the frequencies sorted by their
total frequencies overall (the default), by the left or right frequency total, or by any individual frequency position. Just press the header of a column to sort it. Press again to toggle the sort between ascending and descending. You can also get the concordance lines sorted so as to highlight specific collocates, as in the case of the 70 cases of NEVER in R1 position in the screenshot.

Word Clouds
You can also get a word cloud of your sorted column. In the screenshot below, a concordance on cash generated these R1 collocates (with most function words eliminated using a stoplist), and these data fed straight into a word cloud.
In the word cloud, the mouse hovered over the word accounting, so the details of that word are shown to the right.

See also: Collocation, Collocation Display, Collocation Horizons, Patterns, Word Clouds.

7.10 dispersion plot

The point of it...
This shows where the search word occurs in the file which the current entry belongs to. That way you can see where mention is made most of your search word in each file. It is another case where the aim is to promote the noticing of linguistic patterning.

What you see
The plot shows:
File: source text file-name
Words: number of words in the source text
Hits: number of occurrences of the search-word
per 1,000: how many occurrences per 1,000 words
Dispersion: the plot dispersion value
Plot: a plot showing where they cropped up, where the left edge of the plot represents the beginning of the text file ("Once upon a time", for example) and the right edge is at the end ("happily ever after" -- though not in the case of Romeo and Juliet).

Here we see a plot of "O" and another of "AH" from the play Romeo and Juliet. They are on separate lines because there were 2 search-words. There are more "O" exclamations than "AH"s. As the status bar says, you can get the word numbers for the plot by double-clicking the plot area. Using View | Ruler, you can switch on a "ruler" splitting the display into segments. The plot below is of one search-word (beautiful) in lots of texts.
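The numbers behind such a plot are simple to derive; here is a minimal sketch for one text (illustrative only -- the tokenisation is an assumption and the plot dispersion statistic itself is not computed here):

def plot_data(text, node):
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    hits = [i for i, w in enumerate(words) if w == node]
    per_1000 = 1000 * len(hits) / len(words) if words else 0.0
    # each hit as a fraction of the text: 0.0 = start, 1.0 = end
    positions = [i / (len(words) - 1) for i in hits] if len(words) > 1 else []
    return len(words), len(hits), per_1000, positions

text = "O Romeo, Romeo! wherefore art thou Romeo? Deny thy father. O speak again"
print(plot_data(text, "o"))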
The status-bar gives details of the highlighted text.

Multiple Search-words or Texts
If there are 2 or more search-words or texts, you will see something like this, where the File column supplies the file-name and the search-word in that order. If you want it with the search-word first, go to the Concord settings in the Controller, What you see, click here, and re-sort the File list:
Double-click to see the source text
Just double-click in the File column.

Uniform view
There are two ways of viewing the plot: the default, where all plotting rectangles are the same length, or Uniform Plot (where the plot rectangles reflect the original file size -- the biggest file is longest). Change this in the View menu at the top. Here is the same one with Uniform plot. The blue edge at the right reflects the file size in each case.
If you don't see as many marks as the number of hits, that'll be because the hits came too close together for the amount of screen space in proportion to your screen resolution. You can stretch the plot by dragging the top right edge of it. You can export the plot using Save As and can get your spreadsheet to make graphs etc., as explained here. Each plot window is dependent on the concordance from which it was derived: if you close the original concordance down, the plot will disappear. You can Print the plot. There's no Save option for the plot alone, but you can of course save the concordance itself. You can Copy to the clipboard (Ctrl+C) and then put it into a word processor as a graphic, using Paste Special.

Advanced plots
When you first compute a concordance, the plot will assume you want a dispersion plot of each text file on a separate line and each different search-word on a separate line, as seen above. If you have more than one text file or search-word, when you choose the Compute | Plot menu item afterwards you will get a chance to merge your plots and omit some text files or search-words. A first view of the plot settings may resemble this: all the files have by default been sorted into separate sets, and so have all the search-words. The red colour indicates files or search-words which have been included in each list of sets at the right.
Now if you Clear them, you can either select and drag, or select and press the central button, to get your preferred selections. (The button showing a green funnel will put all into one set; the other one will use one set for each, by the way.)
Here is a set of preferences with lots of files and two search-words, giving results like this:
See also: plot and ruler colours, plot dispersion value.

7.11 concordancing on tags

The point of it...
Suppose you're interested in identifying structures of a certain type (as opposed to a given word or phrase), for example sequences of Noun+Noun+Noun. You can type in the tags you want to concordance on (with or without any words).

How to do it...
In Concord's search-word box, type in the tags you are interested in. Or define your tags in a tag-file.

Examples
1. a singular-noun tag followed by table finds table as a singular noun (as opposed to table as a verb)
2. two singular-common-noun tags, each followed by *, will find any sequence of two singular common nouns in the BNC Sampler.
Note that example 1 will find table if your text is tagged with < and > symbols, or [w NN1]table if you have specified [ and ] as tag symbols. There are some more examples under Search word or phrase.

It doesn't matter whether you are using a tag file or not, since WordSmith will identify your tags automatically. (But not by magic: of course you do need to use previously tagged text to use this function.) In example 2, the asterisks are because in the BNC the tags come immediately before the word they refer to: if you forgot the asterisk, Concord would assume you wanted a tag with a separator on either side.

Are you concordancing on tags?
If you are asked this and your search-word or phrase includes tags, answer "Yes" to this question. If not, your search word will get " " inserted around each < or > symbol in it, as explained under
Search Word Syntax.

Case Sensitivity
Tags are only case sensitive if your search-word or phrase is. Search words aren't (by default). So in example 1, you will retrieve Table, table and TABLE if used as nouns (but nothing at all if no tags are in your source texts).

Hide Tags?
After you have generated a concordance you may wish to hide the mark-up. See the View menu for this.

See also: Overview of Tags, Showing Nearest Tags in Concord, Handling Tags, Search word or phrase, Viewing the Tags, Types of Tag, Using Tags as Text Selectors

7.11.1 nearest tag
Concord allows you to see the nearest tag, if you have specified a tag file which teaches WordSmith Tools what your preferred tags are. Then, with a concordance on screen, you'll see the tag in one of the columns of the concordance window.

The point of it...
The advantage is that you can see how your concordance search-word relates to marked-up text. For example, if you've tagged all the speech by Robert as [Rob] and Mary as [Mary], you can quickly see, in any concordance involving conversation between Mary, Robert and others, which ones came from each of them. Alternatively, you might mark up your text with tags for its sections, and Nearest Tag will show each line like this:

1 ... could not give me the time ...
2 ... Rosemary, give me another ...
3 ... wanted to give her the help ...
4 ... would not give much for that ...

To mark up text like this, make up a tag file with your sections and label them as sections, giving each section tag the description "section", as in these examples:
/description "section"
/description "section"
/description "section"
or, if you want to identify the speech of all the characters in a play and have a list of the characters, marked up appropriately in the text file, something like this:
/description "section"
/description "section"
/description "section"

In cases using "section", Nearest Tag will find the section, however remote in the text file it may be. Without the keyword "section", Nearest Tag shows only the current context within the span of text saved with each concordance line.

You can sort on the nearest tags. In the shot below, a concordance of such has been computed using BNC text. Some of the cases of such are tagged <PRP> (such as), others differently. The Tag column shows the nearest tag, and the whole list has been sorted using that column.

If you can't see any tags using this procedure, it is probably because the Tags to Ignore have the same format. For example, if Tags to Ignore has <*>, any tags such as <quote>, etc. will be cut out of the concordance unless you specify them in a tag file. If so, specify the tag file and run the concordance again.

You can also display tags in colour, or even hide the tags -- yet still colour the tagged word. Here is a concordance of this in the BNC text with the tags in colour, and here is a view showing the same data, with View | Hide Tags selected. The tags themselves are no longer visible, and only 6 types of tag have been chosen to be viewed in colour.

See also: Guide to handling the BNC, Making a Tag File, Overview of Tags, Handling Tags, Using Tags as Text Selectors, Tagged Texts, Types of Tag, Viewing the Tags

7.12 context word
You may restrict a concordance search by specifying a context word which either must or may not be present within a certain number of words of your search word. For example, you might have book* as your search word and hotel as the context word: this will only find book if hotel or hotels is nearby. Or you might have book as your search word and paper* as an exclusion criterion: this will only find book if paper or papers is not within your Context Search Horizons.

Context Search Horizons
The context horizons determine how far Concord must look to left and right of the search word when checking whether the search criteria have been met. The default is 5,5 (5 to left and 5 to right of the search word), but this can be set to up to 25 on either side. 0,2 would look only to the right, within two words of the search word. In this example the search-word is beautiful and the context word is lady, to be sought either left or right of beautiful.

Syntax is like that of the search word or phrase:
* means disregard the end of the word, and can be placed at either end of your context word
== means case sensitive
/ separates alternatives
You can specify up to 15 alternatives within an 80-character overall limit. If you want to use *, ?, ==, ~, :\ or / as a character in your search word, put it in double quotes, e.g. "*".

In line 14, the search-word and the context-word are in separate sentences. To avoid this, specify a suitable stop as shown here, and with the same settings you will get results like these:
If you have specified a context word, you can re-sort on it. Also, the context words will be in their own special colour.

Note: the search only takes place within the current concordance line, with the number of characters defined as characters to save. That is, if for example you choose search horizons of 25L and 25R, but only 1000 characters are saved in each line, there might not be 25 words on either side of the search-word to examine when seeking the context word or phrase if there was extensive mark-up as well.

7.13 editing concordances

The point of it...
You may well find you have got some entries which weren't what you expected. Suppose you have done a search for SHRIMP*/PRAWN* -- you may find a mention of Del Shrimpton in the listing. It's easy to clean up the listing by simply pressing Del on each unwanted line. (Do a sort on the search word first so as to get all the Shrimptons next to each other.) The line will turn a light grey colour. Pressing Ins will restore it, if you make a mistake. To delete or restore ALL the lines from the current line to the bottom, press the grey - key or the grey + key by the numeric keypad. When you have finished marking unwanted lines, you can choose zap (Ctrl+Z) to remove the deleted lines.

If you're a teacher you may want to blank out the search words: to do so, press the spacebar. Pressing the spacebar again will restore it, so don't worry!

7.13.1 remove duplicates

The problem
Sometimes one finds that text files contain duplicate sections, either because the corpus has become corrupted through being copied numerous times onto different file-stores, or because the texts were not edited effectively, e.g. a newspaper has several different editions in the same file. The result can sometimes be that you get a number of repeated concordance lines.

Solution
If you choose Edit | Remove Duplicates, Concord goes through your concordance lines and if it finds any two where the stored concordance lines are identical -- regardless of the filename, date etc. -- it will mark one of these for deletion. That is, it checks all the "characters to save" to see whether the two lines are identical. If you set this to 150 or so it is highly unlikely that false duplicates will be identified, since every single character, comma, space etc. would have to match.

Check before you zap...
At the end it will sort all the lines so you can see which ones match each other before you decide finally to zap the ones you really don't want.

7.14 patterns
When you have a collocation window open, one of the tab windows shows "Patterns". This will show the collocates (words adjacent to the search word), organised in terms of frequency within each column. That is, the top word in each column is the word most frequently found in that position. The second word is the second most frequent.

In R1 position (one word to the right of the search-word love) there seem to be both intimate (thee) and formal (you) pronouns associated with love in Shakespeare. And looking at L1 position it seems that speakers talk more of their love for another than of another's love for them. The minimum frequency and length for one of the words to be shown at all is the minimum frequency/length for collocates.
The point of it...
The effect is to make the most frequent items in the neighbourhood of the search word "float up" to the top. Like collocation, this helps you to see lexical patterns in the concordance. You can also highlight any given pattern collocate in your concordance display.

7.15 re-sorting

How to do it...
Sorting can be done simply by pressing the top row of any list, or by pressing F6, or by choosing the menu option.

The point of it...
The point of re-sorting is to find characteristic lexical patterns. It can be hard to see overall trends in your concordance lines, especially if there are lots of them. By sorting them you can separate out multiple search words and examine the immediate context to left and right. For example you may find that most of the entries have "in the" or "in a" or "in my" just before the search word -- sorting by the second word to the left of the search word will make this much clearer. Sorting is by a given number of words to the left or right (L1 [= 1 word to the left of the search word], L2, L3, L4, L5, R1 [= 1 to the right], R2, R3, R4, R5), on the search word itself, the context word (if one was specified), the nearest tag, the distance to the nearest tag, a set category of your own choice, or original file order (file).

Main Sort
The listing can be sorted by three criteria at once. A Main Sort on Left 1 (L1) will sort the entries according to the alphabetical order of the word immediately to the left of the search word. A second sort (Sort 2) on R2 would re-order the listing by tie-breaking, that is, only where the L1 words (immediately to the left of the search word) matched exactly, and would place these in alphabetical order of the words 2 to the right of the search word. For very large concordances you may find the third sort (Sort 3) useful: this is an extra tie-breaker in cases where the second sort matches. For many purposes tie-breaking is unnecessary, and it will be ignored if the "activated" box is not checked.

default sort
This is set in the main controller settings.

sorting by set (user-defined categories)
You can also sort by set, if you have chosen to classify the concordance lines according to your own scheme, using letters from A to Z or a to z or longer strings. The sort will put the classified lines first, in category order, followed by any unclassified lines. See Nearest Tag for sorting by tags, and see colour categories for a more sophisticated way of using the Set column.

other sorts
As the screenshot below shows, you can also sort by a number of other criteria, most of these accessible simply by clicking on their column header. The "contextual frequency" sort means sorting on the average ranking frequency of all the words in each concordance line which don't begin with a capital letter. For this you will be asked to specify your reference corpus wordlist. The result will be to sort those lines which contain "easy" (highly frequent) words at the top of the list.

All
By default you sort all the lines; you may however type in for example 5-49 to sort those lines only.

Ascending
If this box is checked, sort order is from A to Z, otherwise it's from Z to A.
See also: WordList sort, KeyWords sort, Choosing Language

7.15.1 re-sorting: dispersion plot
This automatically re-sorts the dispersion plot, rotating through these options:
alphabetically (by file-name)
in frequency order (in terms of hits per 1,000 words of running text)
text order: by first occurrence in the source text(s)
by range: the gap between first and last occurrence in the source text.

See also: Dispersion Plot

7.16 saving and printing
You can save the concordance (and its collocates & other dependent results if these were stored when the concordance was generated) either as a Text File (e.g. for importing into a word processor) or as a file of results which you can subsequently Open (in the main menu at the top) to view again at a later date. When you leave Concord you'll be prompted to save if you haven't already done so. Saving a concordance allows you to return later and review collocates, dispersion plots, clusters.

You can Print using the Windows printer attached to your system. You will get a chance to specify the number of pages to print. The font will approximate the one you can see on your screen. If you use a colour printer or one with various shades of grey, the screen colours will be copied to your printer. If it is a black-and-white printer, coloured items will come in italics if your printer can do italics. Concord prints as much of your concordance plus associated details as your printing paper settings allow, the edges being shown in Print Preview.

If you choose to save as text, and if you have (optionally) marked out the search-word and/or context word in the Controller like this, whatever you have put will get inserted in the .txt file. In the example, doing a search through 23 Dickens texts for last night with drive as the context word, a concordance looking like this produced this in the txt file:

rry, tell him yourself to give him no restorative but air, and to remember my words of last night, and his promise of last night, and <CW>drive away!" The Spy withdrew, and Carton seated himself at the table, resting his forehead on his h

See also: using the clipboard to get a concordance into Word or another application.

7.17 sounds & video

The point of it
Suppose you do a concordance of "elephant" and want to hear how the word is actually spoken in context. Is the last vowel a schwa? Does the second vowel sound like "i" or "e" or "u" or a schwa?

How to do it...
If you have defined tags which refer to multimedia files, and if there are any such tags in the "tag-context" of a given concordance line, you can hear or see the source multimedia. The tag will be parsed to identify the file needed, if necessary downloading it from a web address, and then played. In this screenshot we see a concordance where there is a tag inserted periodically in the text file. To play the media file, choose File | Play media file, or double-click the Tag column.

Video files can be played if the free VLC Media Player is installed (see http://www.vlcapp.com/).
The next screenshot below shows a concordance line with, in the Nearest Tag column, mark-up saying that the source text and the video file have the same file-name (except that the latter ends .AVI and the former .TXT). A double-click on the Tag (yellow highlighted cell) brought up the video screen you can see below, and that has now played to the tenth second, then paused. You can see in the case of this particular video that there is a sub-title with the same words that are in the concordance above (though there is no guarantee you will see sub-titles for all videos). If you build up a collection of TED talks like these, where the same video in English has transcripts in several languages, you can get to see the different translations by choosing View | Show related txts in the menu.

See also: Multi-media Tag syntax, Handling Tags, Obtaining Sound and Video files, Making a Tag File, Types of Tag, Showing Nearest Tags in Concord, Tag Concordancing, Tags in WordList, Viewing the Tags, Using Tags as Text Selectors

7.17.1 obtaining sound and video files

Sources of sound and video files
WordSmith does not provide or include corpora. However, there are specialised corpora such as NECTE, MICASE and ICE, and then there are publicly available sources such as the TED Talks. You are expected to respect copyright provisions in all cases. There is a lot of useful advice at the TED Open Translation Project, where you will find transcripts. These text files in English (.en), Spanish (.es), Italian (.it) and Japanese (.ja) were downloaded from there and later converted using the Text Converter.

If you wish to use a transcript and sound file format which is incompatible with the syntax described here, please contact us.

7.18 summary statistics
The idea is to be able to break down your concordance data. For example, you've just done a concordance which has given you lots of singulars and lots of plurals of consequence, and you want to know how many there are of each. Choose Summary Statistics in the Compute menu.

The searches window will at first contain a copy of what you typed in when you created the concordance. To distinguish between singular and plural, change that and press Count; assuming that the search column has Concordance selected, you will get something like this:

Advanced Summary Statistics features

Breakdown
The idea here is to be able to break down your results further, using another category in your existing concordance data, such as the files the data came from. In our example, we might want to know, for consequence and consequences, how many of the text files contained each of the two forms. To generate the breakdown, activate it and choose the category you need. The results window will now show something like this, where it is clear that the singular consequence came 20 times in 20 different files, the first being file A3A.TXT. Further down you will find the results for consequences,
which appeared 103 times in 74 files, and that in the first of these, A1E.TXT, it came twice.

Cumulative column
See the explanation for WordList.

Load button
See the explanation for count data frequencies.

7.19 text segments in Concord
A concordance line brings with it information about which segment of the text it was found in. In the screenshot below, a concordance on year was carried out; the listing has been sorted by Heading Position -- in the top 2 lines, year is found as the 3rd word of a heading. The advantage of this is that it is possible to identify search-words occurring near sentence starts, near the beginning of sections, of headings, of paragraphs.
Page 222</p> <p><span class="badge badge-info text-white mr-2">238</span> 223 WordSmith Tools Manual When you have computed a concordance, the Concord button will have a red number (showing how many Concord windows are in use) and at the bottom of the screen you will see an icon ( ). Click that to see the list of files and their features. WHAT YOU GET and WHAT YOU SEE What you see There are 2 tabs for settings affecting What you get in the concordance and in the 220 display. There is a screenshot at Concord: viewing options showing the options under What you see . WHAT YOU GET Search Settings The search settings button lets you choose these settings: December, 2015. Page 223</p> <p><span class="badge badge-info text-white mr-2">239</span> 224 Concord Entries Wanted The maximum is more than 2 billion lines. This feature is useful if you're doing a number of searches and want, say, 100 examples of each. The 100 entries will be the first 100 found in the texts you have selected. If you search for more than 1 search-word (eg. book/ paperback ) , you will get 100 of book and 100 of paperback . entries near each other allows you to force Concord to skip hits which are too close to each other. If for example you set this to 0 or 1 and your text contains ... a lovely lovely day lovely . The default here is - then you will only get the first of these cases if searching for 1. (If you set it to 0 then you are only allowing one hit within any given word.) : this feature allows you to randomise the search. Here randomised entries Concord goes through the text files and gets the 100 entries by giving each hit a random chance of being selected. To get 100 entries Concord will have to have found around 450-550 hits with the settings shown below. You can set the randomiser anywhere from 1 in 2 to 1 in 1,000. 70 See also: reduce to N . December, 2015. Page 224</p> <p><span class="badge badge-info text-white mr-2">240</span> 225 WordSmith Tools Manual auto remove duplicates : removes any lines where the whole concordance entry matches another. (This can happen if you have a corpus where news stories get re-published in different editions by different newspapers.) Ignore punctuation between words : this allows a search for BY ITSELF to succeed where the text contains ... went by, itself Characters to save Here is where you set how many characters in a concordance line will be stored as text as the concordance is generated. The default and minimum is 1000. This number of characters will be saved when you save your results, so even if you subsequently delete the source text file you can still see some context. If you grow the lines more text will be read in (and 422 stored) as needed. There are examples here . December, 2015. Page 225</p> <p><span class="badge badge-info text-white mr-2">241</span> 226 Concord Save as text search-word or context-word marker : here you can also 211 . specify markers for your search-word and context-word Collocates Concord will compute collocates as well as the concordance, but you can set By default, 180 it not to if you like ( ). For further details, see Collocate Horizons or Minimal processing 179 Collocation The minimum frequency and length refer to the collocates to be shown in your listing. With the settings above, only collocates which occur at least 5 times and contain at least 1 188 . 
character will be shown as long as they don't cross sentence boundaries If separate search words is checked and you have multiple search-terms, then you get collocates distinguishing between the different search-terms. If you want them amalgamated, clear this check-box. Collocates relation statistic Mutual Choose between Specific Mutual Information, MI3, Z Score, Log Likelihood. See 289 for examples of how these can differ. Information Display WHAT YOU SEE December, 2015. Page 226</p> <p><span class="badge badge-info text-white mr-2">242</span> 227 WordSmith Tools Manual 220 . The options are explained at Concord: viewing options Columns The list offers all the standard columns: you may uncheck ones you Columns to show/hide normally do not wish to see. This will only affect newly computed KeyWords data: earlier data uses the column visibility, size, colours etc already saved. They can be altered using 87 menu option at any time. the Layout 211 158 187 , Concord Help Contents , Collocation Settings . See also: Concord Saving and Printing December, 2015. Page 227</p> <p><span class="badge badge-info text-white mr-2">243</span> WordSmith Tools Manual KeyWords Section VIII</p> <p><span class="badge badge-info text-white mr-2">244</span> 229 WordSmith Tools Manual 8 KeyWords 8.1 purpose This is a program for identifying the "key" words in one or more texts. Key words are those whose 235 . frequency is unusually high in comparison with some norm. Click here for an example The point of it... Key-words provide a useful way to characterise a text or a genre. Potential applications include: language teaching, forensic linguistics, stylistics, content analysis, text retrieval. The program compares two pre-existing word-lists, which must have been created using the WordList tool. One of these is assumed to be a large word-list which will act as a reference file. The other is the word-list based on one text which you want to study. The aim is to find out which words characterise the text you're most interested in, which is automatically assumed to be the smaller of the two texts chosen. The larger will provide background data for reference comparison. 251 247 237 Key-words and links between them can be plotted , made into a database , and grouped 241 . according to their associates Online step-by-step guide showing how 8.2 index Explanations 229 What is the Keywords program and what's it for? 245 How Key Words are Calculated 230 2-Word list Analysis 252 Key words display 251 Key words plot 249 Key words plot display 247 Plot-Links 39 Batch Analyses 237 Database of Key Key-Words 241 Associates 243 Clumps 437 Limitations Settings and Procedures 234 Calling up a Concordance December, 2015. Page 229</p> <p><span class="badge badge-info text-white mr-2">245</span> 230 KeyWords 232 Choose Word Lists 60 Colours 238 Database 431 Folders 78 Fonts 439 Keyboard Shortcuts 80 Printing 252 Re-sorting 101 Exiting Tips 244 KeyWords advice 127 Window management Definitions 425 General Definitions 236 Key-ness 240 Key key-word 243 Associate 2 See also : WordSmith Main Index 8.3 ordinary two word-list analysis KeyWords analysis. It compares the one text file (or corpus) you're chiefly The usual kind of interested in, with a reference corpus based on a lot of text. In the screenshot below we are deer hunter story interested in the key words of as the reference corpus BNC and we're using to compare with. Choose Word Lists In the dialogue box you will choose 2 files. 
The text file in the box above and the reference corpus file in the box below. December, 2015. Page 230</p> <p><span class="badge badge-info text-white mr-2">246</span> 231 WordSmith Tools Manual 245 254 See also How Key Words are Calculated , KeyWords Settings December, 2015. Page 231</p> <p><span class="badge badge-info text-white mr-2">247</span> 232 KeyWords 8.4 choosing files Current Text word list In the upper box, choose a word list file. To choose more than 1 word list file, press Control as you click to select non-adjacent lists, or Shift to select a range. This box determines which word-list(s) you're going to find the key words of. Reference Corpus word list 447 List. (This can be set permanently in the The the box below, you choose your Reference Corpus main Controller Settings). No word lists visible If you can't see any word lists in the displays, either change folders until you can, or go back to the WordList tool and make up at least 2 word lists: this procedure requires at least two before it can make a comparison. Swap The text you're studying must be at the top. If you get them wrong, exchange them. December, 2015. Page 232</p> <p><span class="badge badge-info text-white mr-2">248</span> 233 WordSmith Tools Manual Advanced: working with a batch file Click the browse button: and choose the batch .zip file and we are ready to make a batch: that one 2010.zip contains many thousands of word lists. December, 2015. Page 233</p> <p><span class="badge badge-info text-white mr-2">249</span> 234 KeyWords 8.5 concordance and With a key word or a word list list on your screen, you can choose Compute to call up a concordance of the currently selected word(s). The concordance will search for the same word in the original text file that your key word list came from. The point of it... is to see these same words in their original contexts. December, 2015. Page 234</p> <p><span class="badge badge-info text-white mr-2">250</span> 235 WordSmith Tools Manual example of key words 8.6 You have a collection of assorted newspaper articles. You make a word list based on these articles, and see that the most frequent word is the. Among the rather infrequent words in the list hopping , modem, squatter, grateful , etc come examples like . You then take from it a 1,000 word article and make a word list of that. Again, you notice that the most frequent word is the . So far, not much difference. You then get KeyWords to analyse the two word lists. KeyWords reports that the most "key" squatter, police, break age, council, sued, Timson, resisted, community words are: . These "key" words are not the most frequent words (which are those like ) but the words which the are most unusually frequent in the 1,000 word article. Key words usually give a reasonably good clue to what the text is about. Here is an example from the play Othello. 315 See also: word-lists with tags as prefix . 8.7 keyness 8.7.1 p value (Default=0.000001) p value is that used in standard chi-square and other statistical tests. This value ranges from 0 The to 1. A value of .01 suggests a 1% danger of being wrong in claiming a relationship, .05 would give a 5% danger of error. In the social sciences a 5% risk is usually considered acceptable. In the case of key word analyses, where the notion of risk is less important than that of selectivity, you may often wish to set a comparatively low p value threshold such as 0.000001 (one in 1 million) (1E-6 in scientific notation) so as to obtain fewer key words. 
8.7.2 key-ness definition
The term "key word", though it is in common use, is not defined in Linguistics. This program identifies key words on a mechanical basis by comparing patterns of frequency. (A human being, on the other hand, may choose a phrase or a superordinate as a key word.)
A word is said to be "key" if
a) it occurs in the text at least as many times as the user has specified as a Minimum Frequency;
b) its frequency in the text, when compared with its frequency in a reference corpus, is such that the statistical probability as computed by an appropriate procedure is smaller than or equal to a p value specified by the user.
Positive and negative keyness
A word which is positively key occurs more often than would be expected by chance in comparison with the reference corpus. A word which is negatively key occurs less often than would be expected by chance in comparison with the reference corpus.
Typical key words
KeyWords will usually throw up three kinds of words as "key". First, there will be proper nouns. Proper nouns are often key in texts, though a text about racing could wrongly identify as key the names of horses which are quite incidental to the story. This can be avoided by specifying a higher Minimum Frequency.
Second, there are key words that human beings would recognise. The program is quite good at finding these, and they give a good indication of the text's "aboutness". (All the same, the program does not group synonyms, and a word which only occurs once in a text may sometimes be "key" for a human being. And KeyWords will not identify key phrases unless you are comparing word lists based on word clusters.)
Third, there are high-frequency words like "because" or "shall" or "already". These would not usually be identified by the reader as key. They may be key indicators more of style than of "aboutness". But the fact that KeyWords identifies such words should prompt you to go back to the text, perhaps with Concord (just choose Compute | Concordance), to investigate why such words have cropped up with unusual frequencies.
See also: How Key Words are Calculated, Definitions, Definition of Key Key-Word, KeyWords Settings.

8.7.3 thinking about keyness
Choosing a reference corpus
In general the choice does not make a lot of difference if you have a fairly small p value (such as 0.000001). But it may help to think using this analogy. Different reference corpora may give different results. Suppose you have a method for comparing objects and you take a particular apple out of your kitchen to compare, using it
A) with a lot of apples in the green-grocer's shop,
B) with all the fruit in the green-grocer's shop,
C) with a mixture of objects (cars, carpet, notebooks, fruit, elephants etc.).
With A) you will get to see the individual characteristics, e.g. perhaps your apple is rather sweeter than most apples. (But you won't see its "apple-ness", because both your apple and all the others in your reference corpus are apples.)
With B) you will see "apple-ness" (your apple, like all apples but unlike bananas or pineapples, is rather round and has a very thin skin) but might not see that your apple is rather sweet, and you won't get at its "fruitiness".
With C) you will get at the apple's fruity qualities: it is much sweeter and easier to bite into than cars and notebooks etc.
Keyness scores
Is there an important difference between a key word with a keyness of 50 and another of 500? Suppose you process a text about a farmer growing three crops (wheat, oats and chick-peas) and suffering from three problems (rain, wind, drought). If each of these crops is equally important in the text, and each of the three problems takes one paragraph to explain, the human reader may decide that all three crops are equally key and all three problems equally key. But in English these crop-terms and weather-terms vary enormously in frequency (chick-peas and drought being least frequent). WordSmith's KeyWords analysis will necessarily give a higher keyness value to the rarer words. So it is generally unsafe to rely on the order of key words in a key word list.

8.8 KeyWords database (default file extension .KDB)
The point of it...
The point of this database is that it will allow you to study the key words which recur often over a number of files. For example, if you have 500 business reports, each one will have its own key words. These will probably be of two main kinds. There will be key words which are key in one text but are not generally key (names of the firms and words relating to what they individually produce), and other, more general words (like consultant, profit, employee) which are typical of business documentation generally. Or you may find that I, you, should etc. come to the top if your text files are ones which are much more interactive than the reference corpus texts.
By making up a database, you can sort these out. The ones at the top of the list, when you view them, may be those which are most typical of the genre in some way. We might call the ones at the top "key key-words", and the list is at first ordered in terms of "key key-ness"; those at the bottom will only be key in a few text files. You can of course toggle it into alphabetical order and back again.
You can set a minimum number of files that each word must have been found to be key in, using KeyWords Settings | Database.
When viewing a database you will be able to investigate the associates of the key key-words. Under Statistics, you will also be able to see details of the key word files which comprise the database (file name and number of key words per file), together with overall statistics on the number of different types and the tokens (the total of all the key words in the whole database, including repeats).
See also: Creating a database, Definition of key key-word.

8.8.1 creating a database
To build a key words database, you will need a set of key word lists. For a decent-sized database, it is preferable to build it like this:
1. Make a batch of word lists.
2. Use this to make a batch of keyword lists. Set "faster minimal processing" on, so as not to waste time computing plots etc.
3. Now, in KeyWords, choose New | KW Database. This enables you to choose the whole set of key word files.
Note that making a database means that only positive key words will be retained.
In the Controller KeyWords settings you can make other choices:
Minimum frequency for database: if you set this to 5, you will only use for the database any key words which appear in 5 or more texts.
Min. KWs per text: if this is set to 10, any KeyWords results files which ended up with very few positive key words will be ignored.
See also: associates.

8.8.2 key key-word definition
A "key key-word" is one which is "key" in more than one of a number of related texts. The more texts it is "key" in, the more "key key" it is. This will depend a lot on the topic homogeneity of the corpus being investigated. In a corpus of City news texts, items like bank, profit, companies are key key-words, while computer will not be, though computer might be a key word in a few City news stories about IBM or Microsoft share dealings.
Requirements
To discover "key key words" you need a lot of text files (say 500 or more), ideally fairly related in their topics, which you make word lists of (it's much faster doing that in a batch); then you have to compute key word lists of each of those, all of which go into a database. It is all explained under creating a keywords database.
See also: How Key Words are Calculated, Definition of Key Word, Creating a Database, Definitions.

8.8.3 associates
"Associates" is the name given to key words associated with a key key-word.
The point of it...
The idea is to identify words which are commonly associated with a key key-word, because they are key words in the same texts as the key key-word is. An example will help. Suppose the word wine is a key key-word in a set of texts, such as the weekend sections of newspaper articles. Some of these articles discuss different wines and their flavours, others concern cooking and refer to using wine in stews or sauces, others discuss the prices of wine in a context of agriculture and diseases affecting vineyards. In this case, the associates of wine would be items like Chardonnay, Chile, sauce, fruit, infected, soil, etc.
The listing shows associates in order of frequency. A menu option allows you to re-sort them.
Settings
You can set a minimum number of text files for the association procedure, in the database settings.
Minimum texts: a setting of 3, for example, will only process those key key-words which appear in at least 3 text files.
Statistic: choose the mutual information statistic you prefer, apart from Z score, which uses a span (here the whole text is used).
Minimum strength: this will only show associates which reach at least the strength in the statistic set here, e.g. 3.000.
The most frequent associates appear in the right-hand column of the main keywords database window. To see the detailed associates, double-click your chosen term in the KW column or the Associates column.
See also: definition of associate, related clusters.
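As a rough illustration of the database logic just described, here is a small Python sketch, not WordSmith's implementation, that treats each text's key word list as a set, counts how many texts each word is key in ("key key-ness"), and lists associates as the other key words found in the same texts. The function names and toy data are invented for the example.

```python
from collections import Counter

# Toy data: each text mapped to the set of its (positive) key words.
keywords_by_text = {
    "weekend_01.txt": {"wine", "chardonnay", "fruit", "dry"},
    "weekend_02.txt": {"wine", "sauce", "stew", "recipe"},
    "weekend_03.txt": {"wine", "vineyard", "infected", "soil"},
    "weekend_04.txt": {"travel", "hotel", "beach"},
}

# "Key key-ness": the number of texts in which each word is key.
key_keyness = Counter(w for kws in keywords_by_text.values() for w in kws)

def associates(word, min_texts=2):
    """Other key words co-occurring (as key words) in texts where `word` is key."""
    if key_keyness[word] < min_texts:
        return Counter()
    co = Counter()
    for kws in keywords_by_text.values():
        if word in kws:
            co.update(kws - {word})
    return co

print(key_keyness.most_common(3))         # e.g. [('wine', 3), ...]
print(associates("wine").most_common())   # chardonnay, sauce, vineyard, ...
```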
It may or may not co-occur in proximity to key-word X. (A collocate would have to occur within a given distance of it, whereas an associate is "associated" by being key in the same text.) Guardian newspaper text, wine was found to be a key word For example, in a key-word database of 240 key key word in 25 out of 299 stories from the Saturday "tabloid" page, thus a in this section. The top associates of wine were: wines, Tim, Atk in, dry, le, bottle, de, fruit, region, chardonnay, red, . producers, beaujolais It is strikingly close to the early notion of "collocate". Association operates in various ways. It can be strong or weak, and it can be one-way or two-way. to and fro is one-way ( to is nearly always found near fro but it For example, the association between fro near to ). is rare to find 289 425 241 236 See also: Definition of Key Word , Associates , Definitions , Mutual Information keywords database related clusters 8.8.4 The idea is to be able to find any overlapping clusters in a key word database, e.g. where MY LORD is related to MY LORD YOUR SON. To achieve this, choose Compute | Associates . To clear the view, . Compute | Clusters 241 See also: associates 8.8.5 clumps 237 241 "Clumps" is the name given to groups of key-words associated with a key key-word . The point of it (1)... The idea here is to refine associates by grouping together words which are found as key in the same sub-sets of text files. The example used to explain associates will help. wine is a key key-word in a set of texts, such as the weekend sections of Suppose the word newspaper articles. Some of these articles discuss different wines and their flavours, others concern cooking and refer to using wine in stews or sauces, others discuss the prices of wine in a wine would context of agriculture and diseases affecting vineyards. In this case, the associates of be items like Chardonnay, Chile, sauce, fruit, infected, soil , etc. The associates procedure shows all such items unsorted. The clumping procedure, on the other hand, attempts to sort them out according to these different December, 2015. Page 243</p> <p><span class="badge badge-info text-white mr-2">259</span> 244 KeyWords uses. The reasoning is that the key words of each text file give a condensed picture of its "aboutness", and that "aboutnesses" of different texts can be grouped by matching the key word lists. Thus sets of key words can be clumped together according to the degree of overlap in the key word lexis of each text file. Two stages The initial clumping process does no grouping : you will simply see each set of key-words for 244 group clumps , you may simply join those you think belong each text file separately. To together (by dragging), or regroup with help by pressing . The listing shows clumps sorted in alphabetical order. You can re-sort by frequency (the number of times each key word in the clump appeared in all the files which comprise the clump). 244 243 See also: , regrouping clumps definition of associate regrouping clumps 8.8.5.1 How to do it You can simply join by dragging, where you think any two clumps belong together because of semantic similarity between their key-words. will inform you which two clumps match best. You'll see a list of the Or if you press KeyWords , words found only in one, a list of the words found only in the other, and (in the middle) a list of the words which match. It's up to you to judge whether the match is good enough to form a merged clump. If you aren't sure, press . 
Cancel If you do want to join them, press Join . want to join them and don't want KeyWords to suggest this pair again, If you're sure you don't . You can tell Skip press KeyWords to skip up to 50 pairs. To clear the memory of the items to be . skipped, press Clear Skip The point of it (2)... 417 (1997) shows how clumping reveals the different perceived roles of women in a set of Scott Guardian features articles. 243 clumps See also: KeyWords: advice 8.9 Don't call up a plot of the key words based on more than one text file. It doesn't make sense! 1. Anyway the plot will only show the words in the first text file. If you want to see a plot of a certain 191 . word or phrase in various different files, use Concord dispersion 2. There can be no guarantee that the "key" words are "key" in the sense which you may attach to "key". An "important" word might occur once only in a text. They are merely the words which are outstandingly frequent or infrequent in comparison with the reference corpus. Compare apples with pears, or, better still, Coxes with Granny Smiths. So choose your 3. 237 reference corpus in some principled way . The computer is not intelligent and will try to do whatever comparisons you ask it to, so it's up to you to use human intelligence and avoid comparing apples with phone boxes! December, 2015. Page 244</p> <p><span class="badge badge-info text-white mr-2">260</span> 245 WordSmith Tools Manual If it didn't work... 81 defined for each For the procedure to work, a number of conditions must be right: the language word list must be the same (that is, Mexican Spanish and Iberian Spanish count as the same but Iberian Spanish and Brazilian Portuguese count as different so could not be compared in this 315 process); each word list must have been sorted alphabetically in ascending order before the 315 comparison is made. (The program tries to ensure this, automatically.) Also, any prefixes or suffixes must match. 8.10 KeyWords: calculation The "key words" are calculated by comparing the frequency of each word in the word-list of the text you're interested in with the frequency of the same word in the reference word-list. All words which 120 appear in the smaller list are considered, unless they are in a stop list . If the occurs say, 5% of the time in the small word-list and 6% of the time in the reference corpus, it will not turn out to be "key", though it may well be the most frequent word. If the text concerns the anatomy of spiders, it may well turn out that the names of the researchers, and the items spider, , etc. may be more frequent than they would otherwise be in your reference corpus leg, eight (unless your reference corpus only concerns spiders!) To compute the "key-ness" of an item, the program therefore computes its frequency in the small word-list 297 in the small word-list the number of running words its frequency in the reference corpus 297 the number of running words in the reference corpus and cross-tabulates these. Statistical tests include: the c lassic chi-square test of significance with Yates correction for a 2 X 2 table 417 Lo g Likelihood test, which gives a better estimate of keyness, especially Ted Dunning's when contrasting long texts or a whole genre against your reference corpus. for more on these. See UCREL's log likelihood site A word will get into the listing here if it is unusually frequent (or unusually infrequent) in comparison with what one would expect on the basis of the larger word-list. 
Unusually infrequent key-words are called "negative key-words" and appear at the very end of your listing, in a different colour. Note that negative key-words will be omitted automatically from a 237 and a plot. keywords database Words which do not occur at all in the reference corpus are treated as if they occurred 5.0e-324 December, 2015. Page 245</p> <p><span class="badge badge-info text-white mr-2">261</span> 246 KeyWords times (0.0000000 and loads more zeroes before a 5) in such a case. This number is so small as not to affect the calculation materially while not crashing the computer's processor. 8.11 KeyWords clusters What is it? A KeyWords cluster, like a WordList cluster, represents two or more words which are found repeatedly near each other. However, a KeyWords cluster only uses key words. A screenshot will help make things clearer. This is a key words list based on a piece of transcript from a Wallace and Gromit film, using the BNC as the reference corpus. The clusters tab below shows us something like this: December, 2015. Page 246</p> <p><span class="badge badge-info text-white mr-2">262</span> 247 WordSmith Tools Manual GROMIT OH GROMIT The frequency 3 in the line means that there are 3 cases where the key-word OH in that text. is found within the current collocation span of means that there is typically one [.] intervening word or [..] two intervening words as in this case shown from the source text. Requirements The procedure is text-oriented. You can only get a keywords cluster list if there is exactly one and source text. Note that for this procedure sentence boundaries are not blocked, so Gromit Ah Oh intervening. can be considered to have one word 251 See also: Plot calculation . 8.12 KeyWords: links The point of it... is to find out which key-words are most closely related to a given key-word. 251 plot will show where each key word occurs in the original file. It also shows how many links A there are between key-words. What are links? December, 2015. Page 247</p> <p><span class="badge badge-info text-white mr-2">263</span> 248 KeyWords Links are "co-occurrences of key-words within a collocational span". An example is much easier to understand, though: elephant is key in a text about Africa, and that water is also a key word in the Suppose the word elephant and water occur within a span of 5 words of each other, they are said to be same text. If "linked". The number of times they are linked like this in the text will be shown in the Links window. The link spans (like collocation horizons) go from 1 word away to up to 25 words to left and right. 113 is 1 to 5 . The default What you see This is a key words list based on Romeo and Juliet, using all the 37 Shakespeare plays as the reference corpus. This Links window shows a number of key words followed by the number of linked types (11 here) the total number of hits of the key word ( ROMEO ) and then the individual linked key words. You can if you wish double-click in the Link ed KWs column and you will see the details listed: December, 2015. Page 248</p> <p><span class="badge badge-info text-white mr-2">264</span> 249 WordSmith Tools Manual has 11 linked words; it's linked 23 times with THOU , 15 times with O ROMEO , etc. A right-click menu lets you copy or print these details. Requirements The procedure is text-oriented. You can only get a keywords links list if there is exactly one source text. 251 plot listing Double-click on any word in the to call up a window which show the linked key- words. 
246 116 251 , Source Text , KeyWords clusters See also: Plot calculation 8.13 make a word list from keywords data to save your data as a word list (for later With a key word list on your screen, you can press comparison, etc. using WordList functions). 8.14 plot display The plot will give you useful visual insights into how often and where the different key words crop up 252 in the text. The plot is initially sorted to show which crop up more at the beginning (e.g. in the introduction) and then those from further in the text. The following screenshot shows KWs of the play Romeo and Juliet , revealing where each term Tybalt occurs. The name , for example, occurs in a main burst about half way through the text. December, 2015. Page 249</p> <p><span class="badge badge-info text-white mr-2">265</span> 250 KeyWords re-sorting 252 Click the header to the listing or use the menu option . The Key word column sorts re-sort alphabetically, the dispersion column sorts on the amount of dispersion (higher numbers mean the occurrences are more spread out); the keyness column is the original plot order, or you can sort on number of links with other KWs or on the number of hits found. plot data You can view the plot data as numbers by double-clicking. Here is the view if one double-clicks on the yellow area: The first column gives the word-numbers and the second the percentage of the way through the text. Right-click on this window to copy or print. December, 2015. Page 250</p> <p><span class="badge badge-info text-white mr-2">266</span> 251 WordSmith Tools Manual links 247 links This shows the total number of between the key-word and other key-words in the same 113 = 5,5). That is, how many times was each key- default text, within the current collocation span ( word found within 5 words of left or right of any of the other key-words in your plot. hits This column is here to remind you of how many occurrences there were of each key-word. When you have obtained a plot, you can then see the way certain words relate to others. To do this, look at the Links window in the tabs at the bottom, showing which other key words are most linked 247 to the word you clicked on. That is, which other words occur most often within the collocation horizons you've set. The Links window should help you gain insights into the lexical relations here. Each plot window is dependent on the key words listing from which it was derived. If you close that Save option because the plot comes from a key it. There's no Print down, it will disappear. You can 102 As . There's no save as text option because the Save , or words listing which you should Save plot has graphics, which cannot adequately be represented as text symbols, but you can Copy to 422 the clipboard (Ctrl+C) and then paste it into a word processor as a graphic. Alternatively, use option, which saves your plot data (each word is followed by the total Output | Data as Text File the number of words in the file, then the word number position of each occurrence). 441 The ruler in the menu ( ) allows you to see the plot divided into 8 equal segments if based on one text, or the text-file divisions if there is more than one. 251 446 See also: Key words plot , plot dispersion value 8.14.1 plot calculation The point of it... is to see where the key words are distributed within the text. Do they cluster around the middle or near the beginning of the text? How it's done This will calculate the inter-relationships between all the key words identified so far, excluding any 129 . 
which you have deleted or zapped 1. it does a concordance on the text finding all occurrences of each key word; 2. it then works out which of each of the other key words appear within the collocation horizons (set in Settings). It uses the larger of the two horizons. 3. it then plots all the words showing where each occurrence comes in the original file (with a "ruler" showing how many words there are in each part of the file). 4. it computes how many other key-words co-occurred with it, within the current collocational span. 446 . 5. it computes a plot dispersion value 430 Note: this process depends on KeyWords being able to find the source texts which your original word-list was based on. 102 and make other graphs, as explained under Save As You may find it useful to export your plot 102 . December, 2015. Page 251</p> <p><span class="badge badge-info text-white mr-2">267</span> 252 KeyWords 247 249 See also: Plot Links , Key words plot display 8.15 re-sorting: KeyWords How to do it... Sorting can be done simply by pressing the top row of any list. Or by pressing F6 or Ctrl+F6. Or by choosing the menu option. Press again to toggle between ascending & descending sorts. the different sorts key words list offers a choice between sorting by A est words appear at the top) (the key-ness k ey (from A to Z) alphabetical order frequency in the smaller list (the most frequent words come first) (the most frequent words come first) frequency in the reference list rotates between sorting by A key words plot k ey (the key-ness est words appear at the top) alphabetical order (from A to Z) frequency (words which appear oftenest come first) number of links (the most linked words come first) first mention of each key word in the text (words used in smallest sections of text come first) range key key words database A toggles between sorting by frequency (the most k ey k ey words appear at the top) alphabetical order (from A to Z) 241 list Associates An toggles between sorting by frequency (association between title-word and item) alphabetical order (from A to Z) frequency (association between item and title-word) 8.16 the key words screen The display shows each key word · · its frequency in the source text(s) which these key words are key in. (Freq. column below) the % that frequency represents · · the number of texts it was present in its frequency in the reference corpus (RC. Freq. column) · · the reference corpus frequency as a % December, 2015. Page 252</p> <p><span class="badge badge-info text-white mr-2">268</span> 253 WordSmith Tools Manual 245 keyness (chi-square or log likelihood statistic · ) 235 · p value 270 (any which have been joined to each other) lemmas · 168 · the user-defined set 245 used. The The calculation of how unusual the frequency is, is based on the statistical procedure statistic appears to the right of the display. If the procedure is log likelihood, or if chi-square is used and the usual conditions for chi-square obtain (expected value >= 5 in all four cells) the probability (p) will be displayed to the right of the chi-square value. The criterion for what counts as "outstanding" is based on the minimum probability value selected before the key words were calculated. The smaller the number, the fewer key words in the display. Usually you'll not want more than about 40 key words to handle. 252 according to how outstanding their frequencies of occurrence are. The words appear sorted Those near the top are outstandingly frequent. 
At the end of the listing you'll find any which are 236 outstandingly infrequent (negative keywords), in a different colour. There is no upper limit to the keyness column of a set of key words. It is not necessarily sensible to assume that the word with the highest keyness value must be the most outstanding, since keyness is computed merely statistically; there will be cases where several items are obviously equally key (to the human reader) but the one which is found least often in the reference corpus and most often in the text itself will be at the top of the list. Source text 116 (s). As its name suggests, choosing the source text tab gets you to a view of the source text December, 2015. Page 253</p> <p><span class="badge badge-info text-white mr-2">269</span> 254 KeyWords 8.17 WordSmith controller: KeyWords settings 4 marked These are found in the main Controller KeyWords. This is because some of the choices may affect other Tools. KeyWords and WordList both use similar routines: KeyWords to calculate the key words of a text file, and WordList when comparing 260 word-lists . WHAT YOU GET Procedure 245 Chi-square or Log Likelihood. The default is Log Likelihood. See procedure for further details. Max. p value 235 The default level of significance. See p value for more details. December, 2015. Page 254</p> <p><span class="badge badge-info text-white mr-2">270</span> 255 WordSmith Tools Manual Max. wanted (500), Min. frequency (3), Min. % of texts (5% ) You may want to restrict the number of key words (KWs) identified so as to find for example the ten most "key" for each text. The program will identify all the key words, sort them by 236 over key-ness, and then throw away any excess. It will thus favour positive key words negative ones. The minimum frequency is a setting which will help to eliminate any words or clusters which are unusual but infrequent. For example, a proper noun such as the name of a village will usually be extremely infrequent in your reference corpus, and if mentioned only once in the text you're analysing, it is likely not to be "key". The default setting of 3 mentions as a minimum helps reduce spurious hits here. In the case of short texts, less than 600 words long, a minimum of 2 will automatically be used. The minimum percentage of texts (default = 5%) allows you to ignore words which are not found in many texts. Here the percentage is of the text files in the set you are comparing against a reference corpus. If you're comparing a word-list based on one text, each word in it will occur in 100% of the texts and thus won't get ignored. If you compare a word-list based on 200 texts against your reference corpus, the default of 5% would mean that only words which 252 occur in at least 10 of those texts will be considered for keyness. The KeyWords display shows the number of texts each KW was found in. (If you see ?? that is because the data were computed before that facility came into WordSmith.) Exclude negative KWs If this is checked, KeyWords will not compute negative key words (ones which occur in frequently). significantly Minimal processing 246 247 251 or KW clusters If this is checked, KeyWords will not compute plots , links as it computes the key words (they can always be computed later assuming you do not move or delete the original text files). This is useful if computing a lot of KW files in a batch, eg. to make a database. Full lemma processing If this is checked (the default), KeyWords will compute the full frequency in the case of 270 items. 
For example if alone had a lemmatised WENT, GOES etc. and represents GO GO etc. totalled 100, then its frequency will GO, WENT, GONE frequency of 10 but the whole set would count only 10. GO be counted as 100. If unchecked, Max. link frequency To compute a plot is hard work as all the KWs have to be concordanced so as to work out where they crop up. To compute links between each KW is much harder work again and can take time especially if your KWs include some which occur thousands or hundreds of times in the text. To keep this process more manageable, you can set a default. Here 2000 means that any KW which occurs more than 2000 times in the text will not be used for computing 247 . (It will still appear in the plots and list of KWs, of course.) links WHAT YOU SEE December, 2015. Page 255</p> <p><span class="badge badge-info text-white mr-2">271</span> 256 KeyWords Columns The Columns to show/hide list offers all the standard columns: you may uncheck ones you normally do not wish to see. This will only affect newly computed KeyWords data: earlier data uses the column visibility, size, colours etc already saved. They can be altered using the 87 Layout menu option at any time. DATABASE Database: minimum frequency 237 . The default is 1. See database Database: associate minimum texts 241 The default is 5. See associates . 229 245 See also: KeyWords Help Contents , KeyWords calculation . December, 2015. Page 256</p> <p><span class="badge badge-info text-white mr-2">272</span> WordSmith Tools Manual WordList Section IX</p> <p><span class="badge badge-info text-white mr-2">273</span> 258 WordList 9 WordList 9.1 purpose This program generates word lists based on one or more plain text files. The word lists are automatically generated in both alphabetical and frequency order, and optionally you can generate 276 list too. a word index The point of it... These can be used simply in order to study the type of vocabulary used; 1 448 to identify common word clusters ; 2 to compare the frequency of a word in different text files or across genres; 3 between different to compare the frequencies of cognate words or translation equivalents 4 81 ; languages 234 to get a concordance 5 of one or more of the words in your list. 260 262 Within WordList you can compare two lists or , or carry out consistency analysis (simple 263 detailed ) for stylistic comparison purposes. 229 These word-lists may also be used as input to the KeyWords program, which analyses the words in a given text and compares frequencies with a reference corpus, in order to generate lists of "key-words" and "key-key-words". 278 Word lists don't have to be of single words, they can be of clusters . 318 See also: WordList display Online step-by-step guide showing how 9.2 index Explanations 258 What is WordList and what does it do? 260 Comparing Word-lists 261 Comparison Display 262 Consistency Analysis (Simple) 263 Consistency Analysis (Detailed) 425 Definitions 298 Detailed Statistics 270 Lemmas December, 2015. 
Page 258</p> <p><span class="badge badge-info text-white mr-2">274</span> 259 WordSmith Tools Manual 437 Limitations 66 Summary Statistics 92 Match List 289 Mutual Information 315 Sort Order 120 Stop Lists 303 Type/token Ratios Procedures 273 Auto-Join 39 Batch Processing 234 Calling up a Concordance 44 Choosing Texts 60 Colours 63 Computing a new variable 431 Folders 72 Editing Entries 113 Editing Filenames 439 Keyboard Shortcuts 101 Exiting 78 Fonts 314 Minimum & Maximum Settings 294 Mutual Information Score Computing 80 Printing 315 Re-sorting a Word List 101 Saving Results 109 Searching for an Entry by Typing 288 Searching for Entry-types using Menu 278 Single Words or Clusters 124 Text Characteristics 276 Word Index 129 Zapping entries 2 318 See also: WordSmith Main Index , WordList display 9.3 compare word lists 9.3.1 compute key words With a word list visible in the WordList tool, you may choose Compute | KeyWords to get a keywords analysis of the current word list. This will assume you will wish to use the reference 447 254 for comparison. corpus defined in the settings You will see the results in one of the tabs at the bottom of the screen. December, 2015. Page 259</p> <p><span class="badge badge-info text-white mr-2">275</span> 260 WordList As in the KeyWords tool, this procedure compares all the words in your original word list with those in the reference corpus but does not inform you about words which are only found in the reference corpus. 260 315 , word-list with tags as prefix See also : Compare two wordlists 9.3.2 comparing wordlists The idea is to help stylistic comparisons. Suppose you're studying several versions of a story, or and another has assassinate , you can use this different translations of it. If one version uses k ill function. all the words in both lists and will report on all those which appear The procedure compares significantly more often in one than the other, including those which appear more than a minimum number of times in one even if they do not appear at all in the other. How to do it 1. Open a word list. File | Compare 2 wordlists . 2. In the menu, choose 3. Choose a word list to compare with. You will see the results in one of the tabs at the bottom of the screen. 4 tab , The minimum frequency (which you can alter in the Controller KeyWords Settings ) can be set to 1. If it is raised to say 3, the comparison will ignore words which do not appear at least 3 times in at least one of the two lists. 235 from 0.1 to 0.000001 or what you will). The Choose the significance value (all, or a p value 235 smaller the p value , the more selective the comparison. In other words, a p setting of 0.1 will show more words than a p setting of 0.0001 will. 229 261 format is similar to that used in KeyWords . You will also find the Dice coefficient The display 29 433 . which compares the vocabularies of the two texts, reported in the Notes 259 92 263 , Match List , Consistency Analysis See also: Compute Key Words December, 2015. Page 260</p> <p><span class="badge badge-info text-white mr-2">276</span> 261 WordSmith Tools Manual 9.3.3 comparison display 260 by choosing compare two wordlists How to get here? Here is a comparison window, where we have compared Shakespeare's King Lear with Romeo and Juliet. The display shows King Lear, (with % if > 0.01%) -- then, to the right frequency in the text you started with, here frequency in the other text, here (with % if > 0.01%) -- then, to the right Romeo & Juliet, 235 245 chi-square or log likelihood , and p value . 
The criterion for what counts as "outstanding" is based on the minimum probability value entered before the lists were compared. The smaller this probability value the fewer words in the display. The words appear sorted according to how outstanding their frequencies of occurrence are. Those near the top are outstandingly frequent in your main word-list. At the end of the listing you'll find those which are outstandingly infrequent in the first text chosen: in other words, key in the second text. 229 This comparison is similar to the analysis of "key words" in the KeyWords program. The KeyWords analysis is slightly quicker and allows for batch processing. The word is the most key of all, it scores 75 in the keyness column. King At the bottom we see the words of King Lear which are least key in comparison with the play . Romeo and Juliet December, 2015. Page 261</p> <p><span class="badge badge-info text-white mr-2">277</span> 262 WordList 9.4 merging wordlists The point of it You might want to merge 2 word lists (or concordances, mutual information lists etc.) with each other if making each one takes ages or if you are gradually building up a master word list or concordance based on a number of separate genres or text-types. How to do it With one word-list (or concordance) opened, choose File | Merge with and select another. Be aware that... Making a merged word list implies that each set of source texts was different. If you choose to merge 2 word lists both of which contained information about the same text file, WordSmith will do as you ask even though the information about the number of occurrences and of texts in which each word-type was found is (presumably) inaccurate. Merging a list in English with another in Spanish: if you start with the one in Spanish, the one in English will be merged in and henceforth treated as if it were Spanish, eg. in sort order. Presumably if you try to merge one in English with one in Arabic (I've never tried) you should see all the forms but you would get different results merging the Arabic one into the English one (all the Arabic words would be treated as if they were English). 9.5 consistency 9.5.1 consistency analysis (range) This function (termed "range" by Paul Nation ) comes automatically with any word-list. In any word-list you will see a column headed "Texts". This shows the number of texts each word occurred in (the maximum here being the total number of text-files used for the word-list). December, 2015. Page 262</p> <p><span class="badge badge-info text-white mr-2">278</span> 263 WordSmith Tools Manual The point of it... The idea is to find out which words recur consistently in lots of texts of a given genre. For was found to occur in many of a set of business Annual Reports. example, the word consolidate It did not occur very often in each of them, but did occur much more consistently in the business reports than in a mixed set of texts. Naturally, words like the are consistent across nearly all texts in English. (While working on a set of word lists to compare with business reports, I found one text without . I also discovered that the one of my texts was in Italian: but this wasn't the one without the ! The culprit was an election results list, which contained lots of instances of Cons., Lab. and place names, but no instances of the .) the To analyse common grammar words like , a consistency list may be very useful. Even so, you're likely to find some common lexical items recur surprisingly consistently. 
To eliminate the commonly consistent words and find only those which seem to characterise your genre or sub-genre, you need to find out which are significantly consistent. Save your word list, 260 then use it for comparison with others in WordList, or using KeyWords. This way you can determine which are the significantly consistent words in your genre or sub-genre. 263 260 92 See also: Consistency Analysis (Detailed) , Comparing Word-lists , Match List detailed consistency analysis 9.5.2 262 This function does exactly the same thing as simple consistency , but provides much more detail. The point of it... The idea is to help stylistic comparisons. Suppose you're studying several versions of a story, or different translations of it. This function enables you to see all the words which are used in the word lists which you have called up. The Total column shows how many instances of each word occurred overall, Texts shows how December, 2015. Page 263</p> <p><span class="badge badge-info text-white mr-2">279</span> 264 WordList many text-files it came in. Then there are two columns (No. of Lemmas, and Set which behaves as after occurred in all 37 in a word-list) and then a column for each text. In this case, the word texts, it occurred 393 times in all, and it was most frequent in all's well that ends well at 18 occurrences. Statistics and filenames can be seen for the set of 37 Shakespeare plays used 29 here by clicking on the tabs at the bottom. Notes can be edited and saved along with the detailed consistency list. There is no limit except the limit of available memory as to how many text files you can process in this procedure. You can set a minimum number of texts and a minimum overall frequency in the 311 . WordList settings in the Controller How to do it... New...( ) In the window you see when you press you will be offered a tab showing detailed consistency. To choose more than 1, use Control or Shift as you click. Below I have chosen five out of 6 available. (These are versions of Red Riding Hood.) December, 2015. Page 264</p> <p><span class="badge badge-info text-white mr-2">280</span> 265 WordSmith Tools Manual Initially they may come in the wrong order: December, 2015. Page 265</p> <p><span class="badge badge-info text-white mr-2">281</span> 266 WordList so adjust with the two buttons at the right. and now press compute Detailed Consistency now . Settings 311 You can require a minimum number of texts and minimum frequency in the main Controller if you click this. Sorting Each column can be sorted by clicking on its header column ( etc.). When working Word, Freq. on Shakespeare plays, to get the words which occurred in all 37 to the top, I clicked Texts . December, 2015. Page 266</p> <p><span class="badge badge-info text-white mr-2">282</span> 267 WordSmith Tools Manual Row percentages If you choose to Show as % , you will transform the view so as to get row percentages. In this screenshot, December, 2015. Page 267</p> <p><span class="badge badge-info text-white mr-2">283</span> 268 WordList we see the last few items which appear only in Anthony and Cleopatra, then Cleopatra (93.3%), Egypt (93.18%) etc. (Egypt appears also in A Midsummer Night's Dream, As You Like It, KIng Henry VIII.) 
262 268 See also: Detailed Consistency Relations , Comparison , Consistency Analysis (range) 62 261 260 92 , Match List , Column Totals , Comparing Word-lists Display 9.5.2.1 re-sorting: consistency lists The frequency-ordered consistency display can be re-sorted by order (Word) alphabetical frequencies overall (Total, the default) total frequencies in any given file (you see the file names). by the Click on Word, Total or a filename to choose. The sort can be either ascending or descending, the default being descending. 315 See also: Sorting word-lists 9.5.3 detailed consistency relations 263 such as this, of five versions of the fairy story With a detailed consistency list Little Red Riding , Hood detailed ). If you click the red5.lst it looks as if the most long-winded story is probably version 5 ( cons. relation tab you can see the relevant statistics more usefully: December, 2015. Page 268</p> <p><span class="badge badge-info text-white mr-2">284</span> 269 WordSmith Tools Manual where it can be seen that red5 has a type-count of 462 words, more than any other, and that the relation between red2 and red3 is the closest with a relation statistic of 0.487. 433 This relation is the Dice coefficient , based on the joint frequency and the type-counts of the two 426 texts. Type count is the number of different word types in each text. Joint frequency: there are 138 matches in the vocabulary of these two versions, which means that 138 distinct word types book appeared 20 times in one list and 3 times in matched up in the two word lists. (If for example the other, that would count as 1 match.) A Dice coefficient ranges between 0 and 1. The 0.487 can be thought of like a percentage, i.e. there's about a 49% overlap between the vocabularies of the two versions of the same story. 263 . See also : Detailed Consistency 9.6 find filenames If you have an index-based word list on screen you can see how many text files each word was occurs in 7 of found in. For example, in this index based on Shakespeare plays, EYES AND EARS the 37 plays. which of those plays? What if you want to know December, 2015. Page 269</p> <p><span class="badge badge-info text-white mr-2">285</span> 270 WordList Select the word(s) or cluster(s) you're interested in and choose File | Find Files in the menu and you will get something like this: 111 116 276 , making a WordList index See also : source texts , selecting multiple entries 9.7 Lemmas (joining words) what are lemmas and how do we join words? 9.7.1 In a word list, a key word list or a list of collocates you may want to store several entries together: Bringing them together means you're treating them as e.g. want; wants; wanting; wanted. members of the same "lemma" or set -- rather like a headword in a dictionary. A lemmatised head entry has a red mark in the left margin beside it. The others you marked will be coloured as if deleted. The linked entries which have been joined to the head can be seen at the right. 278 we see a word list based on 3-word clusters Here had a where originally a good deal and thereby risen to and a great deal frequency of 24, but has been joined to a good few 141. 273 271 . or manually Joining can be done automatically December, 2015. Page 270</p> <p><span class="badge badge-info text-white mr-2">286</span> 271 WordSmith Tools Manual View all the various lemma forms Double-click on the Lemmas column as in the shot below, and a window of Lemma Forms will open up, showing the various components. 
Get rid of the deleted words If you don't want to see the deleted words 129 them. choose Ctrl-Z to zap 111 274 273 , See also: Auto-Joining methods , Using a text file to lemmatise , selecting multiple entries 187 Concord lemmatisation manual joining 9.7.2 Manual joining You can simply do this by dragging one entry to another. Suppose your word list has WANT WANTED WANTING December, 2015. Page 271</p> <p><span class="badge badge-info text-white mr-2">287</span> 272 WordList you can simply grab wanting or wanted with your mouse and place it on want . 274 (See choosing lemma file if you want to join these to a word which isn't in the list) Can't see the word to join to? If you cannot see all the items you want to join in one screen, you can do the same thing using by 112 . marking 1. Use Alt+F5 to mark an entry for joining to another. The first one you mark will be the "head". For the moment, while you're still deciding which other entries belong with it, the edge of that row will be marked green. Any entries which you then decide to link with the head (by again pressing Alt+F5) will show they're marked too, in white. (If you change your mind you can press Shift+Alt+F5 and the marking will disappear.) 2. Use F4 to join all the entries which you've marked. The program will then put the joint frequencies 112 marked of all the words you've marked with the frequency of the one you first (the head). Alternatively, 1. select the head word, this makes it visible in the status bar. 2. Find the word you want to join and drag it to the status bar . December, 2015. Page 272</p> <p><span class="badge badge-info text-white mr-2">288</span> 273 WordSmith Tools Manual To Un-join If you select an item which has lemmas visible at the right and press Ctrl+F4, this will unjoin the Edit entries of that one lemma. To unjoin all lemmatised forms in the entire list, in the menu choose . | Join | Unjoin All 9.7.3 auto-joining lemmas There are two methods, a) based on a list, and b) based on a template. a) File-based joining 274 which automates the matching & joining process. The text file You can join up lemmas using a ( ) in actual processing of the list takes place when you choose the menu option Match Lemmas WordList, Concord or KeyWords. Every entry in your lemma list will be checked to see whether it matches one of the entries in your word list. In the example, if, say, am, was , and were are found, be . If go and went they will be stored as lemmas of went will be joined to go . are found, then b) Auto-joining based on a template he menu Or you can auto-join any of the entries in your current word list which meet your criteria: t Auto-Join can be used to specify a string such as S or S;ED;ING and will then go through option S or the whole word list, lemmatising all entries where one word only differs from the next by having or ING on the end of it. (Use ; to separate multiple suffixes.) ED Prefix / Suffix / Infix By default all strings typed in are assumed to be suffixes; to join prefixes put an asterisk ( * ) at the right end of the prefix. If you want to search for infixes (eg. bloody in absobloodylutely [languages like Swahili use infixes a lot]) put an asterisk at each end. 
Examples and will join book, booked to book to booking to book S;ED;ING books *S;*ED;*ING books to book, booked to book and booking to book will join UN*;ED;ING will join undo to do, booked to book and booking to book *BLOODY* absobloodylutely to absolutely will join The process can be left to run quickly and automatically, or you can have it confirm with you before joining each one. Automatic lemmatisation, like search-and-replace spell-checking, can produce oddities if just left to run! To stop in the middle of auto-joining, press Escape. Tip With a previously saved list, try auto-joining without confirming the changes (or choose Yes to All during it). Then choose the Alphabetical (as opposed to Frequency) version of the list and sort on Lemmas (by pressing the Lemmas column heading). You will see all the joined entries 270 at the top of the list. It may be easier to Unjoin (Ctrl+F4) any mistakes than to confirm each one... Finally, sort on the Word and save. December, 2015. Page 273</p> <p><span class="badge badge-info text-white mr-2">289</span> 274 WordList 270 See also: Lemmatisation 9.7.4 choosing lemma file The point of it... You may choose to lemmatise all items in the current word-list using a standard text file which groups words which belong together ( be -> was, is, were , etc.). While it is time-consuming producing the text file the first time, it will be very useful if you want to lemmatise lots of word lists, 273 and is much less "hit-and-miss" than auto-joining using a template. here is an English-language lemma list from Yasumasa Someya at http://lexically.net/downloads/ T BNC_wordlists/e_lemma.txt . How to do it Lemma list settings are accessed via the Lists option in the WordList menu or an Advanced Settings button in the Controller December, 2015. Page 274</p> <p><span class="badge badge-info text-white mr-2">290</span> 275 WordSmith Tools Manual followed by Choose the appropriate button (for Concord, KeyWords or WordList) and type the file name or browse for it, then Load it. The file should contain a plain text list of lemmas with items like this: BE -> AM, ARE, WAS, WERE, IS GO -> GOES, GOING, GONE, WENT WordSmith then reads the file and displays them (or a sample if the list is long). The format allows any alphabetic or numerical characters in the language the list is for, plus the single apostrophe, that line won't be included space, underscore. In other words, if you mistakenly put GO = GOES because of the = symbol. The actual processing of the list will take place when you compute your word list, key word list or concordance or when you choose the menu option Match Lemmas ( ) in WordList, Concord or 92 KeyWords. See Match List for a more detailed explanation, with screenshots. Lemmatising 120 is processed. occurs before any stop list What if my text files don't contain the headword of the lemma? December, 2015. Page 275</p> <p><span class="badge badge-info text-white mr-2">291</span> 276 WordList AM, ARE BE as in the list above, but your texts don't actually Suppose you are matching etc with BE contain the word BE with zero frequency and add AM, ARE etc . In that case the tool will insert as needed. 92 120 183 270 , Lemmatisation in Concord , Stop List See also: Lemmatisation , Match List 9.8 WordList Index 9.8.1 what is an Index for? the point of it One of the uses for an Index is to record the positions of all the words in your text file, so that 1. you can subsequently see which word came in which part of each text. 
Another is to speed up access to these words, for example in concordancing. If you select one or more words in the index and press , you get a speedy concordance. 289 Another is to compute "Mutual Information" scores which relate word types to each other. 2. 278 3. Or you can use an index to see word clusters . 12 4. Finally, an index is needed to generate concgram searches. 286 284 276 , find filenames , Exporting index data , Viewing Index Lists See also Making an Index List 12 258 269 , WSConcgram , WordList Help Contents for word clusters 9.8.2 making a WordList Index The process is just like the one for making a word-list except that after choosing your texts and ensuring you like the index filename, you choose the bottom button here: December, 2015. Page 276</p> <p><span class="badge badge-info text-white mr-2">292</span> 277 WordSmith Tools Manual In this screenshot above, the basic filename is shakespeare_plays : WordSmith will add .tokens and .types to this basic filename as it works. Two files are created for each index: file: a large file containing information about the position of every word token in your text .tokens files. .types file: knows the individual word types. will check If you choose an existing basic filename which you have already used, WordList whether you want to add to it or start it afresh: 289 278 and Mutual Information scores for each An index permits the computation of word clusters December, 2015. Page 277</p> <p><span class="badge badge-info text-white mr-2">293</span> 278 WordList word type. The screenshot below shows the progress bars for an index of the BNC corpus; on a modern PC it might work at a rate of about 2.8 million words per minute. The resulting BNC.tokens file was 1.6GB in size and the BNC.types file was 26 MB. adding to an index To add to an existing index, just choose some more texts and choose File | New | Index . If the existing file-name is already in use for an index, you will be asked whether to add more or start it afresh as shown above. 258 284 276 , WordList Help Contents . See also Using Index Lists , Viewing Index Lists 9.8.3 index clusters WordList clusters word list doesn't need to be of single words. You can ask for a word list consisting of two, three, A 276 up to eight words on each line. T o do cluster processing in WordList, first make an index . How to see clusters... 284 . Compute | Clusters Open the index. Now choose December, 2015. Page 278</p> <p><span class="badge badge-info text-white mr-2">294</span> 279 WordSmith Tools Manual Words to make clusters from · "all" : all the clusters involving all words above a certain frequency (this will be s-l-o-w for a big corpus like the BNC ), or · "selection": clusters only for words you've selected (eg. you have highlighted BOOK and BOOKS and you want clusters like ). book a table, in my book To choose words which aren't next to each other, press Control and click in the number at the left -- keep Control held down and click elsewhere. The first one clicked will go green and the others white. In the picture below, using an index of the BNC corpus, I selected world and then life by clicking numbers 164 and 167. December, 2015. Page 279</p> <p><span class="badge badge-info text-white mr-2">295</span> 280 WordList The process will take time. In the case of BNC, the index knows the positions of all of the 100 million words. 
9.8.3 index clusters

WordList clusters
A word list doesn't need to be of single words. You can ask for a word list consisting of two, three, up to eight words on each line. To do cluster processing in WordList, first make an index.

How to see clusters...
Open the index. Now choose Compute | Clusters.

Words to make clusters from
· "all": all the clusters involving all words above a certain frequency (this will be s-l-o-w for a big corpus like the BNC), or
· "selection": clusters only for words you've selected (e.g. you have highlighted BOOK and BOOKS and you want clusters like book a table, in my book).
To choose words which aren't next to each other, press Control and click in the number at the left -- keep Control held down and click elsewhere. The first one clicked will go green and the others white. In the picture below, using an index of the BNC corpus, I selected world and then life by clicking numbers 164 and 167.

The process will take time. In the case of the BNC, the index knows the positions of all of the 100 million words. To find 3-word clusters in the case above, it took about a minute to process all the 115,000 cases of world and life and find 5,719 clusters like the world bank and of real life. Chris Tribble tells me it took his PC 36 hours to compute all 3-word clusters on the whole BNC... he was able to use the PC in the meantime, but that's not a job you're going to want to do often.

What you see
The cluster size must be between 2 and 8 words.
The min. frequency is the minimum number of each that you want to see.
omit #: if selected, this won't show any clusters involving numbers and dates.
omit phrase frames: see the phrase frames section below.
Here the user has chosen to see any 3-4-word clusters that appear 5 or more times.

Working constraints
The "max. frequency %" setting is to speed the process up.

In more detail...
It means the maximum frequency percentage which the calculation of clusters for a given word will process. This is because there are lots and lots of the very high frequency items and you may well not be interested in clusters which begin with them. For example, the item the is likely to be about 6% of any word-list (about 6 million of them in the BNC therefore), and you might not want clusters starting the... -- if so, you might set the max. percent to 0.5% or 0.1% (which for the BNC corpus will cut out the top 102 frequency words). You will still get clusters which include very high frequency items in the middle or end, like the in book a table or a in in my book, but would not get clusters which begin with the very high frequency word. The more words you include, the longer the process will take...

Stop at, like Concord clusters, offers a number of constraints, such as sentence and other punctuation-marked breaks. The idea is that a 5-word cluster which starts in one sentence and continues in the next is not likely to make much sense.

Max. seconds per word is another way of controlling how long the process will take. The default (0) means no limit. But if you set this e.g. to 30 then, as WordList processes the words in order, as soon as one has taken 30 seconds no further clusters will be collected starting with that word.

batch processing allows you to create a whole set of cluster word-lists at one time.

Phrase frames
These are what William H. Fletcher has defined as phrase-frames, i.e. "groups of wordgrams identical but for a single word", in his kfNgram program. Here, processing 23 Dickens novels shows lots of phrase frames where the wildcard word is represented with *. If you double-click the lemmas column (highlighted here in yellow), you get to see the detail. The process joins all the variants of the phrase in the Lemmas column. In the word list itself they will appear deleted (because they have been joined to another item, the phrase frame). You can un-join them all if you want (Edit | Joining | Unjoin or Unjoin all).

Omit phrase frames?
If you don't want to see phrase frames, select the omit phrase frames option. Here below, the listing has all the his hand sequences together but not drawing his hand across, gave his hand to, etc. as shown in the phrase frame view above.
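To make the phrase-frame idea concrete, here is a small sketch (an illustration only, not WordSmith's or kfNgram's implementation) which groups n-grams that are identical except for one slot and shows that slot as *. The sample n-grams and their frequencies are invented.

from collections import defaultdict

def phrase_frames(ngrams):
    # Group n-grams identical but for a single word.
    # ngrams: dict of tuple-of-words -> frequency.
    frames = defaultdict(dict)
    for gram, freq in ngrams.items():
        for i in range(len(gram)):
            frame = gram[:i] + ("*",) + gram[i + 1:]
            frames[frame][gram] = freq
    # keep only frames that actually group two or more different n-grams
    return {f: v for f, v in frames.items() if len(v) > 1}

ngrams = {("drawing", "his", "hand", "across"): 3,
          ("gave", "his", "hand", "to"): 5,
          ("put", "his", "hand", "on"): 4,
          ("put", "his", "hat", "on"): 2}
for frame, variants in phrase_frames(ngrams).items():
    print(" ".join(frame), "->", sum(variants.values()), "occurrences,", len(variants), "variants")
# only "put his * on" qualifies: the other two n-grams differ in more than one slot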
Here is a small set of 3-word clusters involving rabies from the BNC corpus. Some of them are plausible multi-word units.

It's a word list
Finally, remember this listing is just like a single-word word list. You can save it as a .lst file and open it again at any time, separately from the index.

See also: find the files for specific clusters, clusters in Concord

9.8.4 join clusters

The idea is to group clusters like
I DON'T THINK
NO I DON'T THINK
I DON'T THINK SO
I DON'T THINK THAT
etc.
You can join them up in a process like lemmatisation, either so that the smaller clusters get merged as 'lemmas' of a bigger one, or so that the smaller ones end up as 'lemmas'. In this screenshot, shorter clusters have been merged with longer ones so that A BEARING OF FORTY-FIVE DEGREES relates to several related clusters, visible by double-clicking the lemmas to show something like this:

How to do it
Choose Edit | Join | Join Clusters in the WordList menu. The process takes quite a time because each cluster has to be compared with all those in the rest of the list; interrupt it if necessary by pressing Suspend.

9.8.5 index lists: viewing

In WordList, open an index as you would any other kind of word-list file -- using File | Open. The filename will end .tokens. Easier: in the Controller | Previous lists, choose any index you've made and double-click it. The index looks exactly like a large word-list. (Underneath, it "knows" a lot more and can do more, but it looks the same.)

The picture above shows the top 10 words in the BNC Corpus. Number 5 (#) represents numbers or words which contain numbers such as £50.00. These very frequent words are also very consistent -- they appear in at least 99% of the 4,054 texts of the BNC. In the view below, you see words sorted by the number of Texts: all these words appeared in 10 texts but their frequencies vary.

You can highlight one or more words or mark them, then compute a concordance to get a speedy concordance. But its best use to start with is to generate word clusters like these:

See also: Making an Index List, WordList clusters, WordList Help Contents.

9.8.6 index exporting

The point of it...
An index file knows the position of every single word in your corpus and it is possible therefore to ask it to supply specific data: for example, the lengths of each sentence or each text in the corpus (in words), or the position of each occurrence of a given word.

How to do it
With an index open, choose File | Export index data, then complete the form with what you need. Here we have chosen to export the details about the word SHOESTRING in a given index, and to get to see all the sentence lengths (of all sentences in the corpus, not just the ones containing that word). A fragment of the results is shown here:
At the top there are word-lengths of some of the 480 text files, the last of which was 6,551 words long; then we see the details of 5 cases of the word SHOESTRING in the corpus, which appeared twice in text AJ0.txt, once in J3W.txt etc.; finally we get the word-lengths of all the sentences in the corpus, the first one only 4 words long. This process will be quite slow if you request a lot of data. If you don't check the sentence lengths you will still get text lengths; it will be quicker if you leave the word details space empty.

9.9 menu search

Using the menu you can search for a sub-string within an entry -- e.g. all words containing "fore". The asterisk means that the item can be found in the middle of a word: by entering *fore you will find before but not beforehand, while *fore* will find them both. These searches can also be repeated. This function enables you to find parts of words so that you can edit your word-list, e.g. by joining two words as one.

You can search for ends or middles of words by using the * wildcard. Thus *TH* will find other, something, etc.; *TH will find booth, sooth, etc. You can then use F8 to repeat your last search.

The search hot keys are:
F8 repeat last search (use in conjunction with F10 or F11)
F10 search forwards from the current line
F11 search backwards from the current line
F12 search starting from the beginning

This function is handy for lemmatization (joining words which belong under one entry, such as seem/seems/seemed/seeming etc.).

See also: searching for an entry by typing

9.10 relationships between words

9.10.1 mutual information and other relations

The point of it
A Mutual Information (MI) score relates one word to another. For example, if problem is often found with solve, they may have a high mutual information score. Usually, the will be found much more often near problem than solve will, so the procedure for calculating Mutual Information takes into account not just the most frequent words found near the word in question, but also whether each word is often found elsewhere, well away from the word in question. Since the is found very often indeed far away from problem, it will not tend to be related, that is, it will get a low MI score.

There are several other alternative statistics: you can see examples of how they differ under Mutual Information Display.

This relationship is bi-lateral: in the case of kith and kin, it doesn't distinguish between the virtual certainty of finding kin near kith, and the much lower likelihood of finding kith near kin.

There are various different formulae for computing the strength of collocational relationships. The MI in WordSmith ("specific mutual information") is computed using a formula derived from Gaussier, Lange and Meunier described in Oakes, p. 174; here the probability is based on total corpus size in tokens. Other measures of collocational relation are computed too, which you will see explained under Mutual Information Display.

Settings
The Relationships settings are found in the Controller under Main Settings | Advanced | Index, or in a menu option in WordList.

See also: Mutual Information Display, Making an Index List, Computing Mutual Information, Viewing Index Lists, WordList Help Contents. See Oakes for further information about Mutual Information, Dice, MI3 etc.
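As a rough picture of the kind of calculation involved, here is a sketch of a pointwise mutual information score computed from joint and separate frequencies. It is not the exact formula WordSmith applies (which, as noted above, follows Gaussier, Lange and Meunier as described in Oakes), and the frequencies used are invented, but it shows why an item like the, frequent everywhere, scores low even when it often appears near the node word.

import math

def mi_score(joint, freq_node, freq_collocate, corpus_tokens):
    # log2 of the observed joint probability over the probability
    # expected if the two words were independent of each other.
    p_joint = joint / corpus_tokens
    p_expected = (freq_node / corpus_tokens) * (freq_collocate / corpus_tokens)
    return math.log2(p_joint / p_expected)

N = 1_000_000                                   # corpus size in tokens (invented figures)
print(round(mi_score(80, 500, 400, N), 2))      # PROBLEM near SOLVE: high score
print(round(mi_score(300, 500, 60_000, N), 2))  # PROBLEM near THE: much lower, despite more co-occurrences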
9.10.2 relationships display

The Relationships procedure contains a number of columns and uses various formulae:

Word 1: the first word in a pair, followed by Freq. (its frequency in the whole index).
Word 2: the other word in that pair, followed by Freq. (its frequency in the whole index). If you have computed "to right only", then Word 1 precedes Word 2.
Texts: the number of texts this pair was found in (there were 23 in the whole index).
Gap: the most typical distance between Word 1 and Word 2.
Joint: their joint frequency over the entire span (not just the joint frequency at the typical gap distance).

In line 7 of this display, BACKWARDS occurs 83 times in the whole index (based on Dickens novels), and FORWARDS 8 times. They occur together 62 times. The gap is 2 because backwards, in these data, typically comes 2 words away from forwards. The pair backwards * forwards comes in 17 texts. (This search was computed using the to right only setting mentioned above.)

As usual, the data can be sorted by clicking on the headers. Let's now sort by clicking on "Z score" first and "Word 1" second. You get a double sort, main and secondary, because sometimes you will want to see how MI or Z score or other sorting affects the whole list, and sometimes you will want to keep the words sorted alphabetically and only sort by MI or Z score within each word-type. Press Swap to switch the primary & secondary sorts.

The order is not quite the same... but not very different either. Both Freq. columns have fairly small numbers.

Here is the display sorted by MI3 Score (Oakes p. 172): much more frequent items have jumped to the top.

Now, by Log Likelihood (Dunning, 1993): here the Word 2 items are again very high frequency ones and we get at colligation (grammatical collocation). A T Score listing is fairly similar, but a Dice score ordered list brings us back to results akin to the first two shown above.

See also: Formulae, Computing Relationships, Mutual Information and other relationships, Making an Index List, Viewing Index Lists, WordList Help Contents. See Oakes for further information about the various statistics offered.

9.10.3 relationships computing

To compute these relationship statistics you need a WordList Index. Then in its menu, choose Compute | Relationships.

words to process
You can choose whether to compute the statistics for all entries, or only any selected (highlighted) entries, or only those between two initial characters e.g. between A and D, or indeed to use your own specified words only. If you wish to select only a few items for MI calculation, you can mark them first. Or you can always do part of the list (e.g. A to D) and later merge your mutual-information list with another (E to H).
Alternatively you may choose to use only items from a plain text file constructed using the same syntax as a match-list file, or to use all items except ones from your plain text file.

omissions
omit any containing # will cut out numbers, and omit if word1=word2 is there because you might find that GOOD is related to GOOD if there are lots of cases where these two are found near each other.

show pairs both ways allows you to locate all the pairs more easily because it doubles up the list. For example, suppose we have a pair of words such as HEAVEN and EARTH. This will normally enter the list only in one order, let us say HEAVEN as word 1 and EARTH as word 2. If you're looking at all the words in the Word 1 column, you will not find EARTH. If you want to be able to see the pair as both HEAVEN - EARTH and EARTH - HEAVEN, select show pairs both ways. Here we can see this with WITH and DUST.

to right only: if this is checked, possible relations are computed to the right of the node only. That is, when considering WITH, say, cases of DUST to the right will be noticed, but cases where DUST is to the left of WITH would get ignored. Here, the number of texts goes down to 5 from 9, the MI score is lower, etc., because the process looks only to the right. (In the case of a right-to-left language like Arabic, the processing is still of the words following the node word.)

recompute token count allows you to get the number of tokens counted again, e.g. after items have been edited or deleted.

min. and max.
max. frequency percent: ignores any tokens which are more frequent than the percentage indicated. Set the maximum frequency, for example, to 0.5% to cut out words whose frequency is greater than that. (The point of this is to avoid computing mutual information for words like the and of, which are likely to have a frequency greater than say 1.0%. For example 0.5%, in the case of the BNC, would mean ignoring about 20 of the top frequency words, such as WITH, HE, YOU. 0.1% would cut about 100 words including GET, BACK, BECAUSE. If you want to include all words, then set this to 100.000.)
min. frequency: the minimum frequency for any item to be considered for the calculation. (Default = 5; a minimum frequency of 5 means that no word of frequency 4 or less in the index will be visible in the relationship results. If an item occurs only once or twice, the relationship is unlikely to be informative.)
stop at allows you to ignore potential relationships e.g. across sentence boundaries. It has to do with whether breaks such as punctuation or sentence breaks determine that one word cannot be related to another. With stop at sentence break, "I wrote the letter. Then I posted it" would not consider posted as a possible collocate of letter because there's a sentence break between them.
span: the number of intervening words between collocate and node. With a span of 5, the node wrote would consider the, letter, then, I and posted as possible collocates if stop at were set at no limits in the example above.
min. texts: the minimum number of texts any item must be found in to be considered for the calculation.
min. Dice/mutual info./MI3 etc.: the minimum value which the MI or other selected statistic must reach to be reported. A useful limit for MI is 3.0; below this, the linkage between node and collocate is likely to be rather tenuous. Choose whether ALL the values set here are used when deciding whether to show a possible relationship, or ANY. (Each threshold can be set between -9999.0 and 9999.0.)

Computing the MI score for each and every entry in an index takes a long time: some years ago it took over an hour to compute MI for all words beginning with B in the case of the BNC edition (written, 90 million words) in the screenshot below, using default settings. It might take 24 hours to process the whole BNC, 100 million words, even on a modern powerful PC. Don't forget to save your results afterwards!

See also: Collocates, Mutual Information Settings, Mutual Information Display, Detailed Consistency Relations, Viewing Index Lists, Making an Index List, Recompute Token Count, WordList Help Contents.

9.11 recompute tokens

Why recompute the tokens?
To compute relations such as Mutual Information or Keyness we need an estimate of the total number of running words (let's call it TNR) in the text corpus from which the data came. It is tricky to decide what actually counts as the TNR. Not only are there problems to do with hyphenation in the middle of a word, numbers, apostrophes and other non-letters, words cut out because of a stoplist etc., but also a decision whether TNR should in principle include all of those or in principle include only the words or clusters now in the list in question. In practice, for single-word word lists this usually makes little difference. In the case of word clusters, however, there might be a big difference between the TNR for words and the TNR for clusters, and anyway what exactly is meant by running clusters of words if you think about how they are computed?

For most normal purposes, the total number of running words (tokens) computed when the word list or index was created will be used for these statistical calculations.

How to do it
Compute | Tokens

What it affects
Any decision made here will apply equally both to the node and the collocate, whether these are clusters or single words, or to the little word-list and the reference corpus word-list in the case of key words calculations. If you do choose to recompute the token count, then the TNR will be calculated as the total of the word or cluster frequencies for those entries still left in the list. After any have been zapped, or if a minimum frequency above 1 is used, the difference may be quite large. If you choose not to recompute, the total number of running words (tokens) computed when the word list or index was created will be used.

9.12 statistics

9.12.1 statistics window

Visible by clicking the Statistics tab at the bottom of a WordList window. Overall results take up the top row. Details for the individual text files follow below. Statistics include:
· number of files involved in the word-list
· file size (in bytes, i.e. characters)
· running words in the text (tokens)
· tokens used in the list (would be affected by using a stoplist or changes to minimum settings)
· sum of entries: choose Compute | Tokens to see this, otherwise it will be blank
· no. of different words (types)
· type/token ratios
· no. of sentences in the text
· mean sentence length (in words)
· standard deviation of sentence length (in words)
· no. of paragraphs in the text
· mean paragraph length (in words)
· standard deviation of paragraph length (in words)
· no. of headings in the text (none here because WordSmith didn't know how to recognise headings)
· mean heading length (in words)
· standard deviation of heading length (in words)
· no. of sections in the text (here 480 because WordSmith only noticed 1 section per text)
· mean section length (in words)
· numbers removed
· stoplist tokens and types removed
· the number of 1-letter words ... the number of n-letter words (to see these, scroll the grid horizontally). 14 is the default maximum word length, but you can set it to any length up to 50 letters in Word List Settings, in the Settings menu. Longer words are cut short, but this is indicated with a + at the end of the word.

The number of types (different words) is computed separately for each text. Therefore if you have done a single word-list involving more than one text, summing the number of types for each text will not give the same total as the number of types over the whole collection.

Vertical layout
If you prefer the layout in previous versions of WordSmith, you can choose to save the statistics vertically in a text file. This lets you choose which ones (any unchecked are zero in the data), and the data will be saved listed vertically. Alternatively you could export the data here to Excel and use its Transpose function to get the rows and columns swapped.

Tokens used for word list
In these data, there were over 2.8 million running words of text, but 38,943 numbers were not listed separately, so the number of tokens in the word-list is a little under 2.8 million.

MS Word's word count is different!
The number of tokens found is affected by your settings such as treatment of numbers, hyphens and mid-word letter settings (e.g. the apostrophe). For that reason you may well find that different programs give different values for the same text. (Besides, in the case of MS Word we are not told how a "word" is defined...)

Sum of entries can be computed after the word-list is created by choosing Compute | Tokens and will show the total number of tokens now available by adding the frequencies of each entry (you may have deleted some).

See also: WordList display (with a screenshot), Summary Statistics, Starts and Ends of Text Segments, Recomputing tokens.
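If you want to check figures of this kind against a text of your own outside WordSmith, a rough approximation is easy to script. The sketch below is only an illustration: its definitions of a word and a sentence are crude and will not match WordSmith's settings-dependent counts, and the file name is made up.

import re
from statistics import mean, stdev

def basic_stats(path):
    text = open(path, encoding="utf-8").read()
    tokens = re.findall(r"[A-Za-z']+", text)          # crude word definition
    types = {t.upper() for t in tokens}
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    sent_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "tokens": len(tokens),
        "types": len(types),
        "mean word length": round(mean(len(t) for t in tokens), 2),
        "sentences": len(sentences),
        "mean sentence length": round(mean(sent_lengths), 2),
        "std dev sentence length": round(stdev(sent_lengths), 2) if len(sent_lengths) > 1 else 0.0,
    }

print(basic_stats("a00.txt"))   # hypothetical file name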
9.12.2 type/token ratios

If a text is 1,000 words long, it is said to have 1,000 "tokens". But a lot of these words will be repeated, and there may be only say 400 different words in the text. "Types", therefore, are the different words. The ratio between types and tokens in this example would be 40%.

But this type/token ratio (TTR) varies very widely in accordance with the length of the text -- or corpus of texts -- which is being studied. A 1,000-word article might have a TTR of 40%; a shorter one might reach 70%; 4 million words will probably give a type/token ratio of about 2%, and so on. Such type/token information is rather meaningless in most cases, though it is supplied in a WordList statistics display. The conventional TTR is informative, of course, if you're dealing with a corpus comprising lots of equal-sized text segments (e.g. the LOB and Brown corpora). But in the real world, especially if your research focus is the text as opposed to the language, you will probably be dealing with texts of different lengths, and the conventional TTR will not help you much.

WordList offers a better strategy as well: the standardised type/token ratio (STTR) is computed every n words as WordList goes through each text file. By default, n = 1,000. In other words, the ratio is calculated for the first 1,000 running words, then calculated afresh for the next 1,000, and so on to the end of your text or corpus. A running average is computed, which means that you get an average type/token ratio based on consecutive 1,000-word chunks of text. (Texts with fewer than 1,000 words, or whatever n is set to, will get a standardised type/token ratio of 0.)

Setting the N boundary
Adjust the n number in Minimum & Maximum Settings to any number between 100 and 20,000.

What STTR actually counts
Note: the ratio is computed a) counting every different form as a word (so say and says are two types), b) using only the words which are not in a stop-list, c) using only those which are within the length you have specified, and d) taking your preferences about numbers and hyphens into account. The number shown is a percentage of new types for every n tokens. That way you can compare type/token ratios across texts of differing lengths. This method contrasts with that of Tuldava (1995:131-50), who relies on a notion of 3 stages of accumulation. The WordSmith method of computing STTR was my own invention but parallels one of the methods devised by the mathematician David Malvern working with Brian Richards (University of Reading).

Further discussion
TTR and STTR are both pretty crude measures, even if they are often assumed to imply something about "lexical density". Suppose you had a text which spent 1,000 words discussing ELEPHANT, LION, TIGER, etc., then 1,000 discussing MADONNA, ELVIS etc., and then 1,000 discussing CLOUD, RAIN, SUNSHINE. If you set the STTR boundary at 1,000 and happened to get say 48% or so for each section, the statistic in itself would not tell you there was a change involving Africa, Music, Weather. Suppose the boundary between Africa & Music came at word 650 instead of at word 1,000: I guess there'd be little or no difference in the statistic. But what would make a difference? A text which discussed clouds, written by a person who distinguished a lot between types of cloud, might also use MIST, FOG, CUMULUS, CUMULO-NIMBUS. This would be higher in STTR than one written by a child who kept referring to CLOUD but repeated adjectives like HIGH, LOW, HEAVY, DARK, THIN, VERY THIN a lot in describing them...

(NB. Shakespeare is well known to have used a rather limited vocabulary in terms of measures like these!)
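The running-average idea behind STTR can be shown in a few lines. This sketch illustrates only the principle described above (consecutive n-word chunks, a type/token ratio per chunk, then the mean of those ratios); it ignores the stop-list, word-length and number/hyphen preferences WordSmith takes into account, so its figures will not match WordSmith's exactly, and the file name is invented.

def sttr(tokens, n=1000):
    # Standardised type/token ratio: mean TTR over consecutive n-word chunks.
    # Returns 0.0 if the text is shorter than n tokens, as WordList does.
    chunks = [tokens[i:i + n] for i in range(0, len(tokens) - n + 1, n)]
    if not chunks:
        return 0.0
    ratios = [len({w.upper() for w in chunk}) / n * 100 for chunk in chunks]
    return sum(ratios) / len(ratios)

words = open("mytext.txt", encoding="utf-8").read().split()   # hypothetical file
conventional = len({w.upper() for w in words}) / len(words) * 100
print(f"STTR: {sttr(words):.2f}%   conventional TTR: {conventional:.2f}%")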
9.12.3 summary statistics

A word list's statistics give you data about the corpus, but you may need more specific information about individual words in a word list too. How many end in -ly? Press Count to get something like this:

There is no limit on the searches.

Cumulative Column
A cumulative count adds up scores on another column of data apart from the one you are processing for your search. The columns in this window are for numerical data only. Select one and ensure activated is ticked.

In this example, a word-list was computed and a search was made of 4 common word endings (and one ridiculous one). For -LY there are 2,084 types, with a total of 41,886 tokens in this corpus. -ITY and -NESS are found at the ends of fairly similar numbers of word-types, but -ITY has many more tokens in these data.

Breakdown
See the example for Concord.

Load button
See the explanation for count data frequencies.

9.13 stop-lists and match-lists

In WordList, a stop list is used in order to filter out some words, usually high-frequency words, that you want excluded from your word-list. The idea of a match-list is to be able to compare all the words in your word list with another list in a plain text file and then do one of a variety of operations, such as deleting the words which match, deleting those which don't, or just marking the ones in the list. For both, you can define your own lists and save them in plain text files. Settings are accessed via the WordList menu or by an Advanced Settings button in the Controller.

See also: lemma lists, general explanation of stop-lists

9.14 import words from text list

The point of it
You might want a word list based on some data you have obtained in the form of a list, but whose original texts you do not have access to.

Requirements
Your text file can be in any language (select this before you make the list), and can be in Unicode or ASCII or ANSI, plain text. <Tab> characters are expected to separate the columns of data. Decimal points and commas will be ignored. Words will have leading or trailing spaces trimmed off. The words do not need to be in frequency or alphabetical order. You need at least a column with words and another with a number representing each word's frequency.

Example
; My word list for test purposes.
THIS	67,543
IT	33,218
WILL	2,978
BE	5,679
COMPLETE	45
AND	99,345
UTTER	54
RUBBISH	99
IS	55,678
THE	678,965

You should get results like these. Statistics are calculated in the simplest possible way: the word-lengths (plus mean and standard deviation), and the number of types and tokens. Most procedures need to know the total number of running words (tokens) and the number of different word types, so you should manage to use the word-list in KeyWords etc. The total is computed by adding the frequencies of each word-type (67,543 + 33,218 + 2,978 etc. in the example above). Optionally, a line can start \TOTAL=\ and contain a numerical total, e.g.
\TOTAL=\ 299981
In this case the total number of tokens will be assumed to be 299,981 instead.
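If your frequency data start life in a script rather than in a ready-made list, a few lines are enough to write a file in the layout described above: one word and its frequency per line, tab-separated, with an optional comment line and an optional \TOTAL=\ line. This is a hedged illustration rather than part of WordSmith, and the output file name is invented.

from collections import Counter

freqs = Counter({"THIS": 67543, "IT": 33218, "WILL": 2978, "BE": 5679,
                 "COMPLETE": 45, "AND": 99345, "UTTER": 54, "RUBBISH": 99,
                 "IS": 55678, "THE": 678965})

with open("my_wordlist.txt", "w", encoding="utf-8") as out:
    out.write("; word list written by a script\n")
    # optional: declare the token total explicitly rather than letting it be summed
    out.write(f"\\TOTAL=\\ {sum(freqs.values())}\n")
    for word, freq in freqs.most_common():
        out.write(f"{word}\t{freq}\n")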
how to do it When you choose the New menu option ( ) in WordList you get a window offering three tabs: a tab for most usual purposes, Main December, 2015. Page 308</p> <p><span class="badge badge-info text-white mr-2">324</span> 309 WordSmith Tools Manual 263 one for , and another ( Advanced ) for creating a word list using a plain text Detailed Consistency file. Set the word column and frequency column appropriately according to the tabs in each line. (Column 1 assumes that the word comes first before any tabs; in the case of CREA's Spanish word-list there is a column for ranking so the word and frequency columns would need to be 2 and 3 respectively.) Choose your .txt file(s) and a suitable folder to save to, add any notes you wish, and press create word list(s) now . 9.15 settings Enter topic text here. December, 2015. Page 309</p> <p><span class="badge badge-info text-white mr-2">325</span> 310 WordList 9.15.1 WordSmith controller: Index settings Index File The filename is for a default index which you wish to consider the usual one to open. thorough concordancing : when you compute a concordance from an index, you will either get ( thorough checked) or not get (if not checked) full sentence, paragraph and other 166 as in a normal concordance search. (Computing these statistics takes a statistics little longer.) show if frequency at least : determines which items you will see when you load up the index file. (What you see looks like a word list but it is reading the underlying index.) Clusters the minimum and maximum sizes are 2 and 8. Set these before you compute a multi-word word 278 list based on the index. A good maximum is probably 5 or 6. stop at: you can choose where you want cluster breaks to be assumed. With the setting above (no limits), " I wrote the letter. Then I posted it " would consider letter as a possible multi-word string even though there's a sentence break then I posted 188 between them. Relationships 294 See relationships computing . December, 2015. Page 310</p> <p><span class="badge badge-info text-white mr-2">326</span> 311 WordSmith Tools Manual 9.15.2 WordSmith controller: WordList settings 4 marked . These are found in the main Controller WordList 314 This is because some of the choices -- e.g. Minimum & Maximum Settings -- may affect other Tools. What you Get and What you See . There are 2 sets : WHAT YOU GET Word Length & Frequencies 314 See Minimum & Maximum Settings . Standardised Type/Token # 303 See WordList Type/Token Information . December, 2015. Page 311</p> <p><span class="badge badge-info text-white mr-2">327</span> 312 WordList Detailed Consistency = a total frequency for a word to be included in the Detailed Min. frequency overall 263 list. Consistency = the minimum number of texts that word must appear in. Min. texts WHAT YOU SEE Tags By default you get "words only, no tags". If you want to include tags in a word list, you need 141 first. Then choose one of the options here. to set up a Tag File BECAUSE <w CJS> or In the example here we see that is classified by the BNC either as a a . (That's how the BNC classifies BECAUSE OF ...) <w PRP> December, 2015. Page 312</p> <p><span class="badge badge-info text-white mr-2">328</span> 313 WordSmith Tools Manual 315 For colours and tags see WordList and Tags . Columns The Columns to show/hide list offers all the standard columns: you may uncheck ones you normally do not wish to see. 
This will only affect newly computed KeyWords data: earlier data uses the column visibility, size, colours etc already saved. They can be altered using the 87 Layout menu option at any time. Case Sensitivity Normally, you'll make a case-insensitive word list. If you wish to make a word list which 314 the , and THE , activate case sensitivity . distinguishes between The Lemma Visibility By default in a word-list you'll see the frequency of the headword plus the associated forms; show headword frequency only box, the frequency column will ignore the if you check the associated wordform frequencies. Similarly, if you check omit headword from lemma column you will see only the associated forms there. December, 2015. Page 313</p> <p><span class="badge badge-info text-white mr-2">329</span> 314 WordList 284 258 276 , WordList and , WordList Help Contents , Viewing Index Lists See also: Using Index Lists 315 278 . tags , Computing word list clusters 9.15.3 minimum & maximum settings These include: minimum word length Default: 1 letter. When making a word-list, you can specify a minimum word length, e.g. so as to cut out all words of less than 3 letters. maximum word length Default: 49 letters. You can allow for words of up to 50 characters in length. If a word exceeds the limit and Abbreviate with + is checked, WordList will append a + symbol at the end of it to show that it was cut short. (If Abbreviate with + is not checked, the long word will be omitted from your word list. You might wish to use this to set both minimum and maximum to say, 4, and leave Abbreviate with + un-checked – that way you'll get a word-list with only the 4-letter words in it. minimum frequency Default: 1. By default, all words will be stored, even those which occur once only. If you want only the more frequent words, set this to any number up to 32,000. maximum frequency Default maximum is 2,147,483,647 (2 Gigabytes). You'd have to analyse a lot of text to get a word which occurred as frequently as that!. You might set this to say 500, and the minimum to 50: that way your word-list would hold only the moderately common words. type/token mean number (default 1,000) Enables a smoothed calculation of type/token ratio for word lists. Choose a number between 10 and 303 20,000. For a more complete explanation, see WordList Type/Token Information . 124 120 113 See also: Text Characteristics , Setting Defaults , Stop Lists 9.15.4 case sensitivity Normally, you'll make a case-insensitive word list, especially as in most languages capital letters are used not only to distinguish proper nouns but also to signal beginnings of sentences, headings, etc. If, however, you wish to make a word list which distinguishes between major, Major 4 ). in the Controller WordList Settings | Case Sensitivity and MAJOR, activate case sensitivity ( When you first see your case-sensitive list, it is likely to appear all in UPPER CASE. Press Ctrl+L 87 ) to change this. menu option ( Layout or choose the December, 2015. Page 314</p> <p><span class="badge badge-info text-white mr-2">330</span> 315 WordSmith Tools Manual 9.16 sorting How to do it... Sorting can be done simply by pressing the top row of any list. Press again to toggle between ascending & descending sorts. With a word-list on your screen, the main Frequency window doesn't sort, but you can re-sort the Alphabetical window (look at the tabs at the bottom of WordList to choose the tab) in a number of different ways. The menu offers various options. 
Alphabetical Word Sort Many languages have their own special sorting order, so prior to sorting or re-sorting, check that you 81 for the words being sorted. Spanish, for example, uses this have selected the right language order: A,B,C,CH,D,E,F,G,H,I,J,K,L,LL,M,N,Ñ,O,P,Q,R,S,T,U,V,W,X,Y,Z. KeyWords and other comparisons require an alphabetically-ordered list in ascending order. If you get problems, please open the word lists in WordList, choose the "alphabetical" tab, sort by pressing the "Word" header until the sort is definitely alphabetical ascending, then choose the Save menu option. Reverse Word Sort (Ctrl+F6) This is so that you can sort words by suffix. The order is determined by word endings, not word -ing forms together. beginnings. You will therefore find all the Word Length Sort (Shift+Ctrl+F6) This is so that you can sort words by their length (1-letter, 2-letter, etc up to 50-letter words) Within a set of equal-length words, there's a second, alphabetical sort. Consistency Sort 263 Press the "Texts" header to re-sort the words according to their consistency . 208 252 72 419 , Editing entries ; See also: Concord sort ; Accented characters , KeyWords sort 81 Choosing Language 9.17 WordList and tags 311 to load it, you can get a word-list If you have defined a tag file and made the appropriate settings which treats tags and words separately as in this example, where the tag is viewed as if it were a prefix. A word list only of tags? WordList settings | What you see Choose whether you want only the tags, only the words or both in | Tags : December, 2015. Page 315</p> <p><span class="badge badge-info text-white mr-2">331</span> 316 WordList In its Alphabetical view, the list can be sorted on the tag or the word. To colour these as in the example, in the main Controller I chose Blue for the foreground for tags (as the default is a light grey). December, 2015. Page 316</p> <p><span class="badge badge-info text-white mr-2">332</span> 317 WordSmith Tools Manual Then in WordList, I chose View | Layout as in this screenshot, selected the Word column header and chose green below. December, 2015. Page 317</p> <p><span class="badge badge-info text-white mr-2">333</span> 318 WordList 9.18 WordList display Each WordList display shows · the word · its frequency · its frequency as a percent of the running words in the text(s) the word list was made from · the number of texts each word appeared in · that number as a percentage of the whole corpus of texts The Frequency display might look like this: Here you see the top 7 word-types in a word list based on 480 texts. There are 72,028 occurrences 426 of these words (tokens ) altogether but in the screenshot we can only see the first few. The Freq. column shows how often each word cropped up ( THE look s as if it appeared 72,010 times in the 480 texts), and the % column tells us that the frequency represents 6.07% of the running words in those texts. The Texts column shows that THE comes in 480 texts, that is 100% of the texts used for the word list. If we pull the Freq. column a little wider (cursor at the header edge then pull right) so that the 72,010 doesn't have any purple marks beside it, December, 2015. Page 318</p> <p><span class="badge badge-info text-white mr-2">334</span> 319 WordSmith Tools Manual we see the true frequency value is actually 172,010. Another thing to note is that there seems to be a word #, with over 50 thousand occurrences. . 
That represents a number or any word with a number in it such as EX658 The Alphabetical listing also shows us some of the words but now they're in alphabetical order. ABANDON comes 43 times altogether, and in 37 of the 480 texts (less than 8%). ABANDONED , on the other hand, not only comes more often (78 times) but also in more texts (14% of them). Now let's examine the statistics. December, 2015. Page 319</p> <p><span class="badge badge-info text-white mr-2">335</span> 320 WordList In all 480 texts, there are 72,028 word types (as pointed out above). The total running words is 2,833,815. Each word is about 4.57 characters in length. There are 107,073 sentences altogether, , there are only 1,571 different word types on average 26.47 words in length. In the text of a00.txt and that interview is under 7,000 words in length. This is explained in more detail in the Statistics 298 page. Finally, here is a screenshot of the same word list sorted "reverse alphabetically". In the part which we can see, all the words end in -IC . December, 2015. Page 320</p> <p><span class="badge badge-info text-white mr-2">336</span> 321 WordSmith Tools Manual To do a reverse alphabetical sort, I had the Alphabetical window visible, then chose Edit | Other in the menu. To revert to an ordinary alphabetical sort, press F6. sorts | Reverse Word sort 262 270 See also : Consistency , Lemmatisation December, 2015. Page 321</p> <p><span class="badge badge-info text-white mr-2">337</span> WordSmith Tools Manual Utility Programs Section X</p> <p><span class="badge badge-info text-white mr-2">338</span> 323 WordSmith Tools Manual 10 Utility Programs Besides the three main programs, there are more Tools that have arisen over the years; this Chapter explains them. Character Profiler lists characters used in your texts 405 7 like WordList but for sequences of characters CharGrams to find anomalous texts Corpus Corruption 7 Detector 8 File Utilities various utilities for managing files 8 File Viewer shows the innards of your text files 8 Minimal Pairs identifies similar words 354 prepares your corpora for different formats Text Converter shows translated texts Viewer and Aligner 379 12 finds and shows concgrams WSConcGram 10.1 Convert Data from Previous Versions 10.1.1 Convert Data from Previous Versions As WordSmith Tools develops, it has become necessary to store more data along with any given 81 word-list, concordance etc. For example, data about which language (s) were selected for a 29 now stored with every type of results file, etc. Therefore it has been concordance, notes necessary to supply a tool to convert data from the formats used in WS 1.0 to 3.0 (last millennium) to the new format for the current version. This is the Data Converting tool. If you try to open a file made with a previous version you should be offered a chance to convert it first. Note: as WordSmith develops, its saved data may get more complex in format. A concordance saved by WordSmith 5.0 cannot be guaranteed to be readable by WordSmith 4.0 for that reason, and a 6.0 one may require version 6.0, etc. December, 2015. Page 323</p> <p><span class="badge badge-info text-white mr-2">339</span> 324 Utility Programs 10.2 WebGetter 10.2.1 overview The point of it The idea is to build up your own corpus of texts, by downloading web pages with the help of a search engine. What you do Just type a word or phrase, check the language, and press Download . How it works WebGetter visits the search engine you specify and downloads the first 1000 sources or so. 
Basically it uses the search engine just as you do yourself, getting a list of useful references. Then it sends out a robot to visit each web address and download the web page in each case (not from the search engine's cache but from the original web-site). Quite a few robots may be out there searching for you at once -- the advantage of this is that one slow download doesn't hold all the others up. After downloading a web page, that WebGetter robot checks it meets your requirements (in Settings 325 ) and cleans up the resulting text. If the page is big enough, a file with a name very similar to the web address will be saved to your hard disk. When it runs out of references, re-visits the search engine and gets some more. WebGetter 326 325 328 , Limitations , Display See also: Settings December, 2015. Page 324</p> <p><span class="badge badge-info text-white mr-2">340</span> 325 WordSmith Tools Manual 10.2.2 settings Language Choose the language you require from the drop-down list. Search Engine The search engine box allows you to choose for example www.google.com.br for searches on Brazilian Portuguese or www.google.fr for French. That is a better guarantee of getting text in the language you require! Folder and Time-out December, 2015. Page 325</p> <p><span class="badge badge-info text-white mr-2">341</span> 326 Utility Programs where the texts are to be stored. By defaults it is the \wsmith5 · folder stemming from your c:\temp . The folder you specify will act as a root. That is, if you specify My Documents and search for "besteirol", results will be stored in . If you do another c:\temp\besteirol search on say "WordSmith Tools", results for that will go into c:\temp\WordSmith Tools . · WebGetter robot stops trying a given webpage if timeout: the number of seconds after which there's no response. Suggested value: 50 seconds. Requirements · minimum file length (suggested 20Kbytes): the minimum size for each text file downloaded from the web. Small ones may just contain links to a couple of pictures and nothing much else. goes through the · minimum words (suggested: 300): after each download, WebGetter downloaded text file counting the number of words and won't save unless there are enough. required words: you may optionally type in some words which you require to be present in · each download; you can insist they all be present or any 1 of these. Clean-up If you want all the HTML markup removed, you can check this box, setting a suitable span between < and > markers, 1000 recommended. Advanced Options If you work in an environment with a "Proxy Server", WebGetter will recognise this automatically and use the proxy unless you uncheck the relevant box. If in doubt ask your network administrator. You can specify the whole search URL and terms string yourself if you like with a box in the Advanced options. 328 326 , Limitations See also: Display display 10.2.3 As WebGetter works, it shows the URLs visited. If greyed out, they were too small to be of use or haven't been contacted yet. There is a tab giving access to a list of the successfully downloaded files which will show something like this. December, 2015. Page 326</p> <p><span class="badge badge-info text-white mr-2">342</span> 327 WordSmith Tools Manual Double-click a file to view and, if you like, edit it in Notepad. The URLS list looks like this December, 2015. Page 327</p> <p><span class="badge badge-info text-white mr-2">343</span> 328 Utility Programs Just double-click an URL to view it in your browser. 
See also: Settings, Limitations

10.2.4 limitations

Everything depends on the search engine and the search terms you use. The Internet is a huge noticeboard; lots of stuff on it is merely ads and catalogue prices etc. The search terms are collected by the search engines by examining terms inserted by the web page author. There is no guarantee that the web pages are really "about" the term you specify, though they should be roughly related in some way. Use the Settings to be demanding about what you download, e.g. in requiring certain words or phrases to be present.

See also: Display

10.3 Corpus Corruption Detector

10.3.1 Aim

The purpose is to check whether one or more of the text files in your corpus doesn't belong. This could be because
· it has got corrupted, so what used to be good text is now just random characters, or has got cut much shorter because of disk problems
· it isn't even in the same language as the rest of the corpus
The tool works in any language. It does it by using a known sample of good text (in whatever language) and comparing that good text with all your corpus.

See also: How to do it

10.3.2 How it works

1. Choose a set of "known good text files" which you're sure of. The program uses these to evaluate the others. When you click the button for known good text files, you can choose a number. You might choose 20 good ones so as to get a lot of information about what your corpus is like.
2. Choose your corpus head folder and check the "include sub-folders" box if your corpus spreads over that folder and sub-folders.
3. The program will anyway look out for oddities such as a text file which has holes in it, e.g. where the system thinks it's 1000 characters long but there are only 700.
4. If you check the "digraph check" box it will additionally check that the pairs of letters (digraphs) are of roughly the right frequency in each text file. For example there should be a lot of TH combinations if your text is in English, and no QF combinations. If you are working with a corpus in Portuguese and your text files are in Portuguese too, of course the digraphs will be different, and TH won't be frequent. The program ignores punctuation. (A small sketch of this kind of comparison is given at the end of this section.)
5. If you are doing a digraph check you can vary certain parameters, such as how much variation there may be between the frequencies of the digraphs (a sensible setting for "frequency variation per 1000" could be 30, in other words 3%), and "percent fail allowed" (which might be set at say 25 -- this means that up to 25% of the digraph pairs may be out of balance before an alert is sounded).
6. Press Start. You will see the progress bar moving forward. If you see a file-name in the top-left box, a click on it will indicate why it was found questionable. Double-clicking it will open up the text in the window below so you can examine it carefully.

Filenames of possibly corrupted texts are yellow if the basic check fails, and cream-coloured if the reason is a digraph mismatch. In the screenshot, PEN000884.txt is problematic because the file-size on disk is 2591 (there should be 2591 characters) but there are only 158, as shown in the status bar at the bottom. In the case of PEOP020151.txt, the text appears below (after double-clicking the list), and the status bar says the tool has found an imbalance in the digraphs. The text itself has a lot of blank space at the top but otherwise looks OK (it is supposed to be in Spanish), yet the detector has flagged it up as possibly defective.
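As promised above, here is a rough sketch of the sort of comparison a digraph check involves. It is an illustration only: WordSmith's own normalisation and thresholds are simply the settings described in the numbered list. The sketch builds a digraph profile per 1,000 digraphs from known-good text and counts how many digraphs in a candidate file stray further than the allowed variation; the file names are hypothetical.

from collections import Counter
import re

def digraph_profile(text):
    # Frequency of each letter pair per 1,000 digraphs, ignoring punctuation.
    letters = re.sub(r"[^A-Za-z]", "", text).upper()
    pairs = Counter(letters[i:i + 2] for i in range(len(letters) - 1))
    total = sum(pairs.values()) or 1
    return {p: n * 1000 / total for p, n in pairs.items()}

def looks_corrupted(good_text, candidate_text, variation_per_1000=30, percent_fail_allowed=25):
    good, cand = digraph_profile(good_text), digraph_profile(candidate_text)
    checked = set(good) | set(cand)
    if not checked:
        return False
    failures = sum(1 for p in checked
                   if abs(good.get(p, 0) - cand.get(p, 0)) > variation_per_1000)
    return failures * 100 / len(checked) > percent_fail_allowed

good = open("known_good.txt", encoding="utf-8").read()
candidate = open("suspect.txt", encoding="utf-8").read()
print("possibly corrupted" if looks_corrupted(good, candidate) else "looks OK")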
10.4 Minimal Pairs

10.4.1 aim

A program for finding possible typos and pairs of words which are minimally different from each other (minimal pairs). For example, you may have a word list which contains ALEADY 5 and ALREADY 461; that is, your texts contain 5 instances where there is a possible misprint and 461 which are correct. This program helps to find possible variants, typos and anagrams.

See also: requirements, choosing your files, output, rules and settings, running the program.

10.4.2 requirements

A word-list in text format. Each line should contain a word and its frequency, separated by tabs. You can make such a list using WordList: for example, select (highlight) the columns containing the word and its frequency and copy to the clipboard, then paste into Notepad, or save as TXT (without numbers or heading row).

See also: aim, choosing your files, output, rules and settings, running the program.

10.4.3 choosing your files

Choose your input word list (which must be in plain text format) by clicking the button at the right of the edit space and finding the word list .txt file. Type in an appropriate file-name for your results. Choose the rules too. When you're ready, press the Compute button. You'll then be asked to choose the columns and rows (allowing you to skip header lines or the number column if your txt file has those). Here, the first three lines are greyed out, so we need to alter the Rows box.

See also: aim, requirements, output, rules and settings, running the program.

10.4.4 output

An example of output is
418 ALTHOUGHT (7) ALTHOUGH(37975)
Here the lines are numbered, and the bracketed numbers mean that ALTHOUGHT occurred 7 times and ALTHOUGH 37,975 times.

An example using Dutch medical text, lower case:
136 aplasie (1) aplasia(1)[L]
137 apyogene (1) apyogeen(1)[S]
138 arachnoideales (1) arachnoidales(1)[I]
Here line 136 generated a 1-Letter difference, 137 a Swap and 138 an Insertion.

An example using Guardian newspaper text, looking for anagrams:
35 AUDIE (7) ADIEU(43)[A]
36 ABASS (6) ASSAB(16)[A]
37 AGUIAR (6) AURIGA(11)[A]
38 ALRED'S (6) ADLER'S(18)[A]
39 ANDOR (6) ADORN(128)[A]
(Another display format separates the alternatives with commas but does not show the rule and frequencies.)

See also: aim, requirements, choosing your files, rules and settings, running the program.
10.4.5 rules and settings

Rules
Insertions (abxcd v. abcd): this rule looks for 1 extra letter which may be inserted, e.g. HOWWEVER.
Swapped letters (abcd v. acbd): this rule looks for letters which have got swapped, e.g. HOVEWER.
1 letter difference (abcd v. abxd): this rule looks for a 1-letter difference, e.g. HOWEXER.
Anagrams too (abcd v. adbc): this rule looks for the same letters in a different order, e.g. HWVROEE.
(A short sketch illustrating these rules is given at the end of this Minimal Pairs section.)

Settings
end letters to ignore if at last letter: this allows you to specify any letters to ignore if at the end of the word, e.g. if you specify "s", the possibility of a typo when comparing ELEPHANT and ELEPHANTS will not be reported.
minimum word length: this specifies the minimum word length for the program to consider the possibility there is a typo. The default is 5, which means 4-letter words will be simply ignored. This is to speed up processing, and because most typos probably occur in longer words.
letters to ignore at start of word: this setting (default = 1) allows you to assume that when looking for minimal pairs there is a part of each word at the beginning which matches perfectly. For example, when considering ALEADY, the program probably doesn't need to look beyond words beginning with A for minimal pairs. If the setting is 1, it will not find BLEADY as a minimal pair. To check all words, take the setting down to 0. The program will be 26 times slower as a result!
only words starting with ...: if you choose this option, the program will ignore the next setting (max. word frequency). Here you can type in a sequence such as F,G,H; if so, the program will take all words beginning F or G or H (whatever their frequency) and look for minimal pairs based on the rules and settings above.
max. word frequency (ignored if "only words starting with" is checked): how frequent can a typo be? This will depend on how much text your word-list is based on. The default is 10, which means that any word which appears 11 times is assumed to be OK, not a typo.
Factory Defaults restores the default values.

See also: aim, requirements, choosing your files, output, running the program.

10.4.6 running the program

Press "Compute". You should then see your source text, with a few lines visible. Some of the rows and columns may be greyed and others white: move the column and row numbers till the real data are white and any headings or line-numbers are greyed out. Here the first three lines are greyed out, and that can be fixed by changing Rows from 4 to 1. Once you press OK the program starts. If you want to stop in the middle, press "Stop". You can press "Results" to see your results file when you have finished.

See also: aim, requirements, choosing your files, output, rules and settings.
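The four rules above are simple string comparisons, and a short sketch may make them concrete. This is an illustration of the logic only, not the Minimal Pairs program itself; it ignores the frequency, end-letter and start-letter settings.

def rule(word, other):
    # Classify a candidate pair: Insertion, Swapped letters, 1 Letter difference or Anagram.
    w, o = word.upper(), other.upper()
    if len(w) == len(o) + 1:
        if any(w[:i] + w[i + 1:] == o for i in range(len(w))):
            return "I"                      # e.g. HOWWEVER vs HOWEVER
    if len(w) == len(o):
        diffs = [i for i in range(len(w)) if w[i] != o[i]]
        if len(diffs) == 2 and w[diffs[0]] == o[diffs[1]] and w[diffs[1]] == o[diffs[0]]:
            return "S"                      # e.g. HOVEWER vs HOWEVER
        if len(diffs) == 1:
            return "L"                      # e.g. HOWEXER vs HOWEVER
        if sorted(w) == sorted(o):
            return "A"                      # e.g. AUDIE vs ADIEU
    return None

pairs = [("HOWWEVER", "HOWEVER"), ("HOVEWER", "HOWEVER"),
         ("HOWEXER", "HOWEVER"), ("AUDIE", "ADIEU"), ("ALREADY", "ALEADY")]
for a, b in pairs:
    print(a, b, rule(a, b))                 # I, S, L, A, I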
10.5 File Viewer

10.5.1 Using File Viewer
Aim
To help you examine files of various kinds to see what is in them. This might be in order
· to see whether they're really in plain text format
· to see whether there's something wrong with them, such as unusual characters which oughtn't to be there
· to see whether they were formatted for Windows, Mac or Unix
· to check out any hidden stuff in there (a Word .doc, for example, will have lots of hidden material you don't see on the screen but which is inside the file anyway, such as the name of the author, the type of printer being used, etc.)
· to find strings of words in a database, a spreadsheet or even a program file
· to get certain selected characters picked out in an easy-to-find colour.
Here you can see the gory details of the text. Some characters are highlighted in different colours so you can see exactly how the text is formatted.
Loading a text file
Choose your file -- if necessary click on the button at the right of the text-input box. Press Show.
Characters
The two options available are 1 byte or 2 bytes to represent each character-symbol in the text in question. You may need to alter this setting to see your text in a readable format.
The two windows
The left window shows how the "text" is built up. You can see each character as a number and, further to the right, as a character. The right window shows the text, paragraph by paragraph, word-wrapped so you can read it.
Searching
Just type in the search-word and press Search. The search is case sensitive and is not a "whole word" search.
Synch
Press the Synch button to synchronise the two displays. The display you clicked last is the "boss" for synchronising.
Settings
Colours
The colour grids let you see the number section in special colours, so you can find the potential problems you're interested in.
· First select the character you want coloured.
· Click the foreground or background colour list to change the colour.
The character names are Unicode names. In the screenshot the symbol with the 003E code (>) is the last one clicked.
Font
Choose the font and size in the font window. You may need to change font if you want to see Chinese etc. represented correctly.
Columns
· You can set the "hex" columns between 2 and 16.
· You can see the numbers at the left of the main window in hex or decimal.

10.6 File Utilities

10.6.1 index
This sub-program supplies a few file utilities for general use:
Compare Two Files
File Chunker
Find Duplicates
Rename
Find Holes: for "holes" in text files
Splitter
Joiner
Move files to sub-folder

10.6.2 Splitter
10.6.2.1 Splitter: index
Explanations
What is the Splitter utility and what's it for?
Filenames
Wildcards
See also: WordSmith Main Index

10.6.2.2 aim of Splitter
This is a sub-program for splitting large files into lots of small ones. Splitter needs to know:
Start/End of Section Separator
The symbol which will act as a start or end-of-text separator: e.g. </Text> or <end of story> or !# or [FF] or [FF?????] or [FF*] or CHAPTER #.
Restrictions:
1. The start/end-of-text marker must occur at the beginning of a line in the original large file.
2. It is case sensitive: </Text> will not find </text>.
3. The first character in the separator may not be a wildcard such as #, * or ?.
4. # and * may occur only once each in the separator.
Splitter will create a new file every time it encounters the start/end-of-text marker you've specified. The end of text box determines whether the line containing the separator gets included in the previous or the new text file.
Destination Folder
Where you want the small files to be copied to. (You'll need write permission to access it if it is on a network.)
Required sizes
The minimum and maximum number of lines that your small files can have (default = 5 and 30,000). Only files within these limits will be saved. This feature is useful for extracting files from very large CD-ROM files. A "line" means from one <Enter> to the next.
Bracket first line
Whether or not you want the first line of each new text file to be bracketed inside < > marks. (If your separator is a start-of-section separator like CHAPTER with a number, you may wish that to be in brackets. And often the first line after an end-of-text symbol will contain some kind of header.) If you don't want it to insert < and > around the line, leave this box unchecked.
Title Line
If you know that a given line of your texts always contains the title for the sub-text in question, set this counter to that number, otherwise leave it at 0. For example, where you know that every line immediately following <end of story> has a title for the next story, you could put 1. Example:
...
<end of story>
Visiting New York
...
The file-name created for each story will contain the title as well as a suitable number. In this example a file-name might end up as C:\texts\split\Visiting New York 0004.txt.
See also: Joiner, Filenames, Wildcards, The buttons, Text Converter index.

10.6.2.3 Splitter: filenames
Splitter will create lots of small files based on your large one(s). It creates filenames as sub-files of a folder based on the name of each text file. In the screenshot it has found a file called C:\temp\G_O\The Observer\2002\Home_news\Apr 07.txt and is creating a set of results listed 1 to 11 or more, using the specified destination folder plus the same folder structure as the original texts. Each sub-text is numbered 0001.txt, 0002.txt etc. Sub-folders are created if there are too many files for a folder. If a title is detected, each file will contain the title plus a number and .txt. If there is no title, the filename will be the number with .txt added as a file extension.
Tips
1. Splitter will start numbering at 1 each session.
2. Note that the small files will probably take up a lot more room than the original large file did. This is because the disk operating system has a fixed minimum file size. A one-character text file will require this minimum size, which will probably be several thousand bytes. Even so, I suggest you keep your text files such that each file is a separate text, by using Splitter. When doing word lists and key words lists, though, do them in batches.
3. CD-ROM files when copied to your hard disk may be read-only. You can change this attribute using Text Converter.
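The splitting logic itself is straightforward. The following Python sketch is not the Splitter program, only an illustration under simplifying assumptions: the separator is treated literally (no # or * wildcards), and the separator line is kept with the new file rather than offering the choice described above. All names are illustrative.

```python
# Rough sketch: split one large file into numbered sub-files whenever a
# separator occurs at the start of a line.
import os

def split_file(big_file, separator, destination, min_lines=5, max_lines=30000):
    os.makedirs(destination, exist_ok=True)
    stem = os.path.splitext(os.path.basename(big_file))[0]
    buffer, count = [], 0

    def flush(lines):
        nonlocal count
        if min_lines <= len(lines) <= max_lines:   # "required sizes" limits
            count += 1
            name = os.path.join(destination, "%s %04d.txt" % (stem, count))
            with open(name, "w", encoding="utf-8") as out:
                out.writelines(lines)

    with open(big_file, encoding="utf-8") as source:
        for line in source:
            if line.startswith(separator) and buffer:
                flush(buffer)      # separator starts a new text here
                buffer = []
            buffer.append(line)
    if buffer:
        flush(buffer)
    return count

# e.g. split_file("stories.txt", "<end of story>", r"C:\texts\split")
```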
10.6.2.4 Splitter: wildcards
The hash symbol, #, is used as a wildcard to represent any number, so [FF#] would find [FF3] or [FF9987] but not [FF] or [FF 9] (because there's a space in it) or [FFhello].
The asterisk * represents any string, so [FF* would find all of the above; * is used as the last character in the end-of-text symbol, and it matches [FF followed by anything at all up to the next <Enter>.
The ^ mark represents any single letter, so [FF^^] would find [FFZQ] but none of the others.
The question mark ? represents any single character (including spaces, punctuation and letters), so [FF??] would find [FF 9] in the above examples, but none of the others.
To represent a genuine #, ^, ? or *, put each one in double quotes, e.g. "#" "^" "?" "*".
See also: Settings, Wildcards

10.6.3 join text files
This is a sub-program for joining small text files into bigger ones. You might want this because you aren't interested in the different texts individually but are only interested in studying the patterns of a whole lot of texts. When you choose Joiner you will see something like this:
End of text marker
The symbol which will act as an end-of-text separator: e.g. </Text> or <end of story> or !# or [FF] or [FF*] or [FF?????]. The end-of-text marker will come at the beginning of a line in the large file. If it includes # this will be replaced by the number of the text as the texts are processed.
Folder with files to join
Where the small files you want to be merged are now. They will not get deleted -- you must merge them into the Destination folder.
and sub-folders too
Check this if you want to process sub-folders of the "folder with files to join".
file specifications
The kinds of text files you want to merge, e.g. *.* or *.txt or *.txt;*.ctx.
Destination Folder
Where you want the small files to be copied and merged to. (You'll need write permission to access it if on a network.)
recreate same sub-folders as source
If checked, creates the same structure as in the source. In the example, all the sub-folders of d:\text\guardian_cleaned will be created below d:\text\guardian_joined.
one text for each folderful
If checked, a whole folderful of source texts will go into one text file in the destination.
Max. size (Kbytes)
The maximum size in kilobytes that you want each merged text file to be. 1000 means you will get almost 1 megabyte of text into each. That is about 150,000 words if there are no tags and the text is in English. This only applies if one text for each folderful isn't checked.
Stop button
Does what it says on the caption.
See also: Splitter, Text Converter index.

10.6.4 compare two files
The point of it
The idea is to be able to check whether 2 files are similar or not. You may often make copies of files and a few weeks later cannot remember what they were. Or you have used File Chunker to copy a big file to floppies and want to be sure the copy is identical to the original. This program checks whether
a) they are the same size
b) they have the same contents (it goes through both, byte by byte, checking whether they match)
c) they have the same attributes (file attributes can be "read only" [you cannot alter the file], "system" [a file which Windows thinks is central to your operating system], "hidden" [one which is so important that Bill Gates may be reluctant to even let you know it exists on your disk])
d) they have the same time & date.
How to do it
Specify your 2 files and simply press "Compare".
See also: file chunker, find duplicates, rename
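For readers who want to script the same kind of check, here is a small Python sketch of points a), b) and d) above (file attributes are left out for simplicity); it is an illustration, not the utility's own code.

```python
# Compare two files by size, byte-by-byte contents and modification date.
import os, filecmp

def compare_two_files(file_a, file_b):
    stat_a, stat_b = os.stat(file_a), os.stat(file_b)
    return {
        "same size": stat_a.st_size == stat_b.st_size,
        # shallow=False forces a byte-by-byte comparison of the contents
        "same contents": filecmp.cmp(file_a, file_b, shallow=False),
        "same date": int(stat_a.st_mtime) == int(stat_b.st_mtime),
    }

# e.g. compare_two_files(r"C:\texts\a.txt", r"D:\backup\a.txt")
```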
10.6.5 file chunker
The point of it
The idea is to be able to cut up a big file into pieces, so that you can copy it in chunks e.g. for emailing. Naturally, you may later want to restore the chunks to one file.
How to do it: to copy a file
1. Specify your "file to chunk" (the big one you want to copy)
2. Specify your "drive & folder" (where you want to copy the chunks to)
3. Specify the "size of each chunk"
4. Specify whether to "compress while chunking" (compresses the file as it goes along)
5. Press "Copy".
How to do it: to restore a file
1. Specify your "first chunk" (the first chunk you made using this program)
2. Specify which folder to "restore to" (where you want the results)
3. Specify whether to "delete chunks afterwards" (if they are not needed)
4. Press "Restore".
See also: compare two files, find duplicates, rename

10.6.6 find duplicates
The point of it
The idea is to be able to check whether you have files with the same name in different folders. You may often make copies of files and a few weeks later cannot remember where they were. By default this program only checks whether the files it is comparing have the same name, but dates and file-size can be compared too. It handles lots of folders, the point being to locate unnecessarily duplicated files or confusing re-use of the same filenames.
How to do it
Specify your Folder 1 and simply press "Search". Find Duplicates will go through that folder and any sub-folders and will report any duplicates found. Or you can specify 2 different folders (e.g. on different drives) and the process compares one set with the other.
Sub-folders to exclude
Useful if there are some sub-folders you know you're not interested in. In the example below, any folder whose name ends _old or _shibs, or whose name is demo or examples, will be ignored, as will any sub-folder below it.
In the results window you will find all the duplicates listed with the folder and date. In the example we can see there are two files called ambassador 1.txt in different shakespeare folders.
See also: compare two files, file chunker, rename
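A sketch of the same idea in Python, assuming (as the default described above) that "duplicate" means "same file-name", with an optional size check and a set of excluded sub-folder names; the function name and arguments are illustrative only.

```python
# Group files sharing a name (optionally also a size) across one or two folder trees.
import os
from collections import defaultdict

def find_duplicates(folder1, folder2=None, check_size=False, exclude=()):
    groups = defaultdict(list)
    for root_folder in filter(None, (folder1, folder2)):
        for root, dirs, files in os.walk(root_folder):
            # skip any sub-folders the user wants ignored, e.g. "demo"
            dirs[:] = [d for d in dirs if d not in exclude]
            for name in files:
                path = os.path.join(root, name)
                key = (name.lower(), os.path.getsize(path)) if check_size else name.lower()
                groups[key].append(path)
    return {key: paths for key, paths in groups.items() if len(paths) > 1}

# e.g. find_duplicates(r"C:\texts\shakespeare", exclude={"demo", "examples"})
```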
10.6.7 rename
The point of it
To rename a lot of files at once, in one or more folders. You may have files with excessively long names which do not suit certain applications. Or it is a pain to rename a lot of files one by one. The idea is to rename a set of files with a standard name plus a number. For example, suppose you have downloaded a lot of text files containing emails from the Enron scandal: you could rename them Enron001.txt, Enron002.txt etc.
How to do it
Specify your Folder, whether sub-folders will also be processed, and the kinds of file you want to find for renaming. In the screenshot, *.txt;*.xml has been specified, which means all .txt files and all .xml files, and Find Files has been pressed; in the list you can see some of each. If you typed baby??.doll you'd get all files with the .doll ending as long as the first 4 characters were baby, as in baby05.doll, babyyz.doll, etc.
Now specify a "mask for new name" and a starting number. The mask can end with a series of # characters standing for numbers. In the screenshot there are 4 # symbols, so after pressing Rename the texts have been renamed Bacon plus an incrementing number formatted to 4 digits.
See also: compare two files, find duplicates, file chunker

10.6.8 move files to sub-folders
This function allows you to take a whole set of files in a folder and move them to suitable sub-folders.
Example: in c:\temp you have
2001 Jan.txt
2001 Feb.txt
2003 Jan.txt
2003 Feb.txt
2003 March.txt
2003 Oct.txt
etc. and you want them sorted by year into different folders. Using the template AAAA* you will take the first four characters of your files and place each file into a sub-folder named appropriately.
Results: c:\temp\2001 contains 2001 Jan.txt and 2001 Feb.txt, and all the others are in c:\temp\2003.
Syntax
? = ignore this character
A = use this character in the file-name
* = use no further characters in the file-name

10.6.9 dates and times
Purpose
The aim here is to parse your file-names identifying suitable textual file dates and times, where you have incorporated something suitable in the file-name. Suitable dates can be re-used by saving file-choices as favourites.
Mask Syntax
The procedure reads any file-names in the Folder to process (and optionally its sub-folders) and attempts to parse them. If an indicator is found it will record a suitable date combination. Suitable indicators of textual date are
YY or YYYY: year, two or four digits (YY = a 20th Century date)
MM: month
DD: day
*: skip all characters until a digit is found
The procedure doesn't understand words such as "December" or "Five", it only uses digits. Any character other than Y, M, D, * in the mask simply gets ignored.
Output
The program will always add each entry found to a simple text file (File for list of dates), listing its file-name and adding a suitable date as expected in the auto-date procedure (or <no date found> if the mask didn't match a valid date). In addition, where the result is 1st January 1980 or later, it will set the file's time and date in the operating system to the date as parsed, so that WordSmith will automatically match the date of the text contents to the date stored on disk. When all files have been processed, the program opens the list of files in Notepad or equivalent. Use it afterwards in the auto-date procedure within file-choosing and save your preferred text files as favourites.
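Before the worked examples below, here is a simplified Python sketch of the mask idea just described. It is not the program's own parser: it only handles * at the start of the mask, treats every other non-Y/M/D symbol as consuming one file-name character, and falls back on today's date for missing fields, which is roughly the behaviour the examples illustrate.

```python
# Read YYYY, MM and DD digits out of a file-name according to a mask.
import datetime, re

def parse_filename_date(filename, mask):
    name = filename
    if mask.startswith("*"):                      # * = skip to the first digit
        match = re.search(r"\d", name)
        name = name[match.start():] if match else ""
        mask = mask[1:]
    fields = {"Y": "", "M": "", "D": ""}
    pos = 0
    for symbol in mask:
        if pos >= len(name):
            break
        if symbol in "YMD" and name[pos].isdigit():
            fields[symbol] += name[pos]
        pos += 1                                   # every mask character consumes one character
    today = datetime.date.today()
    year = int(fields["Y"]) if fields["Y"] else today.year
    if len(fields["Y"]) == 2:                      # YY = a 20th-century date
        year += 1900
    month = int(fields["M"]) if fields["M"] else today.month
    day = int(fields["D"]) if fields["D"] else today.day
    try:
        return datetime.date(year, month, day)
    except ValueError:
        return None                                # mask didn't match a valid date

# parse_filename_date("841231.txt", "YYMMDD")  ->  1984-12-31
```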
Examples
Your Mask | Source file | Date and Time interpreted
YYYYMMDD | 20060512 Peter monologue.txt | 12th May 2006 (first 8 characters used in the mask)
YYMMDD | 841231.txt | 31st December 1984 (20th Century assumed if the YY mask is used)
DDMMYY | 311284.txt | 31st December 1984
DDMMYYYY | 20060512 Peter monologue.txt | 20th June, the year 512 AD
DDMM | 20060512 Peter monologue.txt | 20th June of the current year
######YYYYMMDD | Peter 20060512.txt | 12th May 2006 (the first six characters were ignored: five for Peter, one for the space)
*YYYY | Peter 20060512.txt | 15 July 2006 (all characters up to the first digit skipped, then the next 4 used for the year)
YYYY | 1086 Domesday book.txt | 15 July 1086 (there were only four digits)
YYYYMMDD | 1086 Domesday book.txt | 15 July 1086 (the mask had 8 digits but the file-name only 4)
YYYY#MM#DD | 2006,05/12,10-54.txt | 12th May 2006
YYYY MM DD | 2006,05/12,10-54.txt | 12th May 2006

10.6.10 find holes in texts
After text files have been copied from one source to another, they may get slightly corrupted with "holes" in the stream of text. This utility lets you seek out the texts in your corpus which have got corrupted in this way and optionally lets you delete them. If you want to convert the holes to space-characters, use the Text Converter.

10.7 Text Converter

10.7.1 purpose
This program does a "Search & Replace" on virtually any number of files. It is very useful for going through large numbers of texts and re-formatting them as you prefer, e.g. taking out unnecessary spaces, ensuring only paragraphs have <Enter> at their ends, changing accented characters, ensuring you have Windows £ symbols, etc.
converting text
For a simple search-and-replace you can type in the search item and a replacement; for more complex conversions, use a Conversion File so that Text Converter knows which symbols or strings to convert. It operates under Windows and saves using the Windows character set, but will convert text using DOS or Windows character sets. You can use it to make your text files suitable for use with your Internet browser. It does a "search and replace" much as in word-processors, but it can do this on lots of text files, one after the other, and it can replace any number of strings, not just one.
Once the conversion file is prepared and the Settings specified, the Text Converter will read each source file and either create a new version or replace the old one, depending on the over-write setting. You will be able to see the details of how many instances of each string were found and replaced overall.
filtering files
And/or you may need to make sure texts which meet certain criteria are put into the right folders.
Tip
The easiest way to ensure your text files are the way you want, especially if you have a very large number to convert, is to copy a few into a temporary folder and try out your conversion file with the Text Converter. You may find you've failed to specify some necessary conversions. Once you're sure everything is the way you want it, delete the temporary files.
See also: Text Converter Contents, The buttons

10.7.2 index
Explanations
What is the Text Converter and what's it for?
Getting Started...
Convert the text format
Filters
Sample Conversion File
Syntax
Conversion File
See also: WordSmith Main Index

10.7.3 settings
Files
1. Choose Files (the top left tab). Decide whether you want the program to process sub-folders of the one you choose. There is no limit to the number of files Text Converter can process in one operation.
2. Click on the Conversion or Filters tab.
3. Decide whether you want to make copies of the text files, or to over-write the originals. Obviously you must be confident of the changes to choose to over-write; copying however may mean a problem of storage space. Choose between "Within files", "Whole files" or "Extract from files".
Within files = make some alterations to specific words in each text file, if found. Specify what to convert, that is the search-words and what you want them to be replaced with. For a quick conversion you can simply type in a word you want to change and its replacement (e.g. just one change so that responsable becomes responsible), or you can choose your own pre-prepared Conversion File.
Whole files = make some alterations affecting all the words in each text file. In the Whole Files section you can choose simply to update legacy files in various ways, e.g. by choosing Dos to Windows, Unix to Windows, MS Word .doc to .txt, into Unicode, etc.
Or if you want simply to extract some text from your files, you should choose the Extract from files tab.
If you might want some files not to be converted, or simply don't want any conversions but instead to place files in appropriate sub-folders, choose the Filters tab at the top.
If you choose Over-write Source texts, Text Converter will work more quickly and use less disk space, but of course you should be quite sure your conversion file codes are right before starting! See copy to for details of how the folders get replicated in a copy operation. Note that some space on your hard disk will be used even if you plan to over-write: the conversion process does its work, then if all is well the original file is deleted and the new version copied. There has to be enough room in the destination folder for the largest of your new files; it is much quicker for it to be on the same drive as the source texts. If it isn't, your permission will be asked to use the same drive.
inserting <Tab>, <Enter> etc.
Choose the item in the listbox and drag it to one of the windows to left or right of ->. The string inserted will conform to the conversion-file format.
cutting out a header from each file
It can be useful to get a header removed. In the screenshot example, any text which contains </teiHeader> will get all the beginning of the file up to that point cut out.
Press OK to start; you will see a list of results. If you want to stop Text Converter at any time, click on the Stop button or press Escape. Right-click to see the source or the converted result file.
See also: Text Converter Contents

10.7.3.1 Text Converter: copy to
If you choose to copy the files you are converting, instead of converting or filtering them in place (which is a lot safer), the new files created will be structured like this.
Suppose you are processing d:\texts\2007\literature and copying to c:\temp, and suppose d:\texts\2007\literature contains this sort of thing:
d:\texts\2007\literature\shakespeare\hamlet.pdf
d:\texts\2007\literature\shakespeare\macbeth.pdf
...
d:\texts\2007\literature\shakespeare\poetry\sonnet1.pdf
d:\texts\2007\literature\shakespeare\poetry\sonnet2.pdf
...
d:\texts\2007\literature\french\victor hugo\miserables.pdf
d:\texts\2007\literature\french\poetry\baudelaire\le chat.pdf
...
you will get
c:\temp\shakespeare\hamlet.txt
c:\temp\shakespeare\macbeth.txt
...
c:\temp\shakespeare\poetry\sonnet1.txt
c:\temp\shakespeare\poetry\sonnet2.txt
...
c:\temp\french\victor hugo\miserables.txt
c:\temp\french\poetry\baudelaire\le chat.txt
...
In other words, for each file successfully converted or filtered, any directory structure beyond the starting point (d:\texts\2007\literature in the example above) will get appended to the destination.

10.7.4 extracting from files
The point of it...
The idea is to be able to extract something useful from within larger files. In the example below, I wanted to extract the headlines only from some newspaper text. I knew that the header for each text contained <DAT> (date of publication mark-up) and that the headline ended </HED>, and I wanted only those chunks which contained the phrase Leading article:. The results I got looked like this:
<CHUNK "1"><DAT>05 August 2001</DAT> <SOU>The Observer</SOU> <PAG>26</PAG> <HED>Comment: Leading article: Ealing's lessons: Time for steel from the peacemakers</HED></CHUNK>
<CHUNK "2"><DAT>05 August 2001</DAT> <SOU>The Observer</SOU> <PAG>26</PAG> <HED>Comment: Leading article: The free market can't house us all: Why Government has to intervene</HED></CHUNK>
<CHUNK "3"><DAT>05 August 2001</DAT> <SOU>The Observer</SOU> <PAG>26</PAG> <HED>Comment: Leading article: What a turn-on: Cat's whiskers are the bee's knees</HED></CHUNK>
Settings
containing: all non-blank lines in this box will be required. Leave it blank if you have no requirement that the chunk you want to extract contains any given word or phrase.
chunk marker: leave blank, otherwise each chunk will be marked up as in the example above, if it begins with < and ends with >. The reason for this marker is to enable subsequent splitting.
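The extraction just described boils down to finding every stretch between a start and an end marker and keeping only those containing a required phrase. A hedged Python sketch of that idea (illustrative names, not the Text Converter's code):

```python
# Pull out numbered chunks running from a start marker to an end marker,
# keeping only chunks which contain a required phrase.
import re

def extract_chunks(text, start_marker, end_marker, containing="", label="CHUNK"):
    results = []
    pattern = re.compile(re.escape(start_marker) + r".*?" + re.escape(end_marker),
                         re.DOTALL)
    for match in pattern.finditer(text):
        chunk = match.group(0)
        if containing and containing not in chunk:
            continue
        results.append('<%s "%d">%s</%s>' % (label, len(results) + 1, chunk, label))
    return results

# e.g. extract_chunks(open("observer.txt", encoding="utf-8").read(),
#                     "<DAT>", "</HED>", containing="Leading article:")
```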
10.7.5 filtering: move if
This function allows you to specify a word or phrase, look for it in each file, and if it's found move that file into a new folder.
The point of it...
Suppose you have a whole set of files, some of which contain dialogues between Pip and Magwich, others containing references to the Great Wall of China or the anatomy of fleas. You want those with the Pip-Magwich dialogues, and you want them to go into a folder called Expectations.
How to do it
1. Click on the Filters tab (at the top).
2. Check the Activated checkbox.
3. Specify a word or phrase the text must contain. This is case sensitive. In this case Magwich has been specified.
4. Choose whether that word or phrase has to be found
· anywhere in the text,
· anywhere before some other word or phrase, or
· between 2 different words or phrases.
5. Decide what happens if the conditions are met:
· nothing, i.e. ignore that text file
· copy to a certain folder, or
· move to that folder, or
· delete the file (careful!).
Action options
You can also decide to build sub-folder(s) based on the word(s) or phrase(s) you chose in #3. (The idea is to get your corpus split up into useful sub-folders whose names mean something to you.) If build sub-folder is not checked, everything goes into the copy to or move to folder.
And you may have the program add .txt (useful if, as with the BNC World Edition, there are no file extensions) and/or convert the file to Unicode. You could also have any texts not containing the word Magwich copied to a specified folder.
The load BNC World and load BNC XML buttons are specific to those two editions of the BNC and read text files with similar names which you will find in your Documents\wsmith6 folder.
See also: Text Converter Contents

10.7.6 Convert within the text file
Your choices here are 5:
1. cut out a header, and/or
2. make one change only
3. insert numbering
4. replace some problem characters
5. use a script to determine a whole set of changes. There is a sample to see.

10.7.6.1 conversion file
Prepare your Text Converter conversion file using a plain text editor such as Notepad. You could use Documents\wsmith6\convert.txt as a basis. If you have accented characters in your original files, use the DOS editor to prepare the conversion file if they were originally written under DOS, and a Windows editor if they were written in a Windows word-processor. Some Windows word processors can handle either format.
There can be any number of lines for conversion, and each one can contain two strings, delimited with " " quotes, each of up to 80 characters in length. The Text Converter makes all changes in order, as specified in the Conversion File. Remember one alteration may well affect subsequent ones.
Alterations that increase the original file
Most changes reduce the size of an original. But Text Converter will cope even if you need to increase the original file -- as long as there's disk space!
Tip
To get rid of the <Enter> at line ends but not at paragraph ends, first examine your paragraph ends to see what is unique about them. If, for example, paragraphs end with two <Enter>s, use the following lines in your conversion file:
"{CHR(13)}{CHR(10)}{CHR(13)}{CHR(10)}" -> "{%%}"
(this line replaces the two <Enter>s with {%%}; it could be any other unique combination, and it'll be slightly faster if you make the search and the replacement the same length, as in this case, 4 characters)
"{CHR(13)}{CHR(10)}" -> " "
(this line replaces all other <Enter>s with a space, to keep words separate)
"{%%}" -> "{CHR(13)}{CHR(10)}{CHR(13)}{CHR(10)}"
(this line replaces the {%%} combination with <Enter><Enter>, thus restoring the original paragraph structure)
/S
(this line cuts out all redundant spaces)
See also: sample conversion file, syntax, Text Converter Contents

10.7.6.2 syntax
The syntax for a Conversion File is:
· Only lines beginning / or " are used. Others are ignored completely.
· Every string for conversion is of the form "A" -> "B". That is, the original string, the one you're searching for, enclosed in double quotes, is followed by a space, a hyphen, the > symbol, and the replacement string.
· You can use " (double quotes) and hyphen where you like in your search or replace string without any need to substitute them, but for obvious reasons there must not be a sequence like " -> " inside a string.
Removing all tags
To remove all tags, choose "<*>" -> "" as your search string.
Control Codes
Control codes can be symbolised like this: {CHR(xxx)} where xxx is the number of the code. Examples: {CHR(13)} is a carriage-return, {CHR(10)} is a line-feed, {CHR(9)} is a tab. <Enter>, which comes at the end of paragraphs and sometimes at the end of each line, is a carriage-return followed immediately by a line-feed; to represent it you'd type {CHR(13)}{CHR(10)}. Use {CHR(34)} if you need to refer to double inverted commas. See search-word syntax for more.
Wildcards
The search uses the same mechanism that Concord uses. You may use the same wildcards as in Concord search-word syntax. By default the search-and-replace operates on whole words. Examples:
"book" -> "bk" will replace book with bk but won't replace books or textbook
"*book" -> "bk" will replace book or textbook with bk but won't replace books or textbooks
"book*" -> "bk" will replace book or books with bk but won't replace textbook or textbooks
To show a character is to be taken literally, put it in quotes (e.g. "*", "<"). See below for use of the /L parameter.
Unbounded, case Insensitive, Confirm, redundant Spaces, redundant <Enter>s
/C stops to confirm you wish to go ahead before each change.
/U does an unbounded search, ensuring the alteration happens whether there's a word separator on either side or not (/U "the" finds the but also finds other, then and bathe).
/I does a case insensitive search (/I "restaurant" -> "hotel" replaces RESTAURANT with HOTEL and Restaurant with Hotel, i.e. respecting case as far as possible).
You can combine these, e.g. /IC "the" -> "this"
/S cuts out all redundant spaces. That is, it will reduce any sequence of two or more spaces to one, and it also removes some common formatting problems such as a lone space after a carriage-return or before punctuation marks such as . , ; and ). /S can be used on a line of its own or in combination with other searches.
/E cuts out all redundant <Enter>s. That is, it will reduce any sequence of two or more carriage-return + line-feeds (what you get when you press Enter or Return) to one. /E can be used on a line of its own or in combination with other searches.
/L means both the search and replace strings are to be taken as literal. (Normally a sequence like <#*> would need quotes around each character, because < and > are mark-up signals and # and * are special wildcard characters, thus "<""#""*"">", which is tricky! Put /L at the start of the line to avoid this.)
See Documents\wsmith6\convert.txt to see examples in use.
See also: Text Converter Contents
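To make the mechanics concrete, here is a deliberately simplified Python sketch of reading such a script and applying it to one text. It is not the Text Converter's parser: only plain "A" -> "B" lines, {CHR(nnn)} codes and the /S rule are handled, and the wildcard, /U, /I and /C options described above are left out.

```python
# Apply a stripped-down conversion file to a piece of text.
import re

def decode(text):
    # turn {CHR(13)}{CHR(10)}-style codes into real characters
    return re.sub(r"\{CHR\((\d+)\)\}", lambda m: chr(int(m.group(1))), text)

def parse_conversion_file(path):
    rules = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line == "/S":
                rules.append(("/S", None))
            elif line.startswith('"') and '" -> "' in line:
                search, replace = line.split('" -> "', 1)
                rules.append((decode(search.strip('"')), decode(replace.rstrip('"'))))
    return rules

def convert(text, rules):
    for search, replace in rules:
        if search == "/S":
            text = re.sub(r" {2,}", " ", text)      # cut redundant spaces
        else:
            text = text.replace(search, replace)    # changes applied in file order
    return text
```

As in the real conversion file, rules are applied in order, so an early change can affect later ones.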
10.7.6.3 sample conversion file
You could copy all or part of this to the clipboard and paste it into Notepad.
[ comment line -- put whatever you like here, it'll be ignored ]
[ first a spelling correction ]
"responsable" -> "responsible"
[ now let's change brackets from < > to [ ] and { } to ( ) ]
"*<*" -> "["
"*>*" -> "]"
"*{*" -> "("
"*}*" -> ")"
/S
[ that will clear all redundant spaces ]
The file Documents\wsmith6\convert.txt is a sample conversion file for use with British National Corpus text files.
See also: Text Converter Contents

10.7.7 Convert format of entire text files
To convert a series of whole text files from one format to another, choose one or more of these options. They allow you to convert into formats which will be suited to text processing.
into Unicode:
... this is a better standard than ASCII or ANSI as it allows many more characters to be used, suiting lots of languages. See Flavours of Unicode.
TXT file extensions:
... makes the filename end in .txt (so that Notepad will open without hassling you; Windows was baffled by the empty filenames of the BNC editions prior to the XML edition). If you choose this you will be asked whether to force .txt onto all files regardless, or only onto ones which have no file extension at all.
curly quotes etc.:
... changes any curly single or double quote marks or apostrophes into straight ones, ellipses into three dots, and dashes into hyphens. (Microsoft's curly apostrophes differ from straight ones.)
removing line-breaks
... replaces every end-of-line line-break with a space. Preserves any true paragraph breaks, which you must ensure are defined (default = <Enter><Enter>, in other words two line-breaks one after the other with no words between them).
See also: Mark-up, Word/Excel/PDF, convert within text files, non-Unicode, Guide to handling the BNC documents

10.7.7.1 Mark-up changes
removing all tags
would convert
The<DT><the> TreeTagger<NP><TreeTagger> is<VBZ> ...
into
The TreeTagger is ...
It can plough through a copy of the whole BNC, for example, and make it readable. If you have specified a header string it will cut the header up to that point too. It uses the selected span when looking for the next > after it finds a <.
word_TAG to <TAG>word
The Helsinki corpus can come tagged like this (COCOA tags):
the_D occasion_N of_P her_PRO$ father's_N$ death_N
and this conversion procedure will change it to
<D>the <N>occasion <P>of <PRO$>her <N$>father's <N>death
Note: this procedure does not affect underscores within existing <> markup.
word_TAG to word<TAG>
converts text like
It_PP is_VBZ easy_JJ
or Stanford Log-linear POS tagger output like
It/PP is/VBZ easy/JJ
to
It<PP> is<VBZ> easy<JJ>
You will have to confirm which character, such as _ or /, divides the word from the tags. Note: before it starts, it will clear out any existing <> markup.
swap tag and word
converts text like
It<PP> is<VBZ> easy<JJ>
to
<PP>It <VBZ>is <JJ>easy
or vice-versa -- in other words swapping the order of tags and words. The procedure effects a swap at each space in the non-tagged text sequence.
Any tags which do not qualify a neighbouring word but, for example, a whole sentence or a paragraph should not be swapped, so fill in the box to the right with any such tags, using commas to separate them, e.g. <s>,</s>,<p>,</p>
from column tagged
The Stuttgart Tree Tagger produces output like this, separating 3 aspects of each word (word, pos, lemma) with a <tab>:
The	DT	the
TreeTagger	NP	TreeTagger
is	VBZ	be
easy	JJ	easy
to	TO	to
use	VB	use
.	SENT	.
You will need to supply a template for your conversion. Template syntax and examples:
1. Any number in the template refers to the data in that column number. (The is in column 1 above, DT in column 2 of the original.)
2. Only columns mentioned in the template get used in the final output.
3. Separate columns in your template with a / slash.
4. You can add letters and symbols if you like.
5. A space will get added after each line of your original.
Examples:
· the template 1/<3>/<2> will produce, with the cases above, The<the><DT> TreeTagger<TreeTagger><NP> is<be><VBZ> etc.
· the template <POS="2">/1 will produce <POS="DT">The <POS="NP">TreeTagger <POS="VBZ">is etc.
It will present the text as running text, no longer in columns, but with a break every 80 characters.
entities to characters
... converts HTML or XML symbols which are hard to read, such as &eacute;, to ones like é. Specify these in a text file. There is a sample file pre-prepared for you, html_entities.txt, in your Documents\wsmith6 folder; look inside and you'll see the syntax.
XML simplification
The idea is to remove any mark-up in XML data which you really do not wish to keep. For example, in the BNC XML edition you might wish to keep only the pos="*" mark-up and remove the c5 and hw attributes. To do so, press the Options button and specify what to keep and what to remove; the result is a saved XML file with the unwanted attributes gone. The procedure simply looks for all sections which begin and end with the required strings and deletes any sections in between which contain the strings you specify in the "remove these" section. No further account of context is taken. Note that the order of attributes is not important, so we could have specified c5="*" first.
See also: Convert Entire Texts

10.7.7.2 Word, Excel, PDF
from MS Word or Excel to .txt
This is like using "Save as Text" in Word or Excel. It handles .doc, .docx (Office 2007) and .xls files.
from PDF
... into plain text. Not guaranteed to work with every .PDF, as formats have changed and some are complex. Converting PDFs to plain text can be extremely tricky even if you own a licensed copy of the Adobe software (Adobe themselves created the PDF format in 1993). That is because a PDF is a representation of all the dots, colours, shapes and lines in a document, not a long string of words; with an image of the text it can be very hard to determine the underlying words and sentences. A second problem is that PDFs can be set with security rights preventing any copying, printing, editing etc. Other formats (.TXT, .DOC, .DOCX, .XML, .HTML, .RTF etc.) are OK in principle as they do not contain only an image but also store within themselves the words and sentences.
See also: Convert Entire Texts

10.7.7.3 non-Unicode Text
Codepage conversion
This allows you to convert 1-byte based formats, for example from Chinese Big5 or GB2312, Japanese ShiftJis or Korean Hangul, to Unicode.
See also: Convert Entire Texts

10.7.7.4 Other changes
Unix to Windows
Unix-saved texts don't use the same codes for end-of-paragraph as Windows-saved ones.
encrypting using
... allows you to encrypt your text files. You supply your own password in the box to the right. When WordSmith processes your text files, e.g. when running a concordance, it will restore the text as needed, but otherwise the text will be unintelligible. Encrypted files get the file extension .WSencrypted: for example, if your original is wonderful.txt the copy will be wonderful.WSencrypted. Requires the safer "copy to" option above to be selected.
lemmatising using
... converts each file using a lemma file. If for example your source text has "she was tired" and your lemma file has BE -> AM, WAS, WERE, IS, ARE, then you will get "she be tired" in your converted text file. Where your source text has "Was she tired?" you'll get "Be she tired?"
SRT Transcripts
converts SRT files such as those obtained from the TED Open Translation Project. If using TED files you may need to add some seconds for the standard TED lead-in.
Example
Text files in English (.en), Spanish (.es), Italian (.it) and Japanese (.ja) originally downloaded got converted this way. Note that the converted file sizes are bigger (converted into Unicode) and the file-names no longer have two dots. To enable Concord to play the .mp4 file I had to change EloraHardy_2015-480p.mp4 to the same title, Elora Hardy Magical houses made of bamboo.mp4. This is so that Concord will find a match between its file-name and the transcripts in these 4 languages.
See also: Convert Entire Texts

10.7.8 Text Converter: converting BNC XML version
The British National Corpus is a valuable resource but has certain problems as it comes straight off the cdrom:
· it is in Unix format
· it has entities like &eacute; to represent characters like é
· its structure is opaque and file-names mean nothing
You will find it much easier to use if you
· convert it to Unicode
· filter the files to make a useful structure
as explained at http://lexically.net/wordsmith/Handling_BNC/index.html
The easiest way to do that is in two stages.
Conversion: after choosing the texts and the conversion options, press OK and confirm; after the work is done you will see the BNC texts copied to a similar structure (in our case stemming from j:\temp).
Filter: choose the converted texts in the first window, de-activate conversion, and choose the filtering options. Eventually you should get folder structures reflecting your filter choices.
10.8 Viewer and Aligner

10.8.1 purpose
This is a program for showing your text or other files, highlighting words of interest. You will see them in plain text format, with tag mark-up shown or hidden as in your tag settings. There are a number of settings and options you can change.
Its main use is to produce an aligned version of 2 or more texts, with alternate sentences or paragraphs from each of them.
See also: Viewer & Aligner settings, an example of aligning, Viewer & Aligner options

10.8.2 index
Explanations
What is the Viewer & Aligner and what's it for?
an example of aligning
Settings
Viewing Options
What to do if it doesn't do what I want...
Searching for Short Sentences
Joining/Splitting
Aligning a Dual Text
Finding translation mis-matches
The technical side...
See also: WordSmith Main Index

10.8.3 aligning with Viewer & Aligner
This feature aligns the sentences in two files. Translators need to study differences between an original and a translation. Other linguists might want it to study differences between two versions of a text in the same language. Students of different languages can use it as they might use dual-language readings, to study closely the differences e.g. in word order. It helps you produce a new text which consists of the two files, with sentences interspersed. That way you can compare the translation with the original.
Example
Original: Der Knabe sagte diesen Gedanken dem Schwesterchen, und diese folgte. Allein auch der Weg auf den Hals hinab war nicht zu finden. So klar die Sonne schien, ... (from Stifter's Bergkristall, translated by Harry Steinhauer, in German Stories, Bantam Books 1961)
Translation: The boy communicated this thought to his sister and she followed him. But the road down the neck could not be found either. Though the sun shone clearly, ...
Aligned text:
<G1> Der Knabe sagte diesen Gedanken dem Schwesterchen, und diese folgte.
<E1> The boy communicated this thought to his sister and she followed him.
<G2> Allein auch der Weg auf den Hals hinab war nicht zu finden.
<E2> But the road down the neck could not be found either.
<G3> So klar die Sonne schien, ...
<E3> Though the sun shone clearly, ...
An aligned text like this helps you identify additions and omissions, normalisations, style changes and word order preferences. In this case the translator has chosen to avoid very close equivalence.
See also: an example of aligning, Aligning and moving

10.8.4 example of aligning
How to do it -- a Portuguese and English example
1. Read in your Portuguese text (e.g. Hora da Estrela.TXT), checking that its sentences and paragraphs break the way you like. Try "Unusual Lines" to help identify oddities.
2. Save it, and it will (by default) get your filename with .VWR, e.g. Hora da Estrela.VWR. (It is important to do that, as a .VWR file knows the language, colour settings etc. and the cleaning-up work you've done, whereas the .TXT file is just the original text file you read in.)
3. Do the same steps 1 and 2 for your English text -- you will now have e.g. Hour of the Star.VWR.
4. You could if desired repeat with the Spanish -- Hora de la Estrella.txt giving Hora de la Estrella.VWR (or German, Russian, Arabic, etc.).
5. Now open your Portuguese Hora da Estrela.VWR.
6. Choose File | Merge and select the text(s) to merge in.
7. Finally File | Save, choosing Aligned files (.ALI) as the format.

10.8.5 aligning and moving
You may well want to alter sentence ordering. The translator may have used three sentences where the original had only one. You can also merge paragraphs.
adjusting by dragging with the mouse
To merge sentences or paragraphs, simply grab one and drag it up to the next one above in the same language. Or use the Join button. Or press F4. To split a sentence or paragraph, choose the Split button or press Ctrl+F4.
Finally you will want to save (Ctrl+F2) the results.
See also: Viewer & Aligner contents

10.8.6 editing
While Viewer & Aligner is not a full word-processor, some editing facilities have been built in to help deal with common formatting problems:
· Split: allows you to choose where a line should be divided in two.
· Join down, Join up: these buttons merge a line with another one. You can achieve this also by simply dragging.
· Cut line: removes any blank lines.
· Trim: this goes through each sentence of the text, removing any redundant spaces -- where there are two or more consecutive spaces they will be reduced to one.
· Cut & Trim All: does these actions on the whole text.
· Edit: opens up a window allowing you to edit the whole of the current sentence or paragraph.
· Heading: allows you to treat a line as a heading, and if so makes it look bold.
· Find unusual lines: this identifies cases where a sentence or paragraph does not start with a capital letter or number -- you will probably want to join it to the one above -- or where a line is unusually short, etc.
· Find short lines
You will then want to save (Ctrl+F2) your text. You can also:
· open a new file for viewing (you can open any number of text files within Viewer & Aligner)
· copy a text file to the clipboard (select, then press Control+Ins)
· print the whole or part of the currently active text file
· search for words or phrases (press F12)

10.8.7 languages
Each Viewer file (.VWR) has its own language. Each Aligner file (.ALI) has one language for each of the component sections. (They could all be the same: if for example you were analysing various different editions of a Shakespeare play they'd all be English.) The set of languages available is that defined using the Languages Chooser. You can change the language to one of your previously defined languages using the drop-down list. Here is an example where a Portuguese-language plain TXT text file was opened and the default language was English.
10.8.8 numbering sentences & paragraphs You can use the Viewer & Aligner to make a copy of your text with all the sentences and/or . paragraphs tagged with <S> and <P> 102 To do this, simply read in the text file in, choose Edit | Insert Tags , then save it as a text file . 380 See also: Viewer & Aligner contents December, 2015. Page 386</p> <p><span class="badge badge-info text-white mr-2">402</span> 387 WordSmith Tools Manual 10.8.9 options Mode: Sentence/Paragraph This switches between Sentence mode and Paragraph mode. In other words you can choose to view your text files with each row of the display taking up a sentence or a paragraph. Likewise, you can make an dual aligned text by interspersing either paragraphs or sentences. The 389 other functions (e.g. ) work in the same way in either mode. joining, splitting Colours The various texts in your aligned text will have different colours associated with them. Colours can button. be changed using the 10.8.10 reading in a plain text In Viewer and Aligner, choose and select your plain text file. File | Open, and you may see this sort of thing in Sentence view , December, 2015. Page 387</p> <p><span class="badge badge-info text-white mr-2">403</span> 388 Utility Programs , or in Paragraph view 389 Edit it, as necessary, e.g. splitting or merging paragraphs or sentences. There's a taskbar with buttons to help above the text. Ensure the language is right: December, 2015. Page 388</p> <p><span class="badge badge-info text-white mr-2">404</span> 389 WordSmith Tools Manual And save it as a .VWR file: . 381 See also: example of aligning joining and splitting 10.8.11 Joining The easiest way to join two sentences is simply to drag the one you want to move onto its ) neighbour above. Or select the lower of the two and press F4 or use the button ( In this example, sentence 60 in Portuguese got represented as two sentences, 60 and 61, in English. Splitting in two To split a sentence, press . You will get a list of the words. Click on the word which should end the sentence, then press OK. example December, 2015. Page 389</p> <p><span class="badge badge-info text-white mr-2">405</span> 390 Utility Programs This will insert the words which follow ( I need others etc.) into a new line below. 380 Viewer & Aligner contents See also: settings 10.8.12 1. What constitutes a "short" sentence or paragraph (default: less than 25 characters) 2. Whether you want to do a lower-case check when Finding Unusual Lines The settings are standard ones found in most of the Tools: 60 Colours 78 Font 80 Printing 124 Text Characteristics 113 Review all Settings technical aspects 10.8.13 When is a sentence not a sentence? There is no perfect mechanical way of determining sentence-breaks. For example, a heading may well have no final full stop but would normally not be considered part of the sentence which follows it. And a sentence may often have no final full stop, if what follows it is a list of items. The algorithm used by Viewer & Aligner is: a sentence ends when it meets the requirements December, 2015. Page 390</p> <p><span class="badge badge-info text-white mr-2">406</span> 391 WordSmith Tools Manual 426 . The same routine is used as in WordList. explained in the definition of a sentence : Consider this chunk from A Tale of Two Cities "Wo-ho!" said the coachman. "So, then! One more pull and you're at the top and be damned to you, for I have had trouble enough to get you to it! - Joe!" Viewer & Aligner will mistakenly consider Joe! 
as a separate sentence, but handles "Wo-ho!" said the coachman. as one (though the program would split it in two if the word after ho! had a capital letter, e.g. in "Wo-ho!" Wild Bill, the coachman, said.).
Viewer & Aligner cannot therefore be expected to handle all sentence boundaries exactly as you would. (I saw Mr. Smith. would be considered two sentences; several headings may be bundled together as one sentence.) For this reason you can choose Find Short Sentences to seek out any odd one-word sentences.
See also: Viewer & Aligner contents

10.8.14 translation mis-matches
Viewer & Aligner can help find cases where alignment has slipped (one sentence having been translated as two or three). One method is to use the menu item Match by Capitals. This searches for matching proper nouns in the two versions: if say Paris is mentioned in sentence 25 of the source text and not in sentence 25 of the translation but in sentence 27, it is very likely that some slippage has occurred. Viewer & Aligner will search forwards from the current text sentence on, and will tell you where there's a mis-match. You should then search back from that point to find where the sentences start to diverge. It may be useful to sample every 10 or every 20 to speed up the search for slippage. When you find the problem, un-join or join and/or edit the text as appropriate, then save it.
See also: Viewer & Aligner The technical side..., Finding unusual sentences, Viewer & Aligner contents

10.8.15 troubleshooting
Can't see the whole sentence or paragraph
Press the auto-size button to "auto-size" the lines in your display. This adjusts line heights according to the currently highlighted column of data.
Can't see the whole text file
Press the refresh button to "refresh" the display.
Don't like the colours
Change colours using the Colours button. The colours initially used for each language version in the dual-language window are the same colours as used for primary sorting and secondary sorting in Concord.
See also: Viewer & Aligner contents

10.8.16 unusual lines
It can be useful to seek unusually short sentences to see whether your originals have been handled as you want. Because Viewer & Aligner uses full stops, question marks and exclamation marks as sentence-boundary indicators, you will find a string like "Hello! Paul! Come here!" is broken into 3 very short sentences. Depending on your purposes you may wish to consider these as one sentence, e.g. if a translator has translated them as one ("Oi, Paulo, venha cá!").
This function can also find lower-case lines: where a sentence or paragraph does not start with a capital letter or number -- you will probably want to join it to the one above. This problem is common if the text has been saved as "text only with line breaks" (where an <Enter> comes at the end of each line whether or not it is the end of a paragraph).
Seeking
Use the Find Unusual toolbar menu item and then press Start Search. Viewer & Aligner will go to the next possibly problematic sentence or paragraph and you will probably want to join it by pressing Join Up (to the one above), Join Down, or Skip.
"Case check" switches on or off the search for lower-case sentence starts. The number (25 in the example above) is for you to determine the number of characters counting as a short sentence or paragraph.
See also: Settings, Finding translation mis-matches, The technical side..., Viewer & Aligner contents
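The two checks just described, rough sentence-splitting at . ! ? and flagging "unusual" lines, are easy to approximate in code. The Python sketch below is an illustration under those simplifying assumptions, not the program's actual algorithm, and its names are invented for this example.

```python
# Split text at . ! ? and flag very short sentences or lower-case starts.
import re

def rough_sentences(text):
    # keep the end punctuation with each sentence
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def unusual(sentences, short_limit=25, case_check=True):
    flagged = []
    for number, sentence in enumerate(sentences, 1):
        if len(sentence) < short_limit:
            flagged.append((number, "short", sentence))
        elif case_check and not (sentence[0].isupper() or sentence[0].isdigit()):
            flagged.append((number, "lower-case start", sentence))
    return flagged

# unusual(rough_sentences('"Hello! Paul! Come here!" she said.'))
# flags the very short "sentences" produced by the exclamation marks
```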
10.9 WSConcGram

10.9.1 aims
A program for finding concgrams: essentially pairs, triplets, quadruplets (etc.) of words which are related.
See also: definition of a concgram, running WSConcGram, settings, viewing the output, filtering

10.9.2 definition of a concgram
For years it has been easy to search for or identify consecutive clusters (n-grams) such as AT THE END OF, MERRY CHRISTMAS or TERM TIME. It has also been possible to find non-consecutive linkages such as STRONG ... TEA by adapting searches to find context words within the horizons of the search word. The concgram procedure takes a whole corpus of text and finds all sorts of combinations like the ones above, whether consecutive or not. Cheng, Greaves & Warren (2006:414) define a concgram like this:
For our purposes, a ‘concgram’ is all of the permutations of constituency variation and positional variation generated by the association of two or more words. This means that the associated words comprising a particular concgram may be the source of a number of ‘collocational patterns’ (Sinclair 2004:xxvii). In fact, the hunt for what we term ‘concgrams’ has a fairly long history dating back to the 1980s (Sinclair 2005, personal communication) when the Cobuild team at the University of Birmingham led by Professor John Sinclair attempted, with limited success, to devise the means to automatically search for non-contiguous sequences of associated words.
Essentially what they were seeking in developing the ConcGram© program was "a search-engine, which on top of the capability to handle constituency variation (i.e. AB, ACB), also handles positional variation (i.e. AB, BA), conducts fully automated searches, and searches for word associations of any size." (2006:413)
WSConcGram is developed in homage to this idea.
See also: bibliography, settings, running WSConcGram, viewing the output, filtering

10.9.3 settings
The settings are found in the main Controller.

10.9.4 generating concgrams
Getting Started
To start, as usual, choose File | New. In the window, first choose an existing Index, as here where an index based on the works of Dickens has been selected.
Build steps
To generate the concgrams, the program will then need to build some further files based on the existing index files. There are two steps simply because there's a lot of work if the original index is large. You can stop after the first stage and resume the next day if you wish. With a modern PC and a source text corpus of only a few million words, though, it should be possible to generate the files in a matter of a few minutes. As you see above, some large additional files have been generated at the end of the two Build steps marked on the buttons in the top window. All items which are found together at least as often as set in the Index settings (here 5 times) will be saved as potential members of each concgram.
Now, choose Show to view the results. (Or, as usual, right-click the main WSConcGram window and choose last file.)
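As a rough illustration of the underlying idea (this is not WSConcGram's actual algorithm, and the tokeniser, function names and file name are invented for the example), a two-word concgram candidate can be thought of as a pair of words which co-occur within a small span, in either order and with any words in between, often enough to reach the minimum joint frequency (5 in the settings shown above):

```python
import re
from collections import Counter

def tokens(text):
    """Very simple tokeniser: lower-cased alphabetic words."""
    return re.findall(r"[a-z']+", text.lower())

def concgram_pairs(words, span=4, min_joint=5):
    """Count word pairs co-occurring within `span` words of each other,
    in either order (AB, BA) and with any words in between (AB, ACB)."""
    joint = Counter()
    for i, word in enumerate(words):
        for j in range(i + 1, min(i + span + 1, len(words))):
            joint[tuple(sorted((word, words[j])))] += 1
    return Counter({pair: n for pair, n in joint.items() if n >= min_joint})

# e.g. pairs = concgram_pairs(tokens(open("dickens.txt").read()))
```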
Page 396</p> <p><span class="badge badge-info text-white mr-2">412</span> 397 WordSmith Tools Manual 10.9.5 viewing concgrams When you first open a concgram file created by WSConcGram, it will look something like this one 395 It'll appear (by default) in frequency order as set in the settings but you can sort it by pressing the Word and Freq headers, and can search for items using the little box above the list. December, 2015. Page 397</p> <p><span class="badge badge-info text-white mr-2">413</span> 398 Utility Programs PIP (the hero of Great To get a detailed set of concgrams, double-click an item such as ), or drag it to the list-box above. Then press the concgram button beside that. Expectations You then get a tree view like this where similar items are grouped. Each branch of the tree shows how many sub-items and how many items of its own it has. The other controls are used for suspending lengthy processing ( ) changing from a tree-view to a 402 ), for filtering ( ), clearing filters ( ), and showing more or less of list, for concordancing ( the tree ( ). So if you prefer a plain list, click as Tree to view like this: December, 2015. Page 398</p> <p><span class="badge badge-info text-white mr-2">414</span> 399 WordSmith Tools Manual You may if you like select several items like this: December, 2015. Page 399</p> <p><span class="badge badge-info text-white mr-2">415</span> 400 Utility Programs but do note that the concgrams will have to contain all of the words selected. 402 After filtering appropriately and pressing the Concordance button December, 2015. Page 400</p> <p><span class="badge badge-info text-white mr-2">416</span> 401 WordSmith Tools Manual If you right-click and choose Show Details you'll get to see the details of any section of the tree you have selected: December, 2015. Page 401</p> <p><span class="badge badge-info text-white mr-2">417</span> 402 Utility Programs where you see the various forms and the filename(s) they came from. 10.9.6 filtering concgrams In order to select which items are "associated", we need some sort of suitable statistical procedures. The members of each concgram are at present merely associated by co-occurring at 395 least a certain number of times as explained in generating them The Filtering settings in the Controller allow you to specify, for example, that you want to see only those which are associated with a MI (mutual information) score of 2.0 or a Log Likelihood score of 3.0. December, 2015. Page 402</p> <p><span class="badge badge-info text-white mr-2">418</span> 403 WordSmith Tools Manual Ensure the statistics you need are checked and set to suitable thresholds, and decide whether all the thresholds have to be met (in the case above both MI and Log Likelihood would have to score 3.0 at least) or any of them (in the case above MI at 3.0 or above or Log Likelihood at 3.0 or above). You can also optionally insist on certain words being in your filtered results. When you press the filter button ( ), you will see something like this: December, 2015. Page 403</p> <p><span class="badge badge-info text-white mr-2">419</span> 404 Utility Programs where the items which meet the filter requirements are separated out and selected ready for concordancing; any others are hidden. To the right you see that the head-word CAESAR here relates to AND HE, HER, I , ANTONY etc. above the thresholds set. 
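The arithmetic behind such a filter is simple enough to sketch. The Mutual Information formula used below is the one given later in the formulae section (11.18); everything else (the function names, the shape of the pairs and freq arguments, the 3.0 threshold) is only for illustration and is not WordSmith's own code:

```python
import math

def mutual_information(joint, f1, f2, total_tokens):
    """MI as in the formulae section (11.18): log2 of A / (B * C), where
    A, B and C are the joint and single frequencies over total tokens."""
    a = joint / total_tokens
    b = f1 / total_tokens
    c = f2 / total_tokens
    return math.log2(a / (b * c))

def filter_pairs(pairs, freq, total_tokens, mi_threshold=3.0):
    """Keep only the word pairs reaching the MI threshold."""
    kept = {}
    for (w1, w2), joint in pairs.items():
        score = mutual_information(joint, freq[w1], freq[w2], total_tokens)
        if score >= mi_threshold:
            kept[(w1, w2)] = (joint, round(score, 2))
    return kept
```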
10.9.7 exporting concgrams With concgram data loaded, you may wish to export it to a plain text file which can be imported into 307 Excel or imported into a WordSmith word-list . Choose Compute | WordList and you will be offered choices like these. The suggested filename is based on your concgram data. December, 2015. Page 404</p> <p><span class="badge badge-info text-white mr-2">420</span> 405 WordSmith Tools Manual 10.10 Character Profiler 10.10.1 purpose The point of it... Character Profiler , a tool to help find out which characters are most frequent in a text or a set of texts. The purpose could be to check out which characters are most frequent (e.g. in normal English text the letter E followed by T will be most frequent as shown below), or it could be to check whether your text collection contains any oddities, such as accented characters or curly apostrophes you weren't expecting. The first 32 codes used in computer storage of text are "control characters" such as tabs, line-feeds and carriage-returns. A plain .txt version of a text should only contain letters, numbers, punctuation and tabs, line-feeds and carriage-returns -- if there are other symbols you do not file which is really an old WordPerfect or Word .doc in disguise. .txt recognise you may have a It would enable you to discover the most used characters across languages, as in this screenshot: For further details see http://lexically.net/downloads/corpus_linguistics/1984_characters.xls . 10.10.2 profiling characters How to do it 1. Choose one or more texts or a folder. You can type in a complete filename (including drive and folder), and can use wildcards such as *.txt , or you can browse to find your text or folder. 2. If you want to study one text only, just choose one text, but you may choose a whole folderful or more by using the "sub-folders too" option. . 3. Press Analyse December, 2015. Page 405</p> <p><span class="badge badge-info text-white mr-2">421</span> 406 Utility Programs Source Text The display shows details of your selected text, and if you click the tab you can see the original text. (If you have analysed a whole set of text files the Source Text tab will show only that last one.) Legend code the Unicode code of character the character distinguishing punctuation, digits, letters type % percentage of the total number of characters in the text(s) freq. number of occurrences of that character <Tab> etc. control characters indicated in red. 1st Position number of each letter-character occurring in word-initial position 2nd number found in second position in any word etc. Note that 8th will only be able to count letter frequencies for words at least 8 letters long, while 1st or 2nd will handle nearly all words. December, 2015. Page 406</p> <p><span class="badge badge-info text-white mr-2">422</span> 407 WordSmith Tools Manual Sort Click the header to sort the data: The letter E (upper and lower case merged) here represents nearly ten percent of all letters, closely followed by T . If sorted by 1st position in the word, however, in frequency. Presumably the ranking the letter E comes after T,A,I,S,O,W,C,B,P,H,M and F of T reflects the frequency in English of the and A of a . December, 2015. Page 407</p> <p><span class="badge badge-info text-white mr-2">423</span> 408 Utility Programs Copy Copies the data to the clipboard, ready to be pasted for example into Excel. 408 See also: settings 10.10.3 profiling settings The top two boxes allow you to choose a font for your display. 
Most fonts can only represent some of the Unicode characters, so you may need to experiment to determine which is best for your language. (Character Profiler translates any text into Unicode whether or not it is in Unicode originally, and tells you which form it is in on the Results tab.) Header to cut If you've typed something in here such as </Header> , the program treats all the text before that as a header to be excluded from analysis.. Copy letter characters only Check this one to force the copying to the clipboard to copy only data of letters, ignoring punctuation and digits. Merge lower and UPPER case Check this one to convert all text to upper case. December, 2015. Page 408</p> <p><span class="badge badge-info text-white mr-2">424</span> 409 WordSmith Tools Manual 10.11 Chargrams 10.11.1 purpose The point of it... 426 Chargrams , (sequences of N characters) are a tool to help find out which chargrams most frequent in a text or a set of texts. The purpose could be to check out which chargrams are most frequent e.g. in word-initial position, in the middle of a word, or at the end. These are 3-letter chargrams occurring in word-final position. is a well-known ending in ING English; HAT is is a frequent 3-letter sequence at the end of words too. How does it work? Chargrams are computed by taking only the valid characters of text. If a text contained "In 1845 there was a princess", the 3-character chargrams considered would be THE, HER, ERE, WAS, PRI, RIN, INC, NCE, CES, ESS . The positions are computed in relation to the original words, so is word-initial while ESS is word-final, and RIN is medial. THE If including punctuation, the sequences would include IN_, N_1, _18, 184 etc. too. 10.11.2 chargram procedure How to do it Choose File | New in the Chargrams menu. Choose your texts as with the other Tools. December, 2015. Page 409</p> <p><span class="badge badge-info text-white mr-2">425</span> 410 Utility Programs Then press the button to make a chargram list. 413 See also : settings . display 10.11.3 The display is similar to that in the WordList tool. Contexts This column shows the word-contexts for each chargram. You can double-click to see the whole list. December, 2015. Page 410</p> <p><span class="badge badge-info text-white mr-2">426</span> 411 WordSmith Tools Manual Sorting You can sort by clicking a header. This offers you two of the columns to sort on, a primary sort and then where values on the primary sort are the same a secondary sort. In this example the user is choosing the number of texts in descending order and then word-position in ascending order. This gave the following list (extract): December, 2015. Page 411</p> <p><span class="badge badge-info text-white mr-2">427</span> 412 Utility Programs Word-initial chargrams 1-15 and mid-word chargrams 16 onwards all occurred in all 100% of the texts selected. Concordancing As in many other Tools, you can concordance selected items by choosing Compute | Concordance in the menu. In the case above I wondered about the context word cou December, 2015. Page 412</p> <p><span class="badge badge-info text-white mr-2">428</span> 413 WordSmith Tools Manual ... which clearly shows speech reformulation. 413 See also : settings . 10.11.4 settings Settings are found in the main Controller. You can set minimum and maximum token frequencies for the chargrams to be included in the results, a minimum and maximum number of texts they must appear in, the length in characters (e.g. 3 to 4 characters). 
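The chargram procedure itself is easy to imitate. The sketch below (illustrative Python, not WordSmith's code; the function name and the handling of words exactly n letters long are my own choices) keeps only alphabetic characters, optionally digits, and records each n-character sequence together with its position in the word; run on the example sentence above it yields exactly THE, HER, ERE, WAS, PRI, RIN, INC, NCE, CES and ESS.

```python
import re
from collections import Counter

def chargrams(text, n=3, include_digits=False):
    """Count n-character sequences inside words, noting word position
    (initial / medial / final) as described above."""
    pattern = r"[A-Z0-9]+" if include_digits else r"[A-Z]+"
    counts = Counter()
    for word in re.findall(pattern, text.upper()):
        for i in range(len(word) - n + 1):
            if i == 0:
                position = "initial"   # a word of exactly n letters counts
            elif i == len(word) - n:   # as word-initial in this sketch
                position = "final"
            else:
                position = "medial"
            counts[(word[i:i + n], position)] += 1
    return counts

print(chargrams("In 1845 there was a princess"))
```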
Context words As the chargrams are selected, note is taken of the word which they are found in. Here you can determine a minimum number of times for each chargram to appear in a given word for that word to be listed, and a maximum number of context words per chargram to be collected. (Storing lots of extra words will use up system memory so a default of 20 or 50 may be reasonable. Word position By default chargrams in all three positions (word-initial, word-medial and word-final) will be collected. If you check the ignore word position box, word positions get merged. Include punctuation December, 2015. Page 413</p> <p><span class="badge badge-info text-white mr-2">429</span> 414 Utility Programs Allows chargrams of all characters (symbols, punctuation etc.) to be included. Spaces get replaced by underscores. Include digits Allows chargrams of digits as well as alphabetic characters. Ignore low-frequency context chargrams This setting allows us to filter out any chargrams which do not occur in many contexts. As shown here, any chargrams not occurring in at least 4 context word types will get eliminated. If, for example, a chargram has been found in 5 context word-types then the chargram is included in your list. But if in only 1 of these it is found occurring at least 4 times (i.e. found in the same context word-type recurring in the texts at least 4 times), you will see one context word only in the contexts column 410 See also: chargrams display December, 2015. Page 414</p> <p><span class="badge badge-info text-white mr-2">430</span> WordSmith Tools Manual Reference Section XI</p> <p><span class="badge badge-info text-white mr-2">431</span> 416 Reference Reference 11 11.1 acknowledgements WordSmith Tools has developed over a period of years. Originally each tool came about because I wanted a tool for a particular job in my work as an Applied Linguist. Early versions were written for DOS, then Windows Ô came onto the scene. One tool, Concord , had a slightly different history. It developed out of MicroConcord which Tim Johns and I wrote for DOS and which Oxford University Press published in 1993. Ô Pascal with the time-critical sections in The first published version was written in Borland Assembler. Subsequently the programs were converted to Delphi Ô 16-bit; this is a 32-bit only version written in Delphi XE and still using time-critical sections in Assembler. I am grateful to · lots of users who have made suggestions and given bug reports, for their feedback on aspects of the suite (including bugs!), and suggestions as to features it should have. · generations of students and colleagues at the School of English, University of Liverpool, the MA Programme in Applied Linguistics at the Catholic University of São Paulo, colleagues and students at Aston University. · Audrey Spina, Élodie Guthmann and Julia Hotter for their help with the French & German versions of WS 4.0; Spela Vintar's student for Slovenian; Zhu Yi and others at SFLEP in Shanghai for Mandarin for WS 5.0. · Robert Jedrzejczyk (http://prog.olsztyn.pl/paslibvlc) for his PasLibVCLPlayer which enables WordSmith to play video. Researchers from many other countries have also acted as alpha-testers and beta-testers and I thank them for their patience and feedback. I am also grateful to Nell Scott and other members of my family who have always given valuable support, feedback and suggestions. Mike Scott 425 my contact address WordSmith ideas for developing Feel free to email me at with any further . 
Tools API 11.2 It is possible to run the WordSmith routines from your own programs; for this there's an API If you know a programming language, you can call a .dll published. which comes with WordSmith and ask it to create a concordance, a word-list or a key words list, which you can then process to suit your own purposes. 31 Easier, however, is to write a very simple batch script which will run WordSmith unattended. 67 See also : custom processing December, 2015. Page 416</p> <p><span class="badge badge-info text-white mr-2">432</span> 417 WordSmith Tools Manual bibliography 11.3 Aston, Guy, 1995, "Corpora in Language Pedagogy: matching theory and practice", in G. Cook & B. Seidlhofer (eds.) Principle & Practice in Applied Linguistics: Studies in honour of H.G. Widdowson , Oxford: Oxford University Press, 257-70. The BNC Handbook Aston, Guy & Burnard, Lou, 1998, , Edinburgh: Edinburgh University Press. Biber, D., S. Johansson, G. Leech, S. Conrad and E. Finegan, 2000, Longman Grammar of Spok en and Written English , Harlow: Addison Wesley Longman. Clear, Jeremy, 1993, "From Firth Principles: computational tools for the study of collocation" in M. Baker, G. Francis & E. Tognini-Bonelli (eds.), 1993, Text and Technology: in honour of John Sinclair , Philadelphia: John Benjamins, 271-92. Cheng, Winnie, Chris Greaves & Martin Warren, 2006, From n-gram to skipgram to concgram. , Vol .11, No. 4, pp. 411-433. International Journal of Corpus Linguistics Dunning, Ted, 1993, "Accurate Methods for the Statistics of Surprise and Coincidence", Computational Linguistics , Vol 19, No. 1, pp. 61-74. Fillmore, Charles J, & Atkins, B.T.S, 1994, "Starting where the Dictionaries Stop: The Challenge of Corpus Lexicography", in B.T.S. Atkins & A. Zampolli, Computational Approaches to the Lexicon , Oxford:Clarendon Press, pp. 349-96. Katz, Slava, 1996, Distribution of Common Words and Phrases in Text and Language Modelling, Natural Language Engineering 2 (1), 15-59 Murison-Bowie, Simon, 1993, MicroConcord Manual: an introduction to the practices and principles of concordancing in language teaching , Oxford: Oxford University Press. Nakamura, Junsaku, 1993, "Statistical Methods and Large Corpora: a new tool for describing text types" in M. Baker, G. Francis & E. Tognini-Bonelli (eds.), 1993, Text and Technology: in , Philadelphia: John Benjamins, 293-312. honour of John Sinclair Statistics for Corpus Linguistics , Edinburgh: Edinburgh University Press. Oakes, Michael P. 1998, Scott, Mike, 1997, "PC Analysis of Key Words - and Key Key Words", System , Vol. 25, No. 2, pp. 233-45. Textual Patterns: k eyword and corpus analysis in language Scott, Mike & Chris Tribble, 2006, education , Amsterdam: Benjamins. Sinclair, John M, 1991, Corpus, Concordance, Collocation , Oxford: Oxford University Press. Stubbs, Michael, 1986, "Lexical Density: A Technique and Some Findings", in M. Coulthard (ed.) Talking About Text: Studies presented to David Brazil on his retirement, Discourse Analysis Monograph no. 13 , Birmingham: English Language Research, Univ. of Birmingham, 27-42. Stubbs, Michael, 1995, "Corpus Evidence for Norms of Lexical Collocation", in G. Cook & B. Seidlhofer (eds.) Principle & Practice in Applied Linguistics: Studies in honour of H.G. Widdowson , Oxford: Oxford University Press, 245-56. Tuldava, J. 1995, Methods in Quantitative Linguistics , Trier: WVT Wissenschaftlicher Verlag Trier. Youlmans, Gilbert, 1991, "A New Tool for Discourse Analysis: the vocabulary-management profile", Language , V. 67, No. 4, pp. 
763-89. UCREL's log likelihood information 11.4 bugs All computer programs contain bugs. You may have seen a "General Protection Fault" message when using big expensive drawing or word-processing packages. If you see something like this, December, 2015. Page 417</p> <p><span class="badge badge-info text-white mr-2">433</span> 418 Reference then you have an incompatibility between sections of WordSmith. You have probably downloaded a fresh version of some parts of WordSmith but not all, and the various sub-programs are in conflict... The solution is a fresh download. http://lexically.net/wordsmith/version6/faqs/ updating_or_reinstalling.htm explains. Otherwise you should get a report popping up, giving "General" information about your PC and "Details" about the fault. This information will help me to fix the problem and will be saved in a small text file called wordsmith.elf, concord.elf, wordlist.elf , etc. When you quit the program, you will be offered a chance to email this to me. The first thing you'll see when one of these happens is something like this: You may have to quit when you have pressed OK, or WordSmith may be able to cope despite the problem. Usually the offending program will be able to cope despite the bug or you can go straight back into 4 it without even needing to quit the main WordSmith Tools Controller , retrieve your saved results 101 from disk, and resume. If that doesn't work, try quitting WordSmith Tools overall, or quit Windows and then start it up again. When you press OK, your email program should have a message with a couple of attachments to send to me. The email message will only get sent when you press Send in your email program. It is only sent to me and I will not pass it on to anyone else. Read it first if you are worried about revealing your innermost secrets ... it will tell me the operating system, the amount of RAM and hard disk space, the version of WordSmith, and some technical details of routines December, 2015. Page 418</p> <p><span class="badge badge-info text-white mr-2">434</span> 419 WordSmith Tools Manual which it was going through when the crash occurred. 461 error messages These warn you about problems which occur as the program works, e.g. if there's no room left on your disk, or you type in an impossible file name or a number containing a comma. 31 455 , troubleshooting . See also: logging 11.5 change language If you have results computed with the wrong language setting, that can affect things, e.g. a key word 315 listing depends on finding the words in the right order . To redefine the language of your data, , and in the resulting window choose Edit | Change Language Change once you have chosen a suitable alternative. If you choose a different one from the list press 124 of Alternatives, your Language and Text settings in the main Controller will change too. In this Change will change the language to Polish. screenshot, pressing 11.6 Character Sets 11.6.1 overview 444 You need "plain text" in WordSmith. Not Microsoft Word .doc files -- which contain text and a whole lot of other things too that you cannot normally see. If you are processing English only, your texts can be in ASCII, ANSI or Unicode; WordSmith handles both formats. If in other languages, read on... To handle a text in a computer, programs need to know how the text is encoded. In its processing, the software sees only a long string of numbers, and these have to match up with what you and I can recognise as "characters". 
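A small illustration may make this clearer. The Python sketch below is not part of WordSmith, and the assumption that the source file is in the Windows cp1252 code-page is only an example. It shows how the same word becomes quite different byte sequences under three common encodings, and how a file can be re-saved in the UTF-16 little-endian form which WordSmith is happiest with (see flavours of Unicode, later in this section):

```python
# The same four characters stored as raw bytes under three encodings:
word = "café"
for name in ("cp1252", "utf-8", "utf-16-le"):
    print(f"{name:10}", word.encode(name).hex(" "))

def resave_as_utf16le(source, destination, source_encoding="cp1252"):
    """Re-save a text file as UTF-16 little-endian with a byte-order mark."""
    with open(source, encoding=source_encoding, errors="replace") as infile:
        text = infile.read()
    with open(destination, "wb") as outfile:
        outfile.write(b"\xff\xfe")              # BOM marking UTF-16 LE
        outfile.write(text.encode("utf-16-le"))
```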
For many languages like English with a restricted alphabet, encoding can be managed with only 1 "byte" per character. On the other hand a language like Chinese, which draws upon a very large array of characters, cannot easily be fitted to a 1-byte system. Hence the creation of other "multi-byte" systems. Obviously if a text in English is encoded in a multi-byte way, it will make a bigger file than one encoded with 1 byte per character, and this is slightly wasteful of disk and memory space. So, at the time of writing, 1-byte character sets are still in very widespread use. UTF-8 is a name for a multi-byte method, widely used for Chinese, etc. December, 2015. Page 419</p> <p><span class="badge badge-info text-white mr-2">435</span> 420 Reference In practice, your texts are likely to be encoded in a Windows 1-byte system, older texts in a DOS 1- byte system, and newer ones, especially in Chinese, Japanese, Greek, in Unicode. What matters most to you is what each character looks like, but WordSmith cannot possibly sort words correctly, or even recognise where a word begins and ends, if the encoding is not correct. WordSmith has to know (or try to find out) which system your texts are encoded in. It can perform certain tests in the background. But as it doesn't actually understand the words it sees, it is much safer for you to convert to Unicode, especially if you process texts in German, Spanish, Russian, Greek, Polish, Japanese, Farsi, Arabic etc. Three main kinds of character set, each with its own flavours, are Windows, DOS, and Unicode. Tip 44 To check results after changing the code-page, select Choose Texts and View the file in question. If you can't get it to look right, you've probably not got a cleaned-up plain text file but one 444 straight from a word-processor. In that case, take it back into the word-processor (see here for how to do that in MS Word) and save it as text again as a plain text file in Unicode. 419 420 42 See also: Text Formats , Choosing Accents & Symbols , Accented characters ; Choosing 81 Language accents & symbols 11.6.2 163 you may need to insert symbols and accented characters into When entering your search-word your search-word, exclusion word or context word, etc. If you have the right keyboard set for your if not, just choose the symbol in the main Controller — version of Windows this may be very easy 4 by clicking. December, 2015. Page 420</p> <p><span class="badge badge-info text-white mr-2">436</span> 421 WordSmith Tools Manual Below, you will see which character has been selected with the current font (which affects which characters can be seen). You can choose a number of characters and then paste them into Concord, by right-clicking and choosing from the popup-menu: These options above show Greek, Hebrew, Thai and Bengali characters have been clicked. The last one ("Paste") is the regular Windows paste. 81 419 See also: Choosing Language , Change Language December, 2015. Page 421</p> <p><span class="badge badge-info text-white mr-2">437</span> 422 Reference 11.7 clipboard You can block an area of data, by using the cursor arrows and Shift, or the mouse, then press Ctrl +Ins or Ctrl+C to copy it to the clipboard. If you then go to a word processor, you can paste or 102 ("paste special") the blocked area into your text. This is usually easier than saving as a text file 97 to a file) and can (or printing also handle any graphic marks. Example 1. Select some data. 
Here I have selected 3 lines of a concordance, just the visible text, no Set or Filenames information. 2. Hold down Control and press Ins or C. In the case of a concordance, since concordance lines are quite complex, you will be asked picture whether you want a of the selected screen lines, which looks like this in MS Word: with the colours and font resembling those in WordSmith, and/or plain text, and if so how many characters: December, 2015. Page 422</p> <p><span class="badge badge-info text-white mr-2">438</span> 423 WordSmith Tools Manual Once you've pressed OK, the data goes to the Windows "clipboard" ready for pasting into any other application, such as Excel, Word, Notepad, etc. For all other types of lists, such as word-lists, the data are automatically placed in the Clipboard in both formats, as a picture and as text. You can choose either one and they will look quite different from each other! Choose "Paste Special" in Word or any other application to choose between these formats. and then, for the picture format December, 2015. Page 423</p> <p><span class="badge badge-info text-white mr-2">439</span> 424 Reference You will probably use this picture format for your dissertation and will have to in the case of plotted data. In this concordance, you get only the words visible in your concordance line (not the whole line). What you're pasting is a graphic which includes screen colours and graphic data. If you subsequently click on the graphic you will be able to alter the overall size of the graphic and edit each component word or graphic line (but not at all easily!). Note that if you select more lines than will subsequently fit on your page, MS Word may either shrink the image or only paste one pageful. as plain text Alternatively, you might want to paste as plain Unformatted Unicode text because you want to edit the concordance lines, eg. for classroom use, or because you want to put it into a spreadsheet 102 ™. Here the concordance or other data are copied as plain text, with a tab such as MS Excel between each column. The Windows plain text editor Notepad can only handle this data format. Microsoft Word will paste (using Shift+Ins or Ctrl+V) the data as text. It pastes in as many characters as you have chosen above, the default being 60. At first, the concordance lines are copied, but they don't line up very nicely. Use a non proportional font, such as Courier or Lucinda Console, and keep the number of characters per line down to some number like 60 or so -- then it'll look like this: At 10 point text in Lucida Console, the width of the text with 60 characters and the numbers at the left comes to about 14 cm., as you can see To avoid word-wrapping, set the page format in Word to landscape, or keep the number of characters per line down to say 50 or 60 and the font size to December, 2015. Page 424</p> <p><span class="badge badge-info text-white mr-2">440</span> 425 WordSmith Tools Manual 10. avoid the heading and numbers in WordList or KeyWords too? 36 . See advanced clipboard settings contact addresses 11.8 Downloads You can get a more recent version at our website . There are also some free extra downloads (programs, word lists, etc.) there too. And links to sources of free text corpora. Screenshots for screenshots of what visit http://lexically.net/wordsmith/support/get_started_guides.html WordSmith Tools can do. This may give you useful ideas for your own research and will give you a better idea of the limitations of WordSmith too! Purchase for details of suppliers. 
Visit http://lexically.net/wordsmith/purchasing.htm Complaints & Suggestions Best of all, join Google Groups WordSmith Tools group and post your idea there so others can see the discussion. Or email me (mike (at) lexically.net). Please give me as full a description of the problem you need to tackle as you can, and details of the equipment too. Please don't include any attachments over 200K in size. I do try to help but cannot promise to... 11.9 date format Date Format Japanese date format year_month_day_hour_minute. At least it is logical, going from larger to smaller. Why aren't URLs organised in a logical order too? 11.10 Definitions 11.10.1 definitions valid characters Valid characters include all the characters in the language you are working with which are defined (by Microsoft) as "letters", plus any user-defined acceptable characters to be included within a word 435 (such as the apostrophe or hyphen ). That is, in English, A, a,... Z, z will be valid ; or @ or _ won't. In Greek, δ will count as a valid character. In Thai, ฏ (to patak) will characters but be a valid character. words 427 at each end . sequence of valid characters with a word separator The word is defined as a A word can be of any length, but for one to be stored in a word list, you may set the length you prefer (maximum of 50 characters) -- any which exceed your limit will get + tagged onto them at that point. You can decide whether or not to include words including numbers (e.g. \$35.50 ) in text December, 2015. Page 425</p> <p><span class="badge badge-info text-white mr-2">441</span> 426 Reference 124 . characteristics token and type tok en to different words. So in This is my is used to refer to running words and type The term gets is book, it is interesting we have 7 tokens but only 6 different types because repeated. clusters A cluster is a group of words which follow each other in a text . The term phrase is not used here because it has technical senses in linguistics which would imply a grammatical relation between the 278 175 words in it. In WordList cluster processing there can be no or Concord cluster processing certainty of this, though clusters often do match phrases or idioms. See also: general cluster 448 . information sentences the full-stop, question-mark or exclamation-mark (.?!) and (equivalents The sentence is defined as in languages such as Arabic, Chinese, etc.) immediately followed by one or more word separators 427 and then a number or a currency symbol, or a letter in the current language which isn't lower- . Note: languages which do not distinguish between lower-case and upper-case characters do case not technically count any as lower case or upper case. (For more discussion see Starts and Ends 390 146 of Text Segments or Viewer & Aligner technical information .) paragraphs 146 Paragraphs are user-defined. See Starts and Ends of Text Segments for further details. headings 146 Headings are also user-defined -- see Starts and Ends of Text Segments . texts A text in WordSmith means what most non-linguists would call a text. In a newspaper, for example, there might be 6 or 7 "texts" on each page. This also means that a text = a file on disk. If it doesn't you're better off totally ignoring the "Texts" column in any WordSmith output. chargrams A chargram is a sequence of N consecutive valid characters (excluding digits and punctuation) found ABI,ABL,ABO in text. e.g. etc. In English the most frequent 3-chargrams are THE, ING, AND, . 
ION 240 243 236 124 , Associate , Key key-word See also: Setting Text Characteristics , Keyness December, 2015. Page 426</p> <p><span class="badge badge-info text-white mr-2">442</span> 427 WordSmith Tools Manual 11.10.2 word separators Conventionally one assumes that one word is distinguished from the next by the presence of spaces at either end. But WordSmith Tools also includes within word separators certain standard codes used by most word processors: page eject code (12), tabs (9), carriage return (13) and line feed 435 may optionally be considered to split words like self- (10), end-of-text (26). Besides, hyphens access into two words. Note that in Chinese and Japanese which do not separate words in this way, any WordSmith functions which require word-separation will not work unless you get your texts previously tagged with word-separators. 11.11 demonstration version WordSmith Tools offers all the facilities of the complete suite, The demonstration version of except that any screen which shows a list (of words in a word-list, or concordance lines, etc.) is limited to a small number of lines which can be shown or printed. (If you save data, all of it will be saved; it's just that you can't see it all in the demo version.) 21 425 451 See also: Installing , Contact Addresses . , Version Information 11.12 drag and drop You can get WordSmith to compute some results simply by dragging. If you have WordList open you can simply drag a text file onto it from Windows Explorer and it will create a word-list there and then using default settings. Or if it is not open, drag your text file to the Hamlet WordList6.exe file. Here, is being dragged onto the WordList tool. If you have KeyWords open you can simply drag a text file onto it from Windows Explorer. If you have a valid word list set as the reference corpus, it will compute the key words. Or if it is not open, drag your text file to the file, as in this screenshot where the KeyWords6.exe is being dragged onto the KeyWords file. Dickens novel Dombey and Son.txt .CNC ), a key word list If you drag a word-list made by WordList ( .LST ending), a concordance ( 4 , it will open it with the appropriate tool. ) etc. onto the Controller .KWS ( December, 2015. Page 427</p> <p><span class="badge badge-info text-white mr-2">443</span> 428 Reference 11.13 edit v. type-in mode Most windows allow you to press keys either · to edit your data (edit mode), or · to get quickly to a place in a list (type-in mode). 168 Concordance windows use key presses also for setting categories for the data, or for blanking 168 out the search word. 109 In type-in mode, your key-presses are supposed to help you get quickly to the list item you're to get to (or near to) theocracy in a word list. If you've typed in interested in, e.g by typing theocr 5 letters and a match is found, the search stops. Changing mode is done in the menu: Settings | Typing Mode: 168 See also: user-defined categories . 11.14 file extensions The standard file-extensions used in WordSmith are .cnc concordance file .lst word list .mut mutual information list .dcl detailed consistency list December, 2015. 
Page 428</p> <p><span class="badge badge-info text-white mr-2">444</span> 429 WordSmith Tools Manual .tokens, .types word list index file .kws key words file .kdb key word database file .base_pairs, .bas WSConcgram files e_index_cg .ali aligner list .vwr viewer list In the Controller's Main settings, or on installing, you can if you wish associate (or disassociate) the current file-types with WordSmith in the Registry. The advantage of association is that Windows will know what Tool to open your data files with. December, 2015. Page 429</p> <p><span class="badge badge-info text-white mr-2">445</span> 430 Reference finding source texts 11.15 For some calculations the original source texts need to be available. For example, for Concord to 165 show you more context than has been saved for each line, it'll need to re-read the source text. 251 , it needs to look at the source text to find out which For KeyWords to calculate a dispersion plot 247 . KWs came near each other and compute positions of each KW in the text and KW links If you have moved or deleted the source file(s) in the meantime, this won't be possible. 269 44 110 116 See also : Source texts , Editing filenames , Choosing source files , find files . 11.16 flavours of Unicode What is Unicode? What WordSmith requires for many languages (Russian, Japanese, Greek, Vietnamese, Arabic etc.) is Unicode. (Technically UTF16 Unicode, little-endian.) It uses 2 bytes for each character. One byte is not enough space to record complex characters, though it will work OK for the English alphabet and some simple punctuation and number characters. UTF8, a format which was devised for many languages some years ago when disk space was not suitable . limited and character encoding was problematic, is in widespread use but is generally That's because it uses a variable number of bytes to represent the different characters. A to Z will be only 1 byte but for example Japanese characters may well need 2, 3 or even more bytes to represent one character. December, 2015. Page 430</p> <p><span class="badge badge-info text-white mr-2">446</span> 431 WordSmith Tools Manual There are a number of different "flavours" of Unicode as defined by the Unicode Consortium . MS Word offers · Unicode · Unicode (Big-Endian) (generated by some Mac or Unix software) · Unicode (UTF-7) · Unicode (UTF-8) The last two are 1-byte versions, not really Unicode in my opinion. WordSmith wants the first of 365 these but should automatically convert from any of the others. If you are converting text , prefer Unicode (little-endian), UTF16. Technical Note There are other flavours too and there is much more complexity to this topic than can be explained here, but essentially what we are trying to achieve is a system where a character can be stored in the PC in a fixed amount of space and displayed correctly. Precomposed In a few cases in certain languages, some of your texts may have been prepared with a character A followed by ^ where the intention is for the software to display followed by an accent, such as them merged ( Â ), instead of using precomposed characters where the two are merged in the text 36 if you need to handle that situation. file. See the explanation in Advanced Settings folders\directories 11.17 Found in main Settings menu in all Tools. Default folders can be altered in WordSmith Tools or set 113 as defaults in wordsmith6.ini . December, 2015. 
· Concordance Folder: for your concordance files.
· KeyWords Folder: for your key-word list files.
· WordList Folder: where you will usually save your word-list files.
· Aligner: for your dual-text aligned work.
· Texts Folder: where your text files are to be found.
· Downloaded Media: where your sound & video files will be stored after downloading the first time from the Internet.
· Settings: where your settings files (.ini files and some others) are kept.
If you write the name of a folder which doesn't exist, WordSmith Tools will create it for you if possible. (On a network, this will depend on whether you have rights to create folders and save files.) If you change your Settings folder, you should let WordSmith copy any .ini and other settings files which have been created so that it can keep track of your language preferences, etc.
Note: in a network, drive letters such as G:, H:, K: change according to which machine you're running from, so that what is G:\texts\my text.txt on one terminal may be H:\texts\my text.txt on another. Fortunately network drives also have names structured like this: \\computer_name\drive_name\, with the advantage that these names can be used by WordSmith so that the same text files can be accessed again later.
If you run WordSmith from an external hard drive or a flash drive, where again the drive letter may change, you will find WordSmith arranges that if your folders are on that same drive they will change drive letter automatically once you have saved your defaults.
Tip
Use different folders for the different functions in WordSmith Tools. In particular, you may end up making a lot of word lists and key word lists if you're interested in making databases of key words. It is theoretically possible to put any number of files into a folder, but accessing them seems to slow down after there are more than about 500 in a folder. Use the batch facility to produce very large numbers of word list or key words files. I would recommend using a folder \keywords to store .kdb files, and \keywords\genre1, \keywords\genre2, etc. for the .kws files for each genre.
See also: finding source texts

11.18 formulae
For computing collocation strength, we can use
· the joint frequency of two words: how often they co-occur, which assumes we have an idea of how far away counts as "neighbours". (If you live in London, does a person in Liverpool count as a neighbour? From the perspective of Tokyo, maybe they do. If not, is a person in Oxford? Heathrow?)
· the frequency of word 1 altogether in the corpus
· the frequency of word 2 altogether in the corpus
· the span or horizons we consider for being neighbours
· the total number of running words in our corpus: total tokens

Mutual Information
Log to base 2 of (A divided by (B times C))
where
A = joint frequency divided by total tokens
B = frequency of word 1 divided by total tokens
C = frequency of word 2 divided by total tokens

MI3
Log to base 2 of ((J cubed) times E divided by B)
where
J = joint frequency
F1 = frequency of word 1
F2 = frequency of word 2
E = J + (total tokens - F1) + (total tokens - F2) + (total tokens - F1 - F2)
B = (J + (total tokens - F1)) times (J + (total tokens - F2))

T Score
(J - ((F1 times F2) divided by total tokens)) divided by (square root of J)
where
J = joint frequency
F1 = frequency of word 1
F2 = frequency of word 2

Z Score
(J - E) divided by the square root of (E times (1 - P))
where
J = joint frequency
S = collocational span
F1 = frequency of word 1
F2 = frequency of word 2
P = F2 divided by (total tokens - F1)
E = P times F1 times S

Dice Coefficient
(J times 2) divided by (F1 + F2)
where
J = joint frequency
F1 = frequency of word 1 or corpus 1 word count
F2 = frequency of word 2 or corpus 2 word count
Ranges between 0 and 1.

Log Likelihood
Based on Oakes (1998), pp. 170-2.
2 times (a Ln a + b Ln b + c Ln c + d Ln d - (a+b) Ln (a+b) - (a+c) Ln (a+c) - (b+d) Ln (b+d) - (c+d) Ln (c+d) + (a+b+c+d) Ln (a+b+c+d))
where
a = joint frequency
b = frequency of word 1 - a
c = frequency of word 2 - a
d = frequency of pairs involving neither word 1 nor word 2
and "Ln" means Natural Logarithm.
See also: Mutual Information, and this link from Lancaster University

11.19 history list
History List: many of the combo-boxes in WordSmith, like this one for choosing a search-word, remember what you type in so you can look the entries up by pressing the down arrow at the right.

11.20 HTML, SGML and XML
These are formats for text exchange. The most well known is HTML, Hypertext Markup Language, used for distributing texts via the Internet. SGML is Standard Generalized Markup Language, used by publishers and the BNC; XML is Extensible Markup Language, intermediate between the other two. All these standards use plain text with additional extra tags, mostly angle-bracketed, such as <h1> and </h1>. The point of inserting these tags is to add extra sorts of information to the text:
1 a header (<head>) supplying details of the authorship & edition
2 how it should display (e.g. <bold>, <italics>)
3 what the important sections are (<h1> marks a heading, <body> is the body of the text)
4 how special symbols should display (&eacute; corresponds to é)
See also: Overview of Tags

11.21 hyphens
The character used to separate words. The item "self-help" can be considered as 2 words or 1 word, depending on Language Settings.

11.22 international versions
WordSmith can operate with a series of interfaces depending on the language chosen. If you choose French this is what you see in all of WordSmith.
See also: acknowledgements

11.23 limitations
The programs in WordSmith Tools can handle virtually unlimited amounts of text. They can read text from CD-ROMs, so giving access to corpora containing many millions of words. In practice, the limits are reached by a) storage and b) patience.
You can have as many copies of each Tool running at any one time as you like. Each one allows you to work on one set of data.
Tags to ignore or ones containing an asterisk can span up to 1,000 characters. When searching for tags to determine whether your text files meet certain requirements, only the first 2 megabytes of text are examined. For Ascii that's 2 million characters, for Unicode 1 million.
Tip
Press F9 to see the "About" box -- it shows the version date and how much memory you have available. If you have too little memory left, try a) closing down some applications, b) closing WordSmith Tools and re-entering.
If you have too little memory left, try a) closing down some applications, b) closing WordSmithTools and re-entering. 437 See also: Specific Limitations of each Tool tool-specific limitations 11.23.1 Concord limitations You can compute a virtually unlimited number of lines of concordance using Concord. 159 , though you can specify an Concord allows 80 characters for your search-word or phrase 161 . unlimited number of concordance search-words in a search-word file 180 of 25 Each concordance can store an unlimited number of collocates with a maximum horizon words to left and right of your search-word. WordList limitations 270 A head entry can hold thousands of lemmas , but you can only join up to 20 items in one go using F4. Repeat as needed. 263 Detailed Consistency lists can handle up to 50 files. KeyWords limitations One key-word plot per key-word display. (If you want more, call up the same file in a new display window.) 251 247 -windows per key-word plot display: 20. number of link 241 number of windows of associates per key key-word display: 20. File Utilities: Splitter limitations Each line of a large text file can be up to 10,000 characters in length. That is, there must be an <Enter> from time to time! December, 2015. Page 437</p> <p><span class="badge badge-info text-white mr-2">453</span> 438 Reference Text Converter limitations There can be up to 500 strings to search-and-replace for each. Each search-string and each replace-string can be up to 80 characters long. An asterisk must not be the first or last character of the search-string. When the asterisk is used to retain information, the limit is 1,000 characters. Viewer & Aligner limitations when choosing texts, Viewer & Aligner will call up the first 10 If you choose the View option source text files selected. When choosing texts or jumping into the middle of a text (e.g. after choosing in Concord), Viewer & Aligner will only process 10,000 characters of each file, to speed things up in the case of very large files, but you can get it to "re-read" the file by pressing to refresh the display, after which it will read the whole text. 437 See also: General Limitations 11.24 links between tools Linkage with Word Processors, Spreadsheets etc. 422 selected information to the clipboard All the windows showing lists or texts can easily copy . (Use Ctrl+Ins or Ctrl/C to insert). Where you see this symbol, you can send any selected data straight to a new Microsoft Word™ document. Where you see an URL (such as http://lexically.net ) you can click to access your browser. Links between the various Tools 4 WordSmith Tools are linked to each other via wordsmith.exe (the one which The programs in 4 " in its caption, and is found in the top-left corner of your says " WordSmith Tools Controller 113 , stop lists, etc. handles all the defaults screen). This , such as colours, folders, fonts WordList you'll go straight to a concordance, In general, if you press Ctrl+C in KeyWords or computed using the current word and using the current files. Each Tool will send as much relevant information as possible to the Tool being called. This will include: the current word (the one highlighted in the scrolling window) and the text files where any current information came from. : after computing a word list based on 3 business texts, you discover that the word Example is more frequent than you had expected. You want to do a concordance on that word, using hopeful hold down Control and press C. Now you can see hopeful, the same texts. 
Place the highlight on 175 plot. whether hopeful is part of a 3 -word cluster , or view a dispersion 237 texts, you discover that using 300 business key words database : after computing a Example December, 2015. Page 438</p> <p><span class="badge badge-info text-white mr-2">454</span> 439 WordSmith Tools Manual bid company , shares etc. Place the word seems to be a key key-word, and that it's associated with the highlight on bid, press Control-C and a concordance will be computed using the same 300 texts. Now you can check out the contexts: is a bid for power, or is it part of a tendering bid process? Example : you have a concordance of green . Now press Control-W to generate a word list of the same text files. Press Control-K to compare this word list with a reference corpus list to see what the key words are in these text files. 11.25 keyboard shortcuts scrolling windows: Control+Home to top of scrollable list Control+End to end of scrollable list 109 if it's ordered type-in your search-word alphabetically: and if it scrolls Home -- to left edge End -- to right edge horizontally: hotkeys: block a section Shift-cursor keys F1 help Ctrl+F2 101 save results print preview F3 Ctrl+P print results F4 join entries unjoin Ctrl+F4 Alt+F5 mark entries for joining Shift+Alt+F5 unmark entry F5 refresh a list auto set row height in Concord Shift+Ctrl+F8 F6 315 re-sort Ctrl+F6 315 reverse word sort Shift+Ctrl+F6 word-length sort F7 view source text grow line height F8 shrink Ctrl+F8 F9 About box (shows version-date and memory availability) compute collocates F10 F11 choose texts compute concordance Ctrl+Shift+C Ctrl+C copy December, 2015. Page 439</p> <p><span class="badge badge-info text-white mr-2">455</span> 440 Reference find again Ctrl+F3 find next deleted entry Alt+D Ctrl+L layout & columns of data Ctrl+M play media file Ctrl+N new Ctrl+U undo Ctrl+V paste Ctrl+W close Alt+X e X it the Tool Ctrl+Z 129 deleted lines Zap delete Del Numeric - delete to the end Ins restore deleted entry Numeric + restore to the end 441 see also: Menu items and Buttons 11.26 machine requirements This version of WordSmith Tools is designed for machines with: at least 1GB of RAM · at least 200MB of hard disk space · · Windows™ XP or later, or an emulator of one of these if using an Apple Mac or Unix system. 447 448 . on a faster You will find it runs better machine, especially if there's plenty of RAM 21 on a fast computer better than on a slow You can run WordSmith from a memory stick computer. (You can run WordSmith on a tiny 10" screen laptop with Windows Starter and little power but all applications on those are slow and there is not much screen for your results.) for details on There is no Apple Mac version but see http://lexically.net/wordsmith/mac_intel.htm how to use WordSmith on a Mac. manual for WordSmith Tools 11.27 21 . The file install This help file exists in the form of a manual, which you get when you , is in Adobe Acrobat™ format. It has a table of contents and a fairly detailed ( wordsmith.pdf) to help me create). Most people find paper easier to index (which I used WordList and KeyWords deal with than help files! 425 You may find it useful to see screenshots of . listed here in action: ideas are WordSmith December, 2015. 
Page 440</p> <p><span class="badge badge-info text-white mr-2">456</span> 441 WordSmith Tools Manual 11.28 menu and button options These functions may or may not be visible in each Tool depending on the capacity of the Tool or the current window of data -- the one whose caption bar is highlighted. advanced allows access to advanced features associates 241 . opens a new window showing Associates auto-join 270 ) automatically. joins (lemmatises auto-size re-sizes each line of a display so that each one shows as much data as it should. Most windows have lines of a fixed size but some, e.g. in Viewer, allow you to adjust row heights. This adjusts line heights according to the current highlighted column of data. close (Ctrl+W) closes a window of data clumps 243 computes clumps in a keywords database regroup clumps 244 the clumps regroups clusters 175 computes concordance clusters . collocates 179 using concordance data. shows collocates compute 63 calculates a new column of data based on calculator functions and/or existing data. redo collocates recalculates collocates, e.g. after you've deleted concordance lines. column totals 62 of numerical data. computes totals, min, max, mean, standard deviation for each column concordance (Shift+Ctrl+C) within KeyWords, WordList, starts Concord and concordances the highlighted word(s) using the original source text(s). copy (Ctrl+C) 102 66 allows you to copy your data to a variety of different places (the printer, a text file , the 422 clipboard , etc.). edit 72 109 ). allows editing of a list or searches for a word (type-in search exit (Alt+X) December, 2015. Page 441</p> <p><span class="badge badge-info text-white mr-2">457</span> 442 Reference quits a Tool. edit or type-in mode alternates between edit and type-in mode. filenames opens a new window showing the file names from which the current data derived. If necessary you 113 . can edit them find files finds any text files which contain all the words you've marked. grow increases the height of all rows to a fixed size. See shrink ( ) below. help (F1) opens WordSmith Help (this file) with context-sensitive help. join 270 ). joins one entry to another e.g. sentences in Viewer, words in WordList (lemmatisation layout 87 : the colour of each column, whether to hide This allows you to alter many settings for the layout a column of data, typefaces and column widths. links 247 between words in a key-words plot. computes links mark 270 75 or finding files . marks an entry for joining match lemmas 270 any that checks each item in the list against ones from a text file of lemmatised forms and joins match. match list 92 , marking matches up the entries in the current list against ones in a "match list file" or template any found with (~). relation 276 289 computes mutual information or similar scores in a WordList index list . new... (Ctrl+N) 2 gets you started in the various Tools, e.g. to make a concordance, a word list, or a key words list. open... (Ctrl+O) gives you a chance to choose a set of saved results. patterns 207 . computes collocation patterns play media (Ctrl+M) 212 plays a media file . plot 191 251 or KeyWords plot opens a new window showing a Concord dispersion plot . December, 2015. Page 442</p> <p><span class="badge badge-info text-white mr-2">458</span> 443 WordSmith Tools Manual print preview (F3) previews your window data for printing (Ctrl+P); can print to file, which is equivalent to "save as text 102 ". redo undoes an undo. 
refresh (F5)
re-draws the screen (in Viewer, re-reads your text file).
remove duplicates
removes any duplicate concordance lines.
replace
search & replace, e.g. to replace drive or folder data when editing file-names where the source texts have been moved.
re-sort
re-sorts lists (e.g. in frequency as opposed to alphabetical order) in Concord, KeyWords or WordList.
ruler
shows/hides vertical divisions in any list, and text divisions in a KeyWords plot. Click ruler in a menu to turn it on or off or to change the number of ruler divisions for a plot.
save (Ctrl+F2)
saves your data using the existing file-name; if it's a new file, asks for a file-name first.
save as
saves after asking you for a file-name.
save as text
saves as a .txt file: plain text.
search
searches within a list.
shrink
reduces the height of all rows to a smaller fixed height. See grow above.
statistics
shows detailed statistics.
statusbar
toggles on & off the "status bar" (at the bottom of a window; shows comments and the status of what has been done).
summary statistics
opens a new window showing summary statistics, e.g. proportion of lemmas to word-types.
toolbar
toggles on & off a toolbar with the same buttons on it as the ones you chose when you customised popup menus.
undo (Ctrl+U)
undoes the last operation.
unjoin
unjoins any entries that have been joined, e.g. lemmatised entries.
view source text
shows the source text and highlights any words currently selected in the list.
Microsoft Excel or Word™
save formatted data for Excel or Word.
wordlist
within KeyWords, makes a word list using the current data.
zap (Ctrl+Z)
zaps any deleted entries.

see also: Keyboard Shortcuts, Customising popup menus.

11.29 MS Word documents

Inside a .docx or .doc file there is a lot of extra coding apart from the plain text words. (Actually, a .docx doesn't even seem to show the ordinary text words inside it!) For example: the name of your printer, the owner of the software, information about styles, etc. For accurate results, WordSmith needs to use clean text where these have been removed.

converting your .DOC or .DOCX files
The easiest method, for multiple .doc or .docx files, is to convert them using the Text Converter. Alternatively you can do it in Word. To convert a .doc or .docx into plain text in Word, choose File | Save As | Plain text, then choose either Windows (1-byte per character) or Other encoding -- Unicode (2 bytes). (A minimal scripted alternative for .docx files is sketched below, after the next section.)

11.30 never used WordSmith before

For users who are starting out with WordSmith for the first time, the whole process can seem complex. (After all, the first time you used word-processing software that seemed tricky -- but you already knew what a text is and how to write one...) So a small text file accompanies the WordSmith installation, and if WordSmith thinks you have never used it before, it will automatically choose that text file for you to start using Concord, WordList etc. WordSmith's method of knowing that you are a new user is to ask:
1) have any concordances or wordlists been saved?
2) has no set of favourite text files been saved for easy retrieval?
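The Text Converter is the supported way of cleaning up Word files in bulk. Purely as an illustration of what such a conversion involves, the sketch below pulls the visible text out of a .docx (which is really a zip archive of XML) without opening Word. The file names are placeholders and the tag-stripping is deliberately crude, so treat it as a rough stand-in, not a replacement for the Text Converter.

```python
# Rough sketch: extract the running text from a .docx file.
# A .docx is a zip archive whose main text lives in word/document.xml.
import html
import re
import zipfile

def docx_to_plain_text(docx_path: str) -> str:
    with zipfile.ZipFile(docx_path) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    xml = xml.replace("</w:p>", "\n")        # paragraph ends become line breaks
    text = re.sub(r"<[^>]+>", "", xml)       # strip all remaining tags
    return html.unescape(text)               # turn &amp; etc. back into characters

if __name__ == "__main__":
    plain = docx_to_plain_text("sample.docx")           # placeholder file name
    with open("sample.txt", "w", encoding="utf-8") as out:
        out.write(plain)                                 # saved as Unicode (UTF-8)
```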
11.31 numbers

Depending on Language and Text Settings, you might wish to include or exclude numbers from word lists.

11.32 plot dispersion value

The point of it
A dispersion value is the degree to which a set of values are uniformly spread. Think of rainfall in the UK -- generally fairly uniformly spread throughout the year. Compare with countries which have a rainy season. In linguistic terms, one might wish to know how the occurrences of a word like skull are distributed in Hamlet, and WordSmith has shown this in plot form since version 1. The dispersion value statistic gives mathematical support to this and makes comparisons easier.

How it is calculated
The plot dispersion calculated in KeyWords and Concord dispersion plots uses the first of the 3 formulae supplied in Oakes (1998: 190-191), which he reports as having been evaluated as the most reliable. Like the ruler, it divides the plot into 8 segments for this. It ranges from 0 to 1, with 0.9 or 1 suggesting very uniform dispersion and 0 or 0.1 suggesting "burstiness" (Katz, 1996). (A rough, illustrative sketch of a segment-based measure of this kind follows below.)

See also: KeyWords plot, Concord dispersion plot.
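The manual does not reproduce Oakes' formula, so the sketch below uses a Juilland's-D-style measure over 8 segments purely to illustrate the idea of a 0-to-1 dispersion value; it is an assumption for demonstration, not WordSmith's implementation.

```python
# Illustrative only: a segment-based dispersion value in the spirit described
# above (0 ~ bursty, 1 ~ uniformly spread). Not the exact Oakes (1998) statistic.
from statistics import mean, pstdev

def dispersion_value(hit_positions, text_length, segments=8):
    counts = [0] * segments
    for pos in hit_positions:                       # pos = word offset of each hit
        seg = min(int(pos * segments / text_length), segments - 1)
        counts[seg] += 1
    m = mean(counts)
    if m == 0:
        return 0.0
    cv = pstdev(counts) / m                         # coefficient of variation
    return max(0.0, 1 - cv / (segments - 1) ** 0.5)

# A word bunched near the start of a 10,000-word text scores low;
# one spread across the whole text scores higher.
print(dispersion_value([5, 40, 90, 120, 150], 10_000))
print(dispersion_value([500, 2_000, 4_000, 6_000, 9_500], 10_000))
```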
11.33 RAM availability

The more RAM (chip memory) you have in your computer, the faster it will run and the more it can store. As it is working, each program needs to store results in memory. A word list of over 80,000 entries, representing over 4 million words of text, will take up roughly 3 Megabytes of memory. (In Finnish it would be much more.)

When memory is low, Windows will attempt to find room by putting some results in temporary storage on your hard disk. If this happens, you'll probably hear a lot of clicking as it puts data onto the disk and then reads it off again. You will probably hear some clicking anyway, as most of the programs in WordSmith Tools access your original texts from the hard disk, but a constant barrage of thrashing shows you've reached your machine's natural limits.

You can find out how much storage you have available, even in the middle of a process, by pressing F9 (the About option in the main Help menu of each program). The first line states the RAM availability. The other figures supplied concern Windows system resources: they should not be a problem, but if they do go below about 20% you should save results, exit Windows and re-enter.

Theoretically, word lists and key word lists can contain up to 2,147,483,647 separate entries. Each of these words can have appeared in your texts up to 2,147,483,647 times. (This strange number, 2,147,483,647, is 2 to the power 31 minus 1, the largest signed integer which can be stored in 32 bits; it is also loosely called 2 Gigabytes.) You are not likely to reach this theoretical limit: for an item to have occurred 2,147,483,647 times in your texts, you would have processed about 30 thousand million words (1 CD-ROM containing only plain text can hold about 100 million words, so this number represents some 300 CD-ROMs). You would have run out of RAM long before this.

If you have a Gigabyte of RAM or more you should be able to have a copy of a word-list based on millions of words of text, and at the same time have a powerful word-processor and a text file in memory.

See also: speed

11.34 reference corpus

Reference Corpus
A corpus of text which you use for comparative purposes. For example, you might want to compare a given piece of text with the British National Corpus, a collection of 100 million words. Useful when computing key words.
In the Controller you can set your reference corpus word list for KeyWords and Concord to make use of. (That is, a word list created using the WordList tool.)

11.35 restore last file

By default, the last word list, concordance or key words listing that you saved or retrieved will be automatically restored on entry to WordSmith Tools. If the last Tool used is Concord, a list of your 10 most recent search-words will be saved too.
This feature can be turned off temporarily via a menu option, or permanently in wordsmith6.ini (in your Documents\wsmith6 folder).

11.36 single words v. clusters

The point of it...
Clusters are words which are found repeatedly together in each other's company, in sequence. They represent a tighter relationship than collocates, more like multi-word units or groups or phrases. (I call them clusters because groups and phrases already have uses in grammar, and because simply being found together by software doesn't guarantee they are true multi-word units.) Biber calls clusters, if repeated in the right ways, "lexical bundles".

Language is phrasal and textual. It is not helpful to see it as a matter of selecting a word to fill a grammatical "slot" as implied by structural theories. Words keep company: the extreme example is idiom, where they're bound tightly to each other, but all words have a tendency to cluster together with some others. These clustering relations may involve colligation (e.g. the relationship between depend and on), collocation, and semantic prosody (the tendency for cause to come with negative effects such as accident, trouble, etc.).

WordSmith Tools gives you two opportunities for identifying word clusters, in WordList and in Concord. They use different methods: Concord only processes concordance lines, while WordList processes whole texts.

How they are computed...
Suppose your text begins like this:
Once upon a time, there was a beautiful princess. She snored. But the prince didn't.
If you've chosen 2-word clusters, the text will be split up as follows:
Once upon
upon a
a time (note that "time there" is not included, because of the comma)
there was
(etc.)
With a three-word cluster setting, it would send
Once upon a
upon a time
there was a
was a beautiful
a beautiful princess
But the prince
the prince didn't
(etc.)
That is, each n-word cluster will be stored if it reaches n words in length, up to a punctuation boundary marked by ; , . ! ? (It seems reasonable to suppose that a cluster does not cross clause boundaries, and these punctuation symbols help mark clause boundaries, but there is a Concord setting or a WordList setting for this, to give you choice.) A short sketch of this splitting procedure follows below.

See also: concgrams.
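As a minimal sketch of the splitting just described (an illustration of the idea only, not WordSmith's own code), assuming the default clause punctuation ; , . ! ? acts as a boundary:

```python
# Split the text at clause punctuation, then collect every n-word sequence
# found inside each stretch and count how often each one recurs.
import re
from collections import Counter

def clusters(text: str, n: int = 3) -> Counter:
    counts = Counter()
    for stretch in re.split(r"[;,.!?]", text):
        words = stretch.split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n]).lower()] += 1
    return counts

sample = "Once upon a time, there was a beautiful princess. She snored. But the prince didn't."
for cluster, freq in clusters(sample, 3).items():
    print(freq, cluster)
# "She snored" yields no 3-word cluster because that stretch is too short.
```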
11.37 speed

networks
If you're working on a network, WordSmith will be s-l-o-w if it has to read and write results across the network. It's much faster to do your work locally on a C:\ or D:\ drive and then copy any useful results over to network storage later if required.

generally
To make a word-list on 4.2 million words used to take about 20 minutes on a 1993 vintage 486-33 with 8Mb of RAM. The sorting procedure at the end of the processing took about 30 seconds. A 200MHz Pentium with 64MB of RAM handled over 1.7 million words per minute. On a 100MHz Pentium with 32Mb of RAM the whole process took about 3 and a half minutes, working at over a million words a minute.

When concordancing, tests on the same Pentium 100, using one 55MB text file of 9.3 million words and a quad-speed CD-ROM drive, showed:

search-word   source      speed
quickly       CD-ROM      6 million words per minute
quickly       hard disk   12 million wpm
the           CD-ROM      900,000 wpm
the           hard disk   1 million wpm
thez          CD-ROM      6 million wpm
thez          hard disk   16 million wpm

Tests using a set of text files ranging from 20K down to 4K, using quickly as the search-word, gave speeds of 2 million wpm, rising with the longer files to 4 million wpm. Making a word list on the same set of files gave an average speed of 800,000 wpm. On the 55MB text file the speed was around 1.35 million wpm.

These data suggest that the factors which slow concordancing down are, in order, word rarity (the was much slower than quickly or the non-existent thez), text file size (very small files of only 500 words or so (3K) will be processed about three times as slowly as big ones) and disk speed (the outdated quad-speed CD-ROM being roughly half the speed of the 12ms hard disk). When Concord finds a word it has to store the concordance line and collocates and show it (so that you can decide to suspend any further processing if you don't like the results or have enough already). This is a major factor slowing down the processing. Second, reading a file calls on the computer's file management system, which is quite slow in loading it, in comparison with Concord actually searching through it. Third, disk speeds are quite varied, floppy disks being much the worst for speed.

If processing seems excessively slow, close down as many programs as possible and run again. Or install more RAM. Get advice about setting Windows to run efficiently (virtual memory, disk caches, etc.). Use a large fast hard drive. You can run other software while the WordSmith Tools programs are computing, but it will take up a lot of the processor's time. Shoot-em-up games may run too jerkily, but printing a document at the same time should be fine.

11.38 status bar

The bar at the bottom of a window, which allows you to pull the whole window bigger or smaller, and which also shows a series of panels with information on the current data. The status bar can usually be revealed or hidden using a main menu option. You can right-click on a panel to bring up a popup menu offering a choice between Edit, Type and Set.

11.39 tools for pattern-spotting

Tools are needed in almost every human endeavour, from making pottery to predicting the weather. Computer tools are useful because they enable certain actions to be performed easily, and this facility means that it becomes possible to do more complex jobs. It becomes possible to gain insights, because when you can try an idea out quickly and easily, you can experiment, and from experimentation comes insight. Also, re-casting a set of data in a new form enables the human being to spot patterns. This is ironic. The computer is an awful device for recognising patterns. It is good at addition, sorting, etc. It has a memory but it does not know or understand anything, and for a computer to recognise printed characters, never mind reading hand-writing, is a major accomplishment.
Nevertheless, the computer is a good device for helping humans to spot patterns and trends. That is why it is important to see computer tools such as these in WordSmith Tools in their true light. A tool helps you to do your job, it doesn't do your job for you. Tool versus Product Some software is designed as a product. A game is self-contained, so is an electronic dictionary. A word-processor, spreadsheet or database, on the other hand, is a tool because it goes beyond its own borders: you use it to achieve something which the manufacturers could not possibly anticipate. WordSmith Tools, as the name states, are not products but tools. You can use them to investigate many kinds of pattern in virtually any texts written in a good range of different languages 81 . Insight through Transformation No, this is not a religious claim! The claim I am making is psychological. It is through changing the shape of data, reducing it and then re-casting it in a different format, that the human capacity for noticing patterns comes to the fore. The computer cannot "notice" at all (if you input 2 into a calculator and then keep asking it to double it, it will not notice what you're up to and begin to do it automatically!). Human beings are good at noticing, and particularly good at noticing visual patterns. By transforming a text into a list, or by plotting keywords in terms of where they crop up in their source texts, the human user will tend to see a pattern. Indeed we cannot help it. Sometimes we see patterns where none was intended (e.g. in a cloud). There can be no guarantee that the pattern is "really there": it's all in the mind of the beholder. WordSmith Tools are intended to help this process of pattern-spotting, which leads to insight. The tools in this kit are intended therefore to help you gain your own insights on your own data from your own texts. Types of Tool All tools take up positions on two scales: the scale of specialisation and the scale of permanence. general-purpose ----------------- specialised general-purpose The spade is a digging tool which makes cutting and lifting soil easier than it otherwise would be. But it can also be used for shovelling sand or clearing snow. A sewing machine can be used to make curtains or handkerchiefs. A word-processor is general-purpose. December, 2015. Page 450</p> <p><span class="badge badge-info text-white mr-2">466</span> 451 WordSmith Tools Manual specialised A thimble is dedicated to the purpose of protecting the fingers when sewing and is rarely used for anything else. An overlock device is dedicated to sewing button-holes and hems: it's better at that job than a sewing machine but its applications are specialised. A spell-checker within a word-processor is fairly specialised. temporary ----------------- permanent temporary The branch a gorilla uses to pull down fruit is a temporary tool. After use it reverts to being a spare piece of tree. A plank used as a tool for smoothing concrete is similar. It doesn't get labelled as a tool though it is used as one. This kind of makeshift tool is called "quebra-galho", literally branch-breaker, in Brazilian Portuguese. permanent A chisel is manufactured, catalogued and sold as a permanent tool. It has a formal label in our vocabulary. Once bought, it takes up storage room and needs to be kept in good condition. The WordSmith Tools in this kit originated from temporary tools and have become permanent. They are intended to be general-purpose tools: this is the Swiss Army knife for lexis. 
They won't cut your fingers, but you do need to know how to use them.

see also: Word Clouds, Dispersion Plots, Acknowledgements

11.40 version information

This help file is for the current version of WordSmith Tools. The version of WordSmith Tools is displayed in the About option (F9), which also shows your registered name and the available amount of memory. If you have a demonstration version, this will be stated immediately below your name. Check the date in this box, which will tell you how up-to-date your current version is. As suggestions are incorporated, improved versions are made available for downloading. Keep a copy of your registration code for updated versions. You can click on the WordSmith graphic in the About box to see your current code.

See also: 32-bit Version Differences, Demonstration Version, Contact Addresses.

11.40.1 Version 3 improvements

After the earlier 16-bit versions of the 1990s, WordSmith brought in lots of changes "under the hood":
· long file names
· better tag and entity handling, including Tag Concordancing
· a converter for previous data
· zip file handling
· easier exporting of data to Microsoft Word and Excel
· Unicode text handling, allowing more languages to be processed
· the possibility of altering the data as it comes in, e.g. for language-specific lemmatisation
· the old limitation of 16,000 lines of data went (the theoretical limit for a list of data is over 134 million lines).

See also: What's New in the current version, Contact Addresses.

11.41 zip files

Zip files are files which have been compressed in a standard way. WordSmith can now read and write .zip files.

The point of it...
Apart from the obvious advantage of your files being considerably smaller than the originals were, the other advantage is that less disk space gets wasted. Any text file, even a short one containing only the word "hello", will take up on your disk something like 4,000 bytes, or maybe up to 32,000, depending on your system. If you have 100 short files, you would be losing many thousands of bytes of space; if you "zip" 100 short files they may fit into just 1 such space. Zip files are used a lot in Internet transmissions because of these advantages. If you have a lot of word lists to store, it will be much more efficient to store them in one .zip file.

The "cost" of zipping is a) the very small amount of time this takes, and b) the fact that the resulting .zip file can only be read by software which understands the standard format. There are numerous zip programs on the market, including PKZip™ and WinZip™. If you zip up a word list, these programs can unzip it but won't be able to do anything with the finished list. WordSmith can first unzip it and then show it to you.

How to do it...
Where you see an option to create a zip file, this can be checked, and the results will be stored where you choose but in zipped form with the .zip ending. If you choose to open a zipped word list, concordance, text file, etc. and it contains more than one file within it, you will get a chance to decide which file(s) within it to open up. Otherwise the process will happen in the background and will not affect your normal WordSmith processing. (A short sketch of reading text files straight out of a .zip archive follows below.)
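To illustrate why zipped storage of many short plain-text files is convenient, here is a small sketch (nothing to do with WordSmith's own code; the archive name is a placeholder) that reads texts directly from a .zip without unpacking it to disk:

```python
# Read every .txt file inside an archive and report its length in running words.
import zipfile

with zipfile.ZipFile("texts.zip") as archive:          # placeholder archive name
    for name in archive.namelist():
        if name.endswith(".txt"):
            text = archive.read(name).decode("utf-8", errors="replace")
            print(name, len(text.split()), "running words")
```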
Section XII: Troubleshooting

12 Troubleshooting

12.1 list of FAQs

See also: logging.
These are the Frequently Asked Questions. There's a much longer list of explanations under Error Messages.

Can't process apostrophes (p. 455)
Is this Russian, Greek or English? strange symbols in display (p. 456)
It crashed (p. 456)
It doesn't even start! (p. 458)
It takes ages! (p. 458)
Keys don't respond (p. 457)
Line beyond demo limit (p. 456)
Mismatch between Concord and WordList results (p. 456)
No tags visible in concordance (p. 455)
Printing problem (p. 458)
Text is unreadable because of the colours (p. 457)
Too much or too little space between columns (p. 455)
Wordlist out of order (p. 459)
Won't slice pineapples (p. 458)

12.2 apostrophes not found

Concord can't find apostrophes
If your original text files were saved using Microsoft Word™, you may find Concord can't find apostrophes or quotation marks in them! This is because Word can be set to produce "smart" symbols: the ordinary apostrophe or inverted comma is replaced by a curly one, curling left or right depending on its position at the left or right of a word. These smart symbols are not the same as straight apostrophes or double quote symbols.
Solution: select the symbol in the character set in the Controller, then paste it when entering your search word, or else replace them in your text files using the Text Converter. (A small scripted sketch of this kind of replacement follows below.)
See also: settings
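The Text Converter is the supported way to do this replacement in bulk; the sketch below is only a rough stand-in showing the idea, with placeholder folder names, and it writes straightened copies rather than touching your originals.

```python
# Replace Word's curly "smart" quotes with straight ones in copies of the texts.
from pathlib import Path

SMART = {
    "\u2018": "'", "\u2019": "'",    # curly single quotes / apostrophe
    "\u201c": '"', "\u201d": '"',    # curly double quotes
}

out_dir = Path("texts_straightened")
out_dir.mkdir(exist_ok=True)
for path in Path("texts").glob("*.txt"):
    text = path.read_text(encoding="utf-8", errors="replace")
    for curly, straight in SMART.items():
        text = text.replace(curly, straight)
    (out_dir / path.name).write_text(text, encoding="utf-8")
```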
12.3 column spacing

column spacing is wrong
You can alter this by clicking on the layout button.

12.4 Concord tags problem

no tags visible in concordance
If you can't see any tags in Concord after asking for Nearest Tag, it is probably because the Tags to Ignore setting has the same format. For example, if Text to Ignore has <*>, any tags such as <title>, <quote>, etc. will be cut out of the concordance unless you specify them in a tag file.
Solution: specify the tag file and run the concordance again.

12.5 Concord/WordList mismatch

If WordList finds a certain number of occurrences of a (word list) cluster but Concord finds a different number, this is because the procedures are different. WordList proceeds word by word, ignoring punctuation (except for hyphens and apostrophes). When Concord searches for a (concordance) cluster it will, by default, take punctuation into account: you can change that in the settings if you wish.

12.6 crashed

it crashed!
Solution: quit WordSmith Tools and enter again. If that fails, quit Windows and try again.
Or try logging. The idea of logging is to find out what is causing a crash. It is designed for when WordSmith gets only part of the way through some process. As it proceeds, it keeps adding messages to the log about what it has found and done; when it crashes, it can't add any more messages. So if you examine the log you can see where it had got to. At that point, you may see the name of a text file that it had opened. Examine that text: you might be able to see something strange about it, e.g. it has got corrupted.

12.7 demo limit

demo limit reached
You may have just downloaded WordSmith but not yet supplied your registration details. To do this, go to the main WordSmith Tools window and choose Settings | Register in the menu. If you haven't got the registration code, contact Lexical Analysis Software ([email protected]).
The difference between the demonstration version and a full version is that with the latter you can see or print all the data, while with the former you'll only be able to see about 25 lines of output.

12.8 funny symbols

weird symbols when using WordSmith Tools
1. Check your text files. Look at them in Notepad. Do they contain lots of strange symbols? These may be hidden codes used by your usual word-processor. Solution: open them in your usual word-processor and Save As, in plain text (.txt) form, sometimes called "Text Only", with a new name; then choose Unicode.
2. Choose Texts, select the text file(s), right-click and View. Does the text contain strange symbols?
3. Use the Text Converter to clean up and convert your text files to Unicode.
4. If the text is in Russian, Greek, etc. you will need an appropriate font, obtainable from your Windows CD or via the Microsoft website.
5. If you have several lists open which use different fonts or character sets, and you change the Font or Text Characteristics, the lists will all be updated to show the current font and character set, unless you first minimize any window which would be affected.

weird symbols when reading WordSmith data in another application
WordSmith Tools can Save (or Save As), and can also save as text by "printing" to a file. "Save" and "Save As" store the file in a format for re-use by WordSmith; that format is not suitable for reading into a word processor. The idea is simply for you to store your work so that you can return to it another day. "Save as Text", on the other hand, means saving as plain text, by "printing" to a file. This function is useful if you don't want to print to paper from WordSmith but instead take the data into a spreadsheet or word processor such as Microsoft Word. It is usually quicker to copy the selected text into the clipboard.

12.9 illegible colours

text unreadable because of colours
Solution: in Settings, choose Colours. You can now set the colours which suit your computer monitor. Monochrome settings are available.

12.10 keys don't respond

If a key press does nothing, it is probably because the wrong window, or the wrong column in the window, has the focus. As you know, Windows is designed to let users open up a number of programs at once on the same screen, so each window will respond to different key-press combinations. You can see which window has the focus because its caption is coloured differently from all the others. The solution is to click within the appropriate window or column, then press the key you wanted.

12.11 pineapple-slicing

won't slice a pineapple
"Propose to any Englishman any principle, or any instrument, however admirable, and you will observe that the whole effort of the English mind is directed to find a difficulty, a defect, or an impossibility in it. If you speak to him of a machine for peeling a potato, he will pronounce it impossible: if you peel a potato with it before his eyes, he will declare it useless, because it will not slice a pineapple."
Charles Babbage, 1852.
(Babbage was the father of computing, a 19th Century inventor who designed a mechanical computer, a mass of brass levers and cog-wheels. But in order to make it, he needed much greater accuracy than existing technology provided, and had all sorts of problems, technical and financial. He solved most of the former but not the latter, and died before he was able to see his Difference Engine working. The proof that his design was correct was shown later, when working versions were made. The difficulties he encountered in getting support from his government weren't exclusively English.) 12.12 printer didn't print printing problem If your printing comes out with one or more columns printed OK but others blank, you may have pulled your columns too wide for the paper. WordSmith uses information about your printer's defaults to compute what will and will not fit on the current paper. If you can change the printer settings to landscape that will give more space. 12.13 too slow It takes ages If you're processing a lot of text and you have an ancient PC with little memory and a hard disk that Noah bought from a man in the market for a rainy day, it might take ages. You'll hear a lot of 447 clicks coming from the hard disk is lo w. Solution: get a faster computer, by when memory installing more memory which makes a big difference), by defragmenting your hard drive, by using a disk cache, or by adjusting virtual memory settings. If you're running WordSmith Tools on a network, check with the network administrator whether performance is significantly degraded because of network access. Solution 2: quit all programs you don't need. That can restore a lot of system memory. Solution 3: quit Windows and start again. That can restore a lot of system memory. Solution 4: save and read from the local hard disk (C: or D:), not the network. 12.14 won't start it doesn't even start Yikes! December, 2015. Page 458</p> <p><span class="badge badge-info text-white mr-2">474</span> 459 WordSmith Tools Manual 12.15 word list out of order word-list out of order Words are sorted according to Microsoft routines which depend on the language. If you process Spanish but leave the Language settings to "English", you will get results which are not in correct Spanish order, (e.g. ). LL will come just before LM 81 Solution: choose your language and re-compute the word-list . December, 2015. Page 459</p> <p><span class="badge badge-info text-white mr-2">475</span> WordSmith Tools Manual Error Messages Section XIII</p> <p><span class="badge badge-info text-white mr-2">476</span> 461 WordSmith Tools Manual 13 Error Messages list of error messages 13.1 List of Error Messages 455 See also: . Troubleshooting 463 Can only save WORDS as ASCII 463 Can't call other Tool 463 Can't make folder as that's an existing filename 463 Can't merge list 463 Can't read file 464 Character set reset to <x> to suit <language> 464 Concordance file is faulty 464 Concordance stop list file not found 464 Conversion file not found 465 Destination folder not found 465 Disk problem: File not saved 465 Dispersions go with concordances 465 Drive not valid 465 Failed to access Internet 465 Failed to create new folder name 466 File access denied 466 File contains none of the tags specified 466 File not found 467 Filenames must differ! 
467 Full drive:\folder name needed 467 function not working properly yet 462 INI file not found 467 Invalid Concordance file 468 Invalid file name 468 Invalid Keywords Database file 468 Invalid Keywords file 468 Invalid Wordlist Comparison file 468 Invalid Wordlist file 468 Joining limit reached: join & try again 469 Key words file is faulty 469 Keywords Database file is faulty 469 Limit of 500 file-based search-words reached 469 Links between Tools disrupted 469 Match list details not specified 469 Must be a number 470 Network registration running elsewhere or vice-versa 470 No access to text file: in use elsewhere? 470 No associates found 470 No clumps identified 470 No clusters found 470 No collocates found 471 No concordance entries found 471 No concordance stop list words 471 No deleted lines to Zap 471 No entries in Keywords Database 471 No Key Words found December, 2015. Page 461</p> <p><span class="badge badge-info text-white mr-2">477</span> 462 Error Messages 472 No key words to plot 472 No keyword stop list words 472 No lemma list words 472 No match list words 472 No room for computed variable 472 No statistics available 472 No stop list words 472 No such file(s) found 473 No tag list words 473 Not a valid number 473 No wordlists selected 474 Only X% of reference corpus words found 474 Original text file needed but not found 475 Registration string is not correct 474 Registration string must be 20 letters long 475 Short of Memory! 475 Source Folder file(s) not found 475 Stop list file not found 475 Stop list file not read 475 Tag file not found 475 Tag list file not read 476 This function is not yet ready! 476 This is a demo version 476 This program needs Windows 95 or greater 476 To stop getting this annoying message, Update from Demo in setup.exe 476 Too many ignores (50 limit) 476 Too many sentences (8000 limit) 476 Two files needed 476 Truncating at xx words -- tag list file has more! 476 Unable to merge Keywords Databases 476 Why did my search fail? 477 Word list file not found 477 Wordlist comparison file is faulty 477 Word-list file is faulty 477 WordSmith Tools has expired: get another 477 WordSmith Tools already running 477 WordSmith version mis-match 477 xx days left .ini file not found 13.2 .ini file not found WordSmith looks for the wordsmith6.ini file which holds your current defaults On starting up, 113 . If you've removed or renamed it, restore it. This file should be in a sub-folder of your Documents folder called \wsmith6. administrator rights 13.3 administrator rights If you see this error message it's because you need Administrator rights to register WordSmith. Try searching for "Run as Administrator" or this link . December, 2015. Page 462</p> <p><span class="badge badge-info text-white mr-2">478</span> 463 WordSmith Tools Manual base list error 13.4 base list error WordSmith is trying to access an word or concordance line above or below the top or bottom of the data computed. This is a bug. can only save words as ASCII 13.5 Can only save WORDS as Plain Text WordSmith Tools can't save graphics as a text file. If you get this error message, you can only 422 clipboard and pasting it into your word-processor. save this type of data by copying to the can't call other tool 13.6 Can't call other Tool 211 Inter-Tool communication has got disrupted. Save your work, first. Then, if necessary, close down WordSmith Tools altogether, then start the main wordsmith6.exe program again. 
13.7 can't make folder as that's an existing filename Can't make folder as that's an existing filename file called C:\TEMP\FRED, you can't make a sub-folder of C:\TEMP called If you already have a FRED. Choose a new name. 13.8 can't compute key words as languages differ Can't compute key words as languages differ Key words can only be computed if both the text file and the reference corpus are in the same primary language. You can compute KWs using 2 different varieties of English or 2 different varieties of Spanish, but not between English and French. 13.9 can't merge list with itself! Can't merge list with itself You can only merge 1 word list or key word database with 1 other at a time. Select (by clicking while holding down the Control key) 2 file-names in the list of files. can't read file 13.10 Can't read file If this happens when starting up WordSmith Tools, there is probably a component file missing. One 4 example is sayings.txt, which holds sayings that appear in the main Controller window. If you've deleted it, I suggest you use notepad to start a new sayings.txt and put one blank line in it. If you get this message at another time, something has gone wrong with a disk reading operation. The file you're trying to read in may be corrupted. This happens easily if you often handle very large December, 2015. Page 463</p> <p><span class="badge badge-info text-white mr-2">479</span> 464 Error Messages files. See your Windows manual for help on fragmentation. character set reset to <x> to suit <language> 13.11 Character set reset to <x> to suit <language> 81 419 than Prior to version 2.00.07, WordSmith Tools handled fewer character sets and languages it does now. Accordingly, data saved in the format used before that version may not "know" what language it was based on. If you get this message when opening up an old WordSmith data file, it's because WordSmith doesn't know what language it derived from. Through gross linguistic imperialism, it will by default assume that the language is English! If the data are okay, just click the save button so that next time it will "know" which language it's 4 based on. If not, reset the language to the one you want in the Controller , Language Settings | Text, then re-save the list. 13.12 concordance file is faulty Concordance file is faulty has its own default filename extension WordSmith Tools Each type of file created by .CNC, .LST ) and its own internal structure. If you have another file with the same extension (e.g. produced by another program, this will not be compatible. It would not be sensible to rename a has detected that the file you're calling up wasn't .CNC file to .TXT, or vice-versa! WordSmith . Concord produced by the current version of concordance stop list file not found 13.13 Concordance stop list file not found , remember to include the full You typed in the name of a non-existent file. If typing in a file name drive and folder as well as the file name itself. confirmation messages: okay to re-read 13.14 Okay to re-read? A confirmation message. To proceed, Viewer & Aligner will now re-read the disk file. This will affect any alterations you've already made to the display. You may wish to save first and then try again later. Also, Viewer & Aligner will try to read the whole text file. If you have a very big file on a slow CD- ROM drive, this will take some time. 13.15 conversion file not found Conversion file not found You typed in the name of a non-existent file. 
If typing in a , remember to include the full file name drive and folder as well as the file name itself. December, 2015. Page 464</p> <p><span class="badge badge-info text-white mr-2">480</span> 465 WordSmith Tools Manual 13.16 destination folder not found Destination folder not found WordSmith couldn't find that folder; perhaps it's mis-spelt. 13.17 disk problem -- file not saved Disk problem: File not saved Something has gone wrong with a disk writing operation. Perhaps there's not enough room on the drive. If so, delete some files on that drive. 13.18 dispersions go with concordances Dispersions go with concordances 211 They can't be saved separately. drive not valid 13.19 Drive not valid WordSmith is unable to access this drive. This could happen if you attempt to access a disk drive which doesn't exist, e.g. drive P: where your drives include A:, C:, D: and E:. 13.20 failed to access Internet Failed to access Internet This function relies on a) your having an Internet browser on your computer, b) your system "associating" an Internet URL ending .htm with that browser. 13.21 failed to create new folder name Failed to create new folder or file-name A folder and a file cannot have the same name. If you already have a file called C:\TEMP\FRED , you can't make a FRED of C:\TEMP called sub-folder . Choose a new name. Or you don't have rights to create files in that folder. Or something went wrong while WordSmith was trying to write a file, for example the disk was full up. 13.22 failed to read file Failed to Read This may have happened a) because you included a text file which happens to be empty (zero size), or b) because your disk filing system has got screwed up, which is especially likely to occur if you often use large files in your word processor (in which do a disk cleanup) or c) because you tried to use the wrong kind of file for the job (for example the KeyWords procedure won't work if you choose text files as your word-lists). December, 2015. Page 465</p> <p><span class="badge badge-info text-white mr-2">481</span> 466 Error Messages 13.23 failed to save file Failed to Save Maybe because you had the same file open in another program or another instance of the Tool you're running. If so, close it and try again. Or because the folder you're saving to is a read-only folder on a network, or because the disk is full, or because your disk filing system has got screwed up. This last problem is quite common, actually, and is especially likely to occur if you often use large files in your word processor. In that case run Programs | Accessories | System Tools | Disk Defragmenter . 211 If you're working on a network, you will be able to save on certain drives and folders but not others; the solution is to try again on a memory stick or a hard disk drive which you do have the right to save to. 13.24 file access denied File Access Denied Maybe the file you want is already in use by another program. You'll find most word-processors label any text files open in them as "in use", and won't let other programs access them even just to read them. Close the text file down in your word processor. 13.25 file contains none of the tags specified File contains none of the tags specified You specified tags, but none of them were found. file has "holes" 13.26 File has "holes" Text files are supposed to contain only characters, punctuation, numbers, etc. without any unrecognised ones such as character(0). 
The problem could have arisen because it was transferred from one system to another, part of the disk is corrupted, or else maybe the file contains 473 . unrecognised graphics (or else it is not a plain text file but e.g. a Word document) You can solve this problem by converting the text using the Text Converter. If it is a plain text with holes these will be replaced by spaces. You can find texts with holes using the File Utilities. file not found 13.27 File not found 474 This message, like Original Text not found , can appear when WordSmith needs to access the original source text used when a list was created, but cannot find it. Have you deleted or moved it? If ) of this the file is still available, you may be able to edit the file names in the file name window ( list. Or the message may come after you've supplied the file name yourself. You may have mis-typed it. If typing in a file name, remember to include the full drive and folder as well as the file name itself. December, 2015. Page 466</p> <p><span class="badge badge-info text-white mr-2">482</span> 467 WordSmith Tools Manual filenames must differ! 13.28 Filenames must differ You can't compare a file with itself. folder is read-only 13.29 folder is read-only For some purposes, WordSmith needs to save files e.g. lists of results you have made so that you can get at recent files again. To do this it needs a place where your network or operating system lets you save. Usually \wsmith6 is fine, but in some institutional settings the drive or folder may be "read-only". If you see this message, choose Folder Settings and select there a folder where you can write as well as read. for use on X machine only 13.30 For use on pc named XXX only The software was registered for use on another PC. If you get this message, please re-install as appropriate. 13.31 form incomplete Form incomplete You tried to close a form where one or more of the blanks needed to be filled in before WordSmith could proceed. 13.32 full drive & folder name needed Full drive:\folder name needed , remember to include the full drive and folder as well as the file name file name When typing in a itself. 13.33 function not working properly yet function not working properly yet This is a function under development, still not fully implemented. 13.34 invalid concordance file Invalid Concordance file Each type of file created by WordSmith Tools has its own default filename extension (e.g. .CNC, .LST ) and its own internal structure. If you have another file with the same extension produced by another program, this will not be compatible. It would not be sensible to rename a .CNC file to .TXT, or vice-versa! WordSmith has detected that the file you're calling up wasn't . produced by the current version of Concord December, 2015. Page 467</p> <p><span class="badge badge-info text-white mr-2">483</span> 468 Error Messages invalid file name 13.35 Invalid file name may not contain spaces or certain symbols such as ? and * File names . 13.36 invalid KeyWords database file Invalid Keywords Database file Each type of file created by WordSmith Tools has its own default filename extension (e.g. .KWS, .KDB ) and its own internal structure. If you have another file with the same extension produced by another program, this will not be compatible. It would not be sensible to rename a WordSmith has detected that the file you're calling up wasn't .KDB file to .TXT, or vice-versa! produced for a database by the current version of KeyWords . 
13.37 invalid KeyWords calculation Invalid Keywords calculation For KeyWords to calculate the key-words in a text file by comparing it with a reference corpus, both must be in the same language, both must be sorted in the same way (alphabetical order, ascending) and they should both be in the same format (Unicode or single-byte). If you see this message you are trying to compute KWs without meeting these criteria. Solution: open each word-list and check to see it is OK and that it is sorted alphabetically in the same way (in the Alphabetical view, click the top bar to re-sort in ascending alphabetical order), then save it. Check they have both been made with the same language & format settings and if necessary re-compute one or both of them. invalid WordList comparison file 13.38 Invalid Wordlist Comparison file has its own default filename extension WordSmith Tools Each type of file created by (e.g. .LST, .CNC ) and its own internal structure. If you have another file with the same extension produced by another program, this will not be compatible. It would not be sensible to rename a WordSmith has detected that the file you're calling up wasn't .CNC file to .TXT, or vice-versa! WordList . produced as a comparison file by 13.39 invalid WordList file Invalid Wordlist file Each type of file created by WordSmith Tools has its own default filename extension (e.g. ) and its own internal structure. If you have another file with the same extension .LST, .CNC produced by another program, this will not be compatible. It would not be sensible to rename a .LST file to .TXT, or vice-versa! WordSmith has detected that the file you're calling up wasn't produced by the current version of WordList . 13.40 joining limit reached Joining limit reached: join & try again 270 Only a certain number of words can be lemmatised in one operation. If you reach the limit and get this message, 1. lemmatise by pressing F4, December, 2015. Page 468</p> <p><span class="badge badge-info text-white mr-2">484</span> 469 WordSmith Tools Manual 2. place the highlight on the head entry again 3. press F5 and carry on lemmatising by pressing F5 on each entry you wish to attach to the head entry 4. when you've done, press F4 to join them up. KeyWords database file is faulty 13.41 Keywords Database file is faulty has its own default filename extension Each type of file created by WordSmith Tools ) and its own internal structure. If you have another file with the same extension KDB, .KWS (e.g. . produced by another program, this will not be compatible. It would not be sensible to rename a WordSmith has detected that the file you're calling up wasn't .KDB file to .TXT, or vice-versa! KeyWords produced for a database of keywords, by the current version of . KeyWords file is faulty 13.42 Key words file is faulty Each type of file created by WordSmith Tools has its own default filename extension (e.g. ) and its own internal structure. If you have another file with the same extension .KWS, .KDB produced by another program, this will not be compatible. It would not be sensible to rename a .KWS file to .TXT, or vice-versa! WordSmith has detected that the file you're calling up wasn't KeyWords produced by the current version of . 13.43 limit of file-based search-words reached Limit of search-words reached 161 No more than 15 search-words can be processed at once, unless you use a file of search words to tell Concord to do them in a batch, where the limit is 500. 
13.44 links between Tools disrupted Links between Tools disrupted 4 WordSmith Tools Controller or an individual Tool has tried to call another Tool and failed. There may have been a fault in another program you're running or a shortage of memory. As inter-tool 438 are vital in this suite, you should exit WordSmith and re-enter. communication links 13.45 match list details not specified Match list details not specified 92 button but then failed to choose a valid match list file or else to type You pressed the Match List in a template for filtering. Try again. must be a number 13.46 Must be a number L and 1 , and O You typed in something other than a number. Be especially careful with lower-case (the letter) instead of 0 (the number). December, 2015. Page 469</p> <p><span class="badge badge-info text-white mr-2">485</span> 470 Error Messages mutual information incompatible 13.47 Mutual information list is incompatible A mutual information list derives from an index file, and knows which index file it derives from when computed. Normally when it opens up, it opens up the corresponding index file too. If that index If the file is not found on your PC or has been renamed, you will see this message. The mutual information can still be accessed but a) what you see in terms of Frequency and Alphabetical lists refers to a different index file, and b) it will not be possible to get concordances directly from the listing . 13.48 network registration used elsewhere Network registration running elsewhere or vice-versa The site licence registration for use on a network is not valid for use on a stand-alone pc, and vice- versa. If you get this message, please re-install as appropriate. 13.49 no access to text file - in use elsewhere? No access to text file: in use elsewhere? The file cannot be accessed. Perhaps another application is using it. If so, close down the file in that other application and try again. no associates found 13.50 No associates found Settings | Min & Max Frequencies ) and try again. Alter settings ( 13.51 no clumps identified No clumps identified Alter settings and try again. no clusters found 13.52 No clusters found Alter the settings ( Settings | Clusters ) and try again. There were too few concordance lines to find the minimum number needed, or the cluster length was too great. 13.53 no collocates found No collocates found 4 In the Controller , alter the settings (Concord settings | Min. Frequency) and try again. There were too few concordance lines to find the minimum number needed. December, 2015. Page 470</p> <p><span class="badge badge-info text-white mr-2">486</span> 471 WordSmith Tools Manual 13.54 no concordance entries No concordance entries found If you got no concordance entries, either a) there really aren't any in your text(s), b) there's a problem with the specification of what you're seeking, or c) there's a problem with the text selection. Check how you've spelt the search-word and context word. If you're using accented text 161 419 , check the format of your texts. If you're using a search-word file , ensure this was prepared using a plain Windows word-processor such as Notepad. 159 (* and ?) accurately? If you are looking for a question-mark, Have you specified any wildcards you may have put "?" correctly but remember that question-marks usually come at the ends of words, so you will need . *"?" Tip 159 Bung in an asterisk or two. You're more likely to find book* than book. 
no concordance stop list words 13.55 No concordance stop list words no deleted lines to zap 13.56 No deleted lines to Zap 129 . No harm done. zap You pressed Ctrl+Z but hadn't any deleted lines to no entries in KeyWords database 13.57 No entries in Keywords Database Alter settings and try again. no fonts available 13.58 no fonts available for language The operating system does not have a font which can show the characters for that language. You need to find and install a font. 13.59 no key words found No Key Words found 235 too p value Alter settings and try again. The minimum frequency is set too high and/or the small for any key words to be detected. For very short texts a minimum frequency of 2 may be needed. December, 2015. Page 471</p> <p><span class="badge badge-info text-white mr-2">487</span> 472 Error Messages 13.60 no key words to plot No key words to plot Had you deleted them all? 13.61 no KeyWords stop list words No keyword stop list words WordSmith either failed to read your stop-list file or it was empty. 13.62 no lemma list words No lemma match list words WordSmith either failed to read your lemma list file or it was empty. 13.63 no match list words No match list words 92 WordSmith match list file, or it was empty, or you forgot to check the either failed to read your action to be taken (one option is None ). Or you tried to match up using a list of words, or a template, when the current column has only numbers. Or else there really aren't any like those you specified! no room for computed variable 13.64 No room for computed variable There isn't enough space for the variable you're trying to compute. 13.65 no statistics available No statistics available Some types of word list created by WordSmith Tools , e.g. a word list of a key words database have words in alphabetical and frequency order but no statistics on the original text files. You WordList cannot therefore call the statistics up in . You might also see this message if the statistics file you're trying to call up is corrupted. no stop list words 13.66 No stop list words WordSmith either failed to read your stop-list file or it was empty. no such file(s) found 13.67 No such file(s) found You typed in the name of a non-existent file. If typing in a file name, remember to include the full drive and folder as well as the file name itself. December, 2015. Page 472</p> <p><span class="badge badge-info text-white mr-2">488</span> 473 WordSmith Tools Manual 13.68 no tag list words No tag list words WordSmith either failed to read your tag file or it was empty. 13.69 no word lists selected No word lists selected For to know which word lists to compare, you need to select them, by clicking on one WordSmith in each folder. If you've changed your mind, press Cancel. 13.70 not a valid number Not a valid number has just attempted to read (e.g. from Either you've just typed in, or else WordSmith Tools 113 , the file), something which is expected to be a number but wasn't. wordsmith6.ini defaults O as equivalent to the number 0 Computers will not see the capital . Or else there is a number but accompanied by some other letters or symbols, e.g. £30 . If this happens when WordSmith is starting up, check out the wordsmith6.ini file for mistakes. 13.71 not a WordSmith file The file you are trying to open is not a WordSmith Tools file. WordSmith makes files containing your results, files whose names end in .LST, .CNC, .KWS , etc. 
These are in WordSmith's own format 444 cannot Word .doc and cannot be opened up by Microsoft Word -- likewise a plain text file or a usually be read in by WordSmith as a data file, but only as a text file for processing. 323 See also: Converting Data from Previous Versions 13.72 not a current WordSmith file Not a Current WordSmith File The file you are trying to open was made using WordSmith but either · it's a file made using version 1-3 or it's a file made with the beta version of WordSmith and the format has had to change (sorry!) · 323 If the former, you may be able to convert it using the Converter . nothing activated 13.73 Nothing activated Some forms have choices labelled "Activated" which you can switch on and off. If they are un- WordSmith will ignore them. checked, you can still see what they would be but December, 2015. Page 473</p> <p><span class="badge badge-info text-white mr-2">489</span> 474 Error Messages Only X% of words found in reference corpus 13.74 Only X% of words found in reference corpus When WordSmith computes key words it checks to see that most of the words in your small word- list are found in the reference corpus, as would be expected. If less than 50% are found, you will get this warning. That is a bit unusual, and is supplied as a warning that for example there might be something strange about one of your two texts. If you know there is nothing strange, then you could ignore the message. If you are processing clusters you are much more likely to see this warning, however, as the chance of 3-word strings matching in the two lists is less than that of single words matching. It is up to you to decide whether there is some error in what you are doing or it is OK for many of your smaller word list's words/clusters not to be found in the reference corpus word list. It might not be so unusual if your reference corpus was very small. But if it is indeed very small, the whole procedure is not very reliable. WordSmith simply looks at the frequencies of each word form and uses basic statistics to compute how greatly they differ in frequency. Basic statistics rely on a notion of what can be expected. If the reference corpus is incredibly small, WordSmith's computation of what is to be expected isn't really very reliable. As a dumb example if you met three citizens of a country you have never visited, and all looked fat, you might suppose the people of that country to be fat in general, but the sample size is not reliable for such an expectation. The KW procedure isn't really proof of anything, incidentally. Words don't occur in texts at all randomly and all ordinary basic statistics can do in my opinion is give us food for thought. So a KW listing isn't proof of anything but it may well give good ideas as to what may prove interesting avenues for research. original text file needed but not found 13.75 Original text file(s) needed but not found 113 WordSmith needed to find the original text file which the list was based on. But it To proceed, has been moved or renamed. Or if on a network, your network connection is not mapped, or the network is down ...or else the right disk or CD-ROM is not in the drive! 13.76 printer needed WordSmith needs a printer driver to be installed, even if you never actually print anything. You don't 97 function in Concord, need to buy a printer or to switch a printer on, but the Print Preview WordList, KeyWords etc. does need to know what sort of paper size you would print to. 
If you get a message complaining that no printer has been installed, choose Start | Settings | Printers & Faxes and install a default printer (any printer will do) in Windows.

13.77 Registration code in wrong format
Registration code unexpectedly short. PASTE the registration supplied into the box; only paste into the Name or Other Details boxes the details supplied. If you see this message on registering you may have a registration for a previous major version. If so, contact sales at lexically dot net with your original purchase details and you will be entitled to a 50% discount on the current version.

13.78 Registration is not correct
It doesn't match up with what's required for a full updated version! The old registration code in earlier versions is no longer in use. WordSmith will still run but in Demonstration Version mode.

13.79 Short of Memory!
An operation could not be completed because of shortage of RAM.

13.80 Source Folder file(s) not found
You typed in the name of a non-existent file. If typing in a file name, remember to include the full drive and folder as well as the filename itself.

13.81 Stop list file not found
You typed in the name of a non-existent file. If typing in a file name, remember to include the full drive and folder as well as the file name itself.

13.82 Stop list file not read
Something has gone wrong with a disk reading operation. The file you're trying to read in may be corrupted. This happens easily if you often handle very large files, especially if it's a long time since you last ran Scandisk to check whether any clusters in your files have got lost. See your DOS or Windows manual for help on fragmentation.

13.83 Tag File not found
You typed in the name of a non-existent file. If typing in a file name, remember to include the full drive and folder as well as the file name itself.

13.84 Tag list file not read
Something has gone wrong with a disk reading operation. The file you're trying to read in may be corrupted. This happens easily if you often handle very large files. See your Windows manual for help on fragmentation.

13.85 This function is not yet ready!
Temporary message, for functions which are still being tested.

13.86 This is a demo version
You will probably want to upgrade to the full version.

13.87 This program needs Windows XP or better
From version 4.0, this program has required operating systems for this millennium.

13.88 To stop getting this message...
Get an update. This is "annoyware" for the demonstration version.

13.89 Too many requests to ignore matching clumps
The limit is 50. Do any remaining joining manually.

13.90 Too many sentences
The limit is 8,000. Do the task in pieces.

13.91 Truncating at xx words -- tag list file has more
The tag list file has more entries than the current limit. Or else it isn't a tag list file at all!

13.92 Two files needed
You need to select 2 files for this procedure. Select (by clicking while holding down the Control key) 2 file-names in the list of files.
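For instance (the file names here are invented purely for illustration): click novel_1.txt in the list, then hold down the Control key and click novel_2.txt, so that both names are highlighted before proceeding.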
13.93 Unable to merge Keywords Databases
Perhaps there wasn't enough RAM to carry out the merge.

13.94 Why did my search fail?
The standard search function (F12) for a list of data operates on the currently highlighted column. If you want to search within data from another column, click in that column first. By default, a search is "whole word". Use * at either end of the word or number you're searching for if you want to find it, e.g. in any data consisting of more than one word. (The advantage of the asterisk system is that it allows you to specify either a prefix or a suffix or both, unlike the standard Windows search "whole word" option.)

13.95 Word list file is faulty
Each type of file created by WordSmith Tools has its own default filename extension (e.g. .LST, .KWS) and its own internal structure. If you have another file with the same extension produced by another program, this will not be compatible. It would not be sensible to rename a .CNC file to .TXT, or vice-versa! WordSmith has detected that the file you're calling up wasn't produced by the current version of WordList.

13.96 Word list file not found
You typed in the name of a non-existent file. If typing in a file name, remember to include the full drive and folder as well as the file name itself.

13.97 WordList comparison file is faulty
Each type of file created by WordSmith Tools has its own default filename extension (e.g. .LST, .KWS) and its own internal structure. If you have another file with the same extension produced by another program, this will not be compatible. It would not be sensible to rename a .CNC file to .TXT, or vice-versa! WordSmith has detected that the file you're calling up wasn't produced as a comparison file by WordList.

13.98 WordSmith Tools already running
Don't try to start WordSmith Tools again if it's already running. Just Alt-tab back to the instance which is running. (You can, however, have several copies of each tool running at once.)

13.99 WordSmith Tools expired
Message for limited period users only. Your version of WordSmith Tools has passed its validity and is now in demo mode. Download another from the Internet.

13.100 WordSmith version mis-match
Since the various Tools are linked to each other, it is important to ensure that the component files are compatible with each other. If you get this message it is because one or more components is dated differently from the others. Solution: download those you need from one of the contact websites.

13.101 XX days left
Message for limited period users only. At the end of this time WordSmith will revert to demo mode.

Index
(Alphabetical index of topics, from "# in clusters" through "z-score computing", with page references to the manual, original pages 478-492.)
# Inequality between products of measures of sets
Let $X$ be a compact space and $\mu$ the Lebesgue measure, with $\mu(X)=1$. Let $A$ and $B$ be two measurable subsets of $X$ with positive measure. What can I say about the relation between $\mu(A \cap B)$ and $\mu(A)\,\mu(B)$? Is one of the two quantities always larger than or equal to the other one?
If not, how can I prove that, given a finite number of disjoint sets $C_\sigma$ such that $\bigcup_{\sigma} C_{\sigma}= X$, then
$$\sum_{\sigma}\mu(A \cap C_{\sigma}\cap B) \leq \sum_{\sigma} \frac{\mu(A \cap C_{\sigma})}{\mu(A)}\,\frac{\mu(C_{\sigma} \cap B)}{\mu(C_{\sigma})}\;?$$
As you've noted in your tags, you can formulate this in terms of probability: $\mu(A \cap B)$ is the probability that two events both occur, and $\mu(A)\,\mu(B)$ would be the probability that those two events both occur given that they are independent. – Christopher A. Wong Nov 12 '12 at 0:21
To answer your first question, there can be no general inequality relationship between $\mu(A\cap B)$ and $\mu(A)\mu(B)$. Consider a set $A\subset X$ with $0<\mu(A)<1$. Take $B=A$, so that $\mu(A\cap B)=\mu(A)$, while $\mu(A)\mu(B)=(\mu(A))^2<\mu(A)$ since $0<\mu(A)<1$. Thus in this case we have $\mu(A\cap B)>\mu(A)\mu(B)$. Now take $A,B$ such that $A\cap B=\emptyset$ and $\mu(A),\mu(B)>0$. Then $0=\mu(A\cap B)<\mu(A)\mu(B)$.
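Concretely, with Lebesgue measure on $X=[0,1]$, both cases occur:
$$A=B=[0,\tfrac{1}{2}]:\qquad \mu(A\cap B)=\tfrac{1}{2}>\tfrac{1}{4}=\mu(A)\mu(B),$$
$$A=[0,\tfrac{1}{2}],\ B=(\tfrac{1}{2},1]:\qquad \mu(A\cap B)=0<\tfrac{1}{4}=\mu(A)\mu(B).$$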
# Popular Science Monthly/Volume 69/July 1906/Are the Elements Transmutable, The Atoms Divisible and Forms of Matter But Modes of Motion?
(1906)
Are the Elements Transmutable, The Atoms Divisible and Forms of Matter But Modes of Motion? by Samuel Lawrence Bigelow
ARE THE ELEMENTS TRANSMUTABLE, THE ATOMS DIVISIBLE AND FORMS OF MATTER BUT MODES OF MOTION?
By Professor S. L. BIGELOW
UNIVERSITY OF MICHIGAN
THE advance workers in chemistry and physics are constantly accumulating new facts and propounding new theories which must be digested and incorporated in the body of the sciences. The process of assimilation is often slow, and it is right that new and important facts should be vouched for by more than one investigator, and that a new theory should prove its usefulness before being placed beside old and tried facts and theories. But too often the effects of the advances are unduly delayed through a reluctance to revise old text-books or old lectures, perhaps not so much because of mere laziness, as because of a failure to appreciate the full force of the evidence in favor of new views, or of the advantages to be obtained by their adoption. The fact that the arguments for an innovation, for a time at least, are scattered through many journals, leads to an underestimate of their cumulative force.
It is the purpose of this article to gather the main facts, some old, many recent, most of them fairly generally known, which are compelling us to alter our old definitions, and to show what a strong argument they make in favor of believing in the transmutation of the elements, the divisibility of the atoms and that what we call matter is simply a mode of motion.
It is interesting to note the caution with which text-books express themselves when it is necessary to give definitions for these terms. By a careful choice of words most authors avoid making false statements, but they certainly do frequently lead their readers to unjustifiable conclusions. For instance, in Roscoe and Schorlemmer's 'Treatise on Chemistry,' issued in 1891, we find the definition, 'An atom is the smallest portion of matter which can enter into a chemical compound.' As is the usual custom, the ideas of the alchemists regarding the possibility of transmuting metals is held up to ridicule, and thus, by implication at least, the ultimate nature of the elements and the idea that the atom is indivisible are infallibly conveyed to the reader. A more recent instance is to be found even in the late editions of one of the most widely used texts on general inorganic chemistry. In this book, on page 4, we read, 'Molecules may be defined as the smallest particles of matter which can exist in the free state'; on page 5, 'Atoms are the smallest particles of matter which can take part in a chemical change'; on page 6, 'Molecules consisting of atoms of the same kind are termed elementary molecules, and substances whose molecules are so constituted are known as elements.' The numbers of the pages on which these statements occur are also significant. This reminds one of the methods of the old Greek philosophers, who pretended to solve all questions of science by pure deduction, positing some hypothesis, and then developing everything else by meditation in their closets, disdaining to disturb the order of their thoughts by experiments. But it is unworthy of the present age of inductive science, wherein every thought has, or should have, experimental evidence as its starting point. It can not be said that this particular author has made a false statement, but he has left the subject incomplete; cautiously reserving a loophole for his own escape, he fairly traps his readers. For it is inevitable that, with such didactic phraseology, and without having his attention called to the hypothetical, the tentative, nature of these definitions, the student should become convinced that the most fundamental facts of chemistry are that there are about eighty substances so simple that they can never be broken up into simpler things, and that all substances are composed of ultimate particles, called atoms, eternally indivisible.
A student started out with this hodgepodge of fact and theory thoroughly implanted in his mind as the basis for all his future knowledge is sadly handicapped, indeed he is intellectually maimed, and it may take him years to overcome the habit of confusing fact and theory, and to learn how to think straight; perhaps he never succeeds in overcoming it. This confusing of facts with theories is a vicious habit, which grows till it colors all one's thoughts, hinders the free play of the intellect, diminishes the power of right judgment and starts the ossification of the wits even before the age set by Dr. Osler.
It is not necessary to consider a student of chemistry as an infant in arms to be fed on predigested food. He may be assumed to have a digestive apparatus of his own. Give him the benefit of any doubt and ascribe to him at least a dawning intelligence, which, properly stimulated, may some day shed some light of its own. It is the characteristic course of a lazy teacher, and one pleasing to lazy students as well, to supply a lot of personal opinions in the shape of cut and dried definitions, so easy to memorize and, unfortunately, so hard to forget; phrases which do not require the intellect to bestir itself and exercise its faculty of criticism, to pass judgment for itself between alternative or conflicting views. Strictly speaking, nothing should be presented in the form of a definition except what is, in itself, a statement of experimental facts, as, for instance, we describe or define a unit of measurement in terms of other units. When dealing with a subject where more than one opinion is permissible, all should be stated, or at least the attention should be directed to the fact that others draw different conclusions from the same premises.
The average student is better able to face issues and weigh arguments than most of us realize, and it is more important to educate those falling below the average in this particular than in any other. We should state the facts and then reason in such a way as to teach students how to think. It is indispensable for them to learn to think for themselves. Great stores of chemical facts are of but little real use, unless accompanied by an ability to adapt and to apply them in new conditions, unforeseen by either teacher or student in school or university days, but surely coming in after life. It is the prime necessity for research work or for originality of any kind, and we all are willing to admit that originality is what should be cultivated.
There is a great difference between the phrases, 'elements are substances which can not be broken up' and 'elements are substances which we have not as yet succeeded in breaking up'; and we should mark well the difference. This caution, lest we slip into the error of stating as fact more than we really know, is the distinguishing difference between the chemistry of to-day and the chemistry of a few years ago. It is more than this, it expresses concisely the difference between the way in which any science should be taught and studied, and the way in which it should be neither taught nor studied.
This particular differentiation between two definitions of the term element has been more than justified by the results which have followed the last ten years' work in pure chemistry, spectroscopy, radioactivity and Röntgenology (a term which has been seriously proposed by one of that fraternity which seems to consider its main function in life to be the coinage of new words).
The main arguments which may be marshaled in favor of considering the elements as ultimates, and the atoms as indivisible consist:
First, of all those facts which Dalton condensed into the laws of definite and multiple proportions, and to which there have been as many additions as there have been analyses and syntheses made before or since his time.
Second, Dulong and Petit's law that the atomic heats of all solid elements are the same.
Third, the isomorphism of many compounds containing similar elements, a phenomenon discovered by Mitscherlich.
Fourth, Faraday's law, that equivalent quantities of the elements are deposited at the electrodes during electrolysis.
Truly, an imposing array of evidence, and more than sufficient to justify us in making the assumption that atoms exist. But curiously enough, there is not one item amongst all these facts compelling us to believe that these atoms are the ultimate constituents, or that they are indivisible. These latter hypotheses are purely gratuitous, tacked on by Dalton and retained by succeeding chemists and physicists for no good reason. Perhaps because imitation is a characteristic inherited from our simian ancestry, and is so much easier for us than originality. Many a chemist looks askance at any tampering with the atoms, apparently fearing that it may hurt them, or even destroy them utterly and the atomic weights with them. Or he trembles for his spidery and tenuous structural formulæ, knowing full well that if deprived of these he will be irretrievably lost in a labyrinth, without a thread to guide him. While, if he is not permitted to think of the carbon atom as a little chunk of matter, tetrahedral in form, he thinks he is launched on a sea of troubles.

But all this apprehension arises from a misunderstanding. That the atomic weights remain unharmed and unaltered, as the units for chemical calculations, and that nothing which is good or useful about the atomic theory is destroyed or even assailed by the new ideas, that the trend of these new ideas is unmistakably constructive and not destructive, are best shown by a review of the arguments in favor of the hypothesis that the atom is divisible, and that our elements are not elements in the true sense of the word.
There is nothing new in this view; it formed the first article of the faith of the alchemists. It was unqualifiedly denied by Dalton, and fell into such disrepute that even within recent years one risked being called a dreamer, or even a fool, if he dared to consider it possible. Here again is an instance of the desirability of being as precise as possible in the use of terms. Many believe experimental evidence of the complexity of 'elementary atoms' and the existence of one 'mother substance' must be followed immediately by directions for transforming elements into one another; by the transmutation of baser metals into gold. But these are two wholly distinct propositions. An astronomer might locate a mountain of gold on the surface of the moon, but there would still be a goodly chasm to bridge before he derived much material benefit from his discovery!
The idea that there is one fundamental substance would not down. The hypothesis of the English physician, Prout, is a familiar one. When the atomic weight of hydrogen is set equal to unity, the atomic weights of all the other elements come out remarkably close to whole numbers. There exist numerous groups of three elements, commonly called Döbereiner's triads, the individual members of one group being similar in their chemical properties, and so related that the atomic weight of the middle member is the arithmetic mean of the atomic weights of the extreme members. These are the facts which led Prout to suggest that there was but one element, namely, hydrogen, the others being complexes containing different quantities of this ultimate substance. It followed that the differences between the atomic weights and whole numbers were to be ascribed to experimental errors in the determination of these values. The desire to test this hypothesis was one of the chief motives for some of the most careful determinations of atomic weights which have ever been made. These determinations resulted in proving that the divergences of the atomic weights from whole numbers were greater than could be accounted for on the basis of experimental errors. This precluded the possibility that the atom of hydrogen was the common ultimate unit, but did not dispose of the possibility that a half, or quarter, or some other fraction, of the hydrogen atom might play that rôle.
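To take a standard instance of the triads just mentioned: lithium, sodium and potassium have atomic weights of roughly $6.9$, $23.0$ and $39.1$, and
$$\frac{6.9+39.1}{2}=23.0,$$
the arithmetic mean of the extreme members reproducing the atomic weight of the middle member.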
In 1901 Strutt[1] applied the mathematical methods of the theory of probabilities to the most accurately determined atomic weights, and calculated that the chance that they should fall as close to whole numbers as they do was only one in one thousand. The inference from this is that it is not a matter of chance, but that there is a regularity in the atomic weights which we do not understand; a regularity which points to the probability that our elements are complex substances, constructed according to some system, from some simpler substance.
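Whatever the details of Strutt's own reckoning, the flavor of such an argument is easily suggested. If the fractional parts of the atomic weights were purely accidental, each would be as likely to fall anywhere between two whole numbers as anywhere else, and the chance that $k$ independently measured weights should all lie within $\delta$ of a whole number would be of the order of
$$(2\delta)^{k},$$
a quantity which becomes very small even for moderate values of $k$.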
All the facts comprised in that great generalization, the periodic law, which states that the properties of the elements, both chemical and physical, are functions of their atomic weights, and most of them are periodic functions, point unmistakably to the same conclusion.
The evidence from spectroscopic analysis is so abundant that it is not easy to compress it into a few general statements.
In the first place, the spectrum of each of our elements consists of numerous lines, a fact not exactly compatible with the notion of extreme simplicity of the particles emitting the light.
In the second place, one and the same element, contrary to common belief, frequently has two or three distinctly different spectra, the particular spectrum which appears depending upon the pressure and the temperature at which the element is while emitting the light. In fact the extraordinary spectroscopic results obtained when highly rarefied gases enclosed in tubes (variously called Plücker, Hittorf, Geissler or Crookes tubes) were made luminous by the passage of high potential electricity, induced Crookes to suggest in 1887 a theory that the elements were all built up by gradual condensation with falling temperature from a fundamental substance to which he gave the name protyl.[2]
In the third place, the lines in the spectrum of one element may be separated out into several series. Each line corresponds, as is well known, to light of a definite wave length. The wave lengths of the lines comprised in one series are related to each other in such a way that a general formula may be derived for them. This means that, given some of the lines, the wave lengths, and thus the positions, of other lines belonging in the same series may be calculated. In this way the positions of certain lines for certain elements were foretold. Search failed to reveal all of them in light emitted by the element at any temperature producible in the laboratory. But some of the missing lines have been found in the spectra of the hottest stars, stars far hotter than our sun. At the same time many of the lines obtained by terrestrial means are lacking in the spectra of these stars. We have ample experimental evidence that many complex substances dissociate, as we call it, into less complex substances within the temperature range readily controlled in the laboratory. The inference is right at hand that at extreme, at stellar, temperatures our elements themselves are dissociated into simpler substances. To these substances, our elements, in this other condition, have been given their customary names, but with the prefix proto. Thanks to the introduction of Rowland's diffraction gratings for the study of these spectra, we have observations indicating the existence of proto hydrogen, proto calcium, proto magnesium, proto iron and so on through a list of a dozen or more 'proto' elements.[3]
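A familiar instance of such a series formula is the Balmer series of hydrogen, whose lines fit
$$\frac{1}{\lambda}=R\left(\frac{1}{2^{2}}-\frac{1}{n^{2}}\right),\qquad n=3,4,5,\ldots,$$
with $R\approx 1.097\times 10^{7}\ \mathrm{m^{-1}}$, so that, a few lines being known, the wave lengths of the remaining members of the series may be computed in advance.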
Continuation of the work upon which Crookes was engaged resulted in the discovery of the X-rays by Röntgen in 1895. This date may be said to mark a new era in many of our conceptions regarding the universe about us. To J. J. Thomson, professor of physics at Cambridge, England, we owe the greater part of our present knowledge of the cathode rays. He devised most of the experiments and the ingenious, but strictly logical, reasoning which justify us in supposing that these cathode rays consist of swarms of minute particles, which he called corpuscles (reviving an old term and an old theory of Isaac Newton's); particles moving with velocities approaching that of light, each one carrying a charge of what we call negative electricity. He, and those working with him, determined the quantity of this electrical charge to be the same on each corpuscle, and to be the same as the charge we have good reason to suppose is carried by any monovalent ion in solution. By several methods the approximate number of these particles in a given volume and the weight of the individual particle were estimated. This weight appears to be about one eight-hundredth of the weight generally ascribed to the hydrogen atom, the lightest of all the atoms. It may be objected that there is no positive proof of the existence of these corpuscles, nor do we know the weight or mass of one of them. That is very true, but neither have we positive proof of the existence of atoms, nor do we know the weight of one atom. We can only say that the evidence makes the existence of these minute individuals, atoms and corpuscles, extremely plausible, and makes one as plausible as the other.
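(The corpuscle has since been identified with the electron; the modern figure puts its mass at about $1/1836$ that of the hydrogen atom, roughly $9.1\times10^{-28}$ gram against $1.67\times10^{-24}$ gram, so these early estimates were of the right order.)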
Grant that we have discovered particles—in round numbers one thousandth part the size and weight of the hydrogen atom—the argument is still not complete for the divisibility of the atom. Perhaps we have found a new element. But cathode rays were produced under circumstances where they must have arisen from the cathode itself, and it is hard to escape from the conclusion that the atoms of the cathode disintegrated to a certain extent to furnish these particles. Furthermore, rays have been studied having as their sources different metals under the influence of electrical currents, different metals heated to incandescence, flames of different kinds and ultra-violet light; and these rays appear to consist of corpuscles of the same weight, no matter what their source. This makes it difficult to escape from the further conclusion that atoms of a great variety of natures are capable of disintegrating and of furnishing the same product by the disintegration;[4] and this is as much as to say that instead of about eighty different elements we have one 'mother substance,' and Prout's hypothesis is once more very much alive, somewhat modified, it is true, and in a new garb, better suited to the present fashions.
It remains to rehearse briefly the evidence to be obtained from radio-active phenomena. In the first place, the rays incessantly sent out from these extraordinary substances consist, at least in part, of rays like the cathode rays, and are streams of the same kind of corpuscles, but, on the whole, traveling with greater velocities than the corpuscles of the cathode rays. It has been proved by Rutherford and Soddy that the emission of the radiations from these substances is accompanied by a disintegration, or decay, as they describe it, of the substances themselves. These investigators have caught some of the products of this decay and have studied their properties. These products themselves decay, some slowly, some rapidly, sending forth other rays and furnishing new products to decay in turn. Indeed each new issue of a scientific journal for the past few years seems to chronicle the birth, life and death of a fresh radio-active substance. The rate at which new offspring of radium, thorium and allied elements are discovered and studied during their fleeting existences reminds one of nothing so much as the genealogy of Noah as given in the fifth chapter of Genesis.[5]
These products appear to be elements, and this idea that some elements may have existences of but short duration, from a few seconds to many years, is a decidedly novel one. It has been suggested that this may account for some of the vacant spaces in our periodic table of the elements, particularly in the neighborhood of thorium, radium and uranium. Perhaps these spaces never will be occupied except by transients. Indeed it is not impossible that all our elements are mere transients, mere conditions of things, all undergoing change. But there is no immediate danger of their all vanishing away in the form of rays and emanations. Rutherford has calculated that radium will be half transformed in about 1,300 years, that uranium will be half transformed in ${\displaystyle 6\times 10^{8}}$ years, and thorium in about ${\displaystyle 2.4\times 10^{9}}$ years. We may safely say the other elements are decaying much more slowly, so we may continue to direct our anxieties towards the probable duration of our coal beds and deposits of iron ore as matters of more present concern.
The objection may be raised that perhaps radium should not be classed as an element, but rather should be considered as an unstable compound in the act of breaking down into its elements. But the answer to this objection is at hand. The evolution of energy accompanying these changes is far in excess of that obtainable from any known chemical process, so far in excess that it is certain we are dealing with a source of energy hitherto unknown to us, with a wholly new class of phenomena. The following quotation from Whetham[6] will convey an adequate conception of the magnitude of the forces at work here:
It is possible to determine the mass and the velocity of the projected particles, and, therefore, to calculate their kinetic energy. From the principles of the molecular theory, we know that the number of atoms in a gram of a solid material is about ${\displaystyle 10^{20}}$. Four or five successive stages in the disintegration of radium have been recognized, and, on the assumption that each of these involves the emission of only one particle, the total energy of radiation which one gram of radium could furnish if entirely disintegrated seems to be enough to raise the temperature of ${\displaystyle 10^{8}}$ grams, or about 100 tons, of water through one degree centigrade. This is an underestimate; it is possible that it should be increased ten or a hundred times. As a mean value, we may say that, in mechanical units, the energy available for radiation in one ounce of radium is sufficient to raise a weight of something like ten thousand tons one mile high.
Again,
the energy liberated by a given amount of radioactive change. . . is at least 500,000 times, and may be 10,000,000 times, greater than that involved in the most energetic chemical action known.
The theory that the source of most of the sun's energy is a decay of elements analogous to radium, to disintegration of atoms, is acknowledged to account better than any previous theory for the great quantity of this energy which we observe, and for the length of time during which it must have been given off according to the evidences of geology.
There is no chemical reaction which is not hastened or retarded by a change in temperature. In general, the velocity of a chemical reaction is increased by an elevation of the temperature and diminished by a reduction of the temperature. But radium compounds emit their rays undisturbed, at an even, unaltered rate, whether they be heated to a high temperature or cooled by immersion in liquid hydrogen and, what is perhaps equally striking, whether they are in the solid state or dissolved in some solvent.
In view of such facts as these, it is idle to suppose that radium is an unstable compound decomposing into its elements, using the terms compound and element in their usual sense. Conflict as it may with preconceived opinions, we seem forced to concede, not only that the transmutation of the elements is possible, but also that these transmutations are going on under our very eyes.
As has already been pointed out, this does not mean that we shall shortly be able to convert our elements into each other. Far from it, up to the present time we have not the slightest idea how to initiate such a process nor how to stop it. We can not, by any means known to us, even alter the rate at which it proceeds.
Now how shall we fit all these new facts and ideas in with our old ones regarding the elements and atoms, and how many of the old ideas must be discarded? Brief consideration is enough to convince us that very few of the old ideas, in fact none of value, need be sacrificed. We must indeed grant that Dalton's fundamental assumption is false, that the atom, in spite of its name, is divisible, and consequently that our elements are not our simplest substances, but decidedly complicated complexes. But all the facts included in the laws of definite and multiple proportions remain fixed and reliable, as indeed must all facts, expressions of actual experimental results, no matter what else varies. And there is not the least necessity for altering the methods of using atomic weights in calculations, nor for ceasing to use structural diagrams and models for molecules. We must merely modify our ideas and definition of an atom, and this modification is in the direction of an advance. We know more about an atom, or think we do.
Assume the inferences from the evidence just reviewed to be correct, and how do they affect our conception of the atom? First of all, our smallest, lightest, simplest atom, that of hydrogen, becomes an aggregation of about eight hundred smaller particles or corpuscles, and the atoms of other elements become aggregations of as many corpuscles as are obtained by multiplying the atomic weight of the element by eight hundred. Thus the atom of mercury must be thought of as containing 800 times 200, or 160,000, corpuscles. Next, the methods by which we believe we can calculate the approximate size of atoms and corpuscles give us values which enable us to make such comparisons as the following, suggested by Sir Oliver Lodge: 'The corpuscle is so small as compared to the atom that it, within the atom, may be likened to a mouse in a cathedral,' or 'the corpuscle is to the whole atom as the earth and other planets are to the whole solar system.'
These corpuscles are probably gyrating about each other, or about some common center, with velocities approaching that of light. It seems needful to suppose this, for it is hard to imagine that the enormous velocities observed could be imparted to a corpuscle at the instant it leaves the atom to become a constituent of a cathode ray. It is more reasonable to imagine that the corpuscle already had this velocity and that it flew off at a tangent owing to some influence we do not understand.
This may appear, after all, to be little more than pushing back our questions one stage, so that the position occupied in our thoughts but yesterday by the atom is now occupied by the corpuscle. Quite true, but this is in itself a great step, for the advancement of knowledge consists of nothing else than such pushing back of the boundaries. We dare not assume the end is reached, for there is no proof that the corpuscles are ultimate. They mark the present limit of our imaginings based on experiment, but no one can say but what the next century may possibly witness the shattering of the corpuscles into as many parts as it now appears to take to make an atom.
The question is a legitimate one, do we know any more about these 'new-fangled' corpuscles than we did about the old atoms? The answer is, yes, we probably do. We can go further in our reasoning on the basis of the properties of the corpuscles, and arrive at results which are startling when heard for the first time.
Lenard[7] has shown that the absorption of cathode rays by different substances is simply proportional to the specific gravity of those substances and independent of their chemical properties. It is even independent of the condition of aggregation, i. e., whether the absorbing substance be investigated as a gas, as a liquid or as a solid. This is another strong argument in favor of the view that there is but one 'mother substance' which consists of corpuscles. The corpuscles of the cathode rays must be considered as passing unimpeded through the interstices between the corpuscles of the atom. Lenard calls the corpuscles dynamides and considers them as fields of electrical force with impenetrable central bodies which then constitute actual matter. He calculates the diameter of this center of actual matter as smaller than ${\displaystyle 0.3\times 10^{-10}(=0.000,000,000,03)}$ millimeter. Applying these results to the case of the metal platinum, one of the most dense of the metals, one of those with the highest specific gravity, he concludes that a solid cubic meter of platinum is in truth an empty space, with the exception of, at the outside, one cubic millimeter occupied by the actual matter of the dynamides.
If we can thus reasonably and mathematically eliminate the matter of a cubic meter of one of our densest metals to such an extent, it should not be very difficult to make one more effort and eliminate that insignificant little cubic millimeter still remaining, and say, with cogent reasons behind us for the statement, that there is no matter at all, but simply energy in motion. This is exactly what has been done by many who occupy high and authoritative positions in the scientific world.
Long before experimental evidence of the existence of corpuscles had been obtained, it was demonstrated that an electrically charged body, moving with high velocity, had an apparent mass greater than its true mass, because of the electrical charge. The faster it moved the greater became its apparent mass or, what comes to the same thing, assuming the electrical charge to remain unaltered, the greater the velocity the less the mass of the body carrying the charge needed to be to have always the same apparent mass. It was calculated that when the velocity equaled that of light, it was not necessary to assume that the body carrying the charge had any mass at all! In other words, the bit of electric charge moving with the velocity of light would have weight and all the properties of mass.
This might be looked upon as an interesting mathematical abstraction, but without any practical application, if it were not for the fact that Kaufmann[8] determined the apparent masses of corpuscles shot out from a radium preparation at different velocities, and compared them with the masses calculated on the basis that the whole of the mass was due to the electric charge. The agreement between the observed and calculated values is so close that it leads Thomson to say: "These results support the view that the whole mass of these electrified particles arises from their charge."[9]
Then the corpuscles are to be looked upon as nothing but bits of electric charge, not attached to matter at all, just bits of electric charge, nothing more nor less. It is this view which has led to the introduction of the term electron, first proposed by Stoney, to indicate in the name itself the immaterial nature of these ultimates of our present knowledge. We have but to concede the logical sequence of this reasoning, all based on experimental evidence, and the last stronghold of the materialists is carried, and we have a universe of energy in which matter has no necessary part.
If we accept the electron theory, our atoms are to be considered as consisting of bits of electric charge in rapid motion, owing their special properties to the number of such bits within them, and also, no doubt, to the particular orbits described by the electrons. If space permitted it would be interesting to show how admirably the periodicity of the properties of the elements, as expressed in Mendelejeff's table, can be accounted for on the basis of an increasing number of like electrons constituting the atoms of the successive elements. We have molecules consisting, at the simplest, of two such systems within the sphere of each other's attraction, perhaps something as we have double stars in the heavens.
A possible explanation of the puzzling property of valence is offered, in that an atom less one electron, or plus one electron, may be considered as electrically charged, and therefore capable of attracting other bodies, oppositely charged, to form electrically neutral systems. An atom less two electrons, or with two electrons in excess, would have twice the ability to combine, it would be what we call divalent, and so on. An electronic structure of the atom furnishes a basis from which a plausible explanation of the refraction, polarization and rotation of the plane of polarized light may be logically derived. Hitherto explanations for the observed facts have been either wanting or more or less unsatisfactory. For instance, grant the actual existence of tetrahedral carbon atoms, with different groups asymmetrically arranged at the apices, and yet we can not see any good and valid reason why such a structure should be able to rotate the plane of polarized light. Grant that the molecule consists of systems of corpuscles traveling in well-defined orbits, and we see at once how light, consisting of other electrons of the same kind, traversing this maze, must be influenced.
Adopting this theory of corpuscles or electrons, not a concept of any value need be abandoned. On the contrary, the theory furnishes us with plausible explanations of many facts previously unexplained. Its influence is all in a forward direction towards a simplification and unification of our knowledge of nature.
A few words must be said regarding the old, the threadbare, argument which, of course, is cited against the electron theory. The materialist says he simply can not accept a theory which obliges him to give up the idea of the existence of matter; he says the table is there because he can see it and feel it and that must end the discussion for any one with common sense and moderately good judgment. Now it is the reverse of common sense to let that end the discussion, and our materialist is pluming himself on precisely those qualities which he most conspicuously lacks. He assumes the obnoxious theory to involve consequences which it does not involve and then condemns it because of those consequences. As a rule it is because he knows little about it, and has thought less, that he assumes the electron theory to be pure idealism in an ingenious disguise, that form of idealism which asserts that there is no universe outside ourselves and that everything is a figment of the imagination of the observer. The electron theory postulates a universe of energy outside ourselves. It does not deny the existence of the table; quite the reverse, it asserts it and then offers a detailed description of it, and why it has the properties which it has. This is more than any materialistic theory can do. The electron theory affirms the existence of what we ordinarily call matter. It defines, describes, explains these things, ordinarily called matter, in a clear and logical manner, on the basis of experimental evidence, as a mode of motion. It opposes the use of the word matter, solely because that word has come to stand, not only for the object, but also for the assumption that there is something there which is not energy.
Another groundless objection is offered by the materialists. They say this electron theory is clever, perhaps plausible, but very vague and hopelessly theoretical. Of course it is theoretical, but it is a theory more intimately connected with experimental facts than any other theory regarding the ultimate constituents. One departs further from known facts in assuming the existence of a something to be called matter. What is this matter which so many insist that we must assume? No one can define it otherwise than in terms of energy. But forms of energy are not matter as the materialist understands the word. Starting with any object and removing one by one its properties, indubitably forms of energy, we are finally left with a blank, a sort of a hole in creation, which the imagination is totally unable to fill in. The last resort is the time-honored definition, 'matter is the carrier of energy' but it is impossible to describe it. The assumption that matter exists is made then because there must be a carrier of energy. But why must there be a carrier of energy? This is an assertion, pure and simple, with no experimental backing. Before we have a right to make it we should obtain some matter 'strictly pure' and free from any energy, or, at least, we should be able to demonstrate on some object what part of it is the energy and what part the matter, the carrier of the energy. We have not done this, we have never demonstrated anything but forms of energy, and so we have no evidence that there is any such thing as matter. To say that it exists is theorizing without experimental evidence as a basis. The materialistic theory postulates energy and also matter, both theoretical if you will; the electron theory postulates energy only. Therefore the electron theory is the less theoretical and the less vague of the two.
From the philosophical standpoint, having deprived an object of all that we know about it, all forms of energy, there remains what may be called the 'residuum of the unknown.' We are not justified in saying that nothing remains; we can only say nothing remains which affects, either directly or indirectly, any of our senses through which we become cognizant of the external universe. If the materialist takes the stand that this unknown residuum is what he calls matter, although any other name would be equally appropriate, it must be acknowledged that his position is at present impregnable, and that sort of matter exists. But it is nothing with which experimental science can deal. A fair statement would appear to be: The electron theory accounts for, or may be made to account for, all known facts. Besides these there is a vast unknown within whose precincts matter may or may not exist.
Michael Faraday is acknowledged to have been one of the ablest of experimenters and clearest of thinkers. His predominant characteristic may be said to be the caution which he used in expressing views reaching beyond the domain of experimental facts. His authority rightly carries great weight, and it is therefore of particular significance that he expressed himself more definitely upon these questions than appears to be generally known. In an article published in 1844[10] he says:
If we must assume at all, as indeed in a branch of knowledge like the present we can hardly help it, then the safest course appears to be to assume as little as possible, and in that respect the atoms of Boscovich appear to me to have a great advantage over the more usual notion. His atoms, if I understand aright, are mere centers of forces or powers, not particles of matter, in which the powers themselves reside. If, in the ordinary view of atoms, we call the particle of matter away from the powers a, and the system of powers or forces in and around it m, then in Boscovich's theory a disappears, or is a mere mathematical point, whilst in the usual notion it is a little unchangeable, impenetrable piece of matter, and m is an atmosphere of force grouped around it. . . . To my mind, therefore, the a or nucleus vanishes, and the substance consists of the powers or m; and indeed what notion can we form of the nucleus independent of its powers? All our perception and knowledge of the atom, and even our fancy, is limited to ideas of its powers: what thought remains on which to hang the imagination of an a independent of the acknowledged forces? A mind just entering on the subject may consider it difficult to think of the powers of matter independent of a separate something to be called the matter, but it is certainly far more difficult, and indeed impossible, to think of or imagine that matter independent of the powers. Now the powers we know and recognize in every phenomenon of the creation, the abstract matter in none; why then assume the existence of that of which we are ignorant, which we can not conceive, and for which there is no philosophical necessity?
There is a striking analogy between the present condition of our science and our discussions, and those prevailing in the latter half of the eighteenth century when the phlogiston theory was almost universally accepted. We all now believe that heat is a mode of motion and smile at the thought that there were those who considered heat as a material. The materialistic theory is the phlogiston theory of our day, and perhaps the time is not far distant when the same indulgent smile will be provoked by the thought that there were those unwilling to believe that matter is a mode of motion.
1. R. J. Strutt, Philosophical Magazine, March, 1901, p. 311.
2. 'The Genesis of the Elements,' W. Crookes.
3. The methods, facts and reasonings relating to this spectroscopic evidence are interestingly given in 'Inorganic Evolution' by Sir Norman Lockyer.
4. Experimental details, and also comprehensive treatments of the subject as a whole and of special parts, may be found in three books by J. J. Thomson: 'The Discharge of Electricity through Gases' (based on lectures given at Princeton University in October, 1896); 'Conduction of Electricity through Gases' (a larger book); 'Electricity and Matter' (lectures delivered at Yale University in 1903).
5. It is an indication of the widespread interest in this subject, and of the activity of the workers in this field, that one journal, in the year 1905, contained no less than 167 abstracts of articles upon radioactive phenomena. E. Rutherford's book, 'Radio-activity,' 2d edition, 1905, is a masterly survey of the whole subject.
6. 'The Recent Development of Physical Science,' W. C. D. Whetham.
7. Wied. Annal., 56, p. 255 (1895), and Drude's Annal., 12, 714 (1903).
8. Phys. Zeitschr., 1902, p. 54.
9. 'Electricity and Matter,' p. 48.
10. 'Experimental Researches in Electricity,' Michael Faraday, Vol. 2, pp. 289-91.
proofpile-shard-0030-95 | {
"provenance": "003.jsonl.gz:96"
} | For each function $$f(x)$$, say which of the choices are anti-derivatives $$F(x)$$.
• $$f(x) = x$$
1. $$F(x) = 2 x + 3.5$$
2. $$F(x) = 2 x^2 - 1$$
3. $$F(x) = \frac{1}{2} x^2 + 5$$
4. $$F(x) = \frac{1}{2} x$$
• $$f(x) = e^{-k x} \sin(\frac{2 \pi}{P} x)$$
1. $$F(x) = - \frac{P e^{-kx}\left(k P \sin(\frac{2\pi}{P}x) + 2 \pi \cos(\frac{2 \pi}{P}x) \right)}{k^2 P^2 + 4 \pi^2}$$ |
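One way to check any candidate $$F(x)$$ is simply to differentiate it and compare the result with $$f(x)$$. For the first function this check gives (a quick sketch, using only the power rule):

$$\frac{d}{dx}\left(2x + 3.5\right) = 2, \quad \frac{d}{dx}\left(2x^2 - 1\right) = 4x, \quad \frac{d}{dx}\left(\tfrac{1}{2}x^2 + 5\right) = x, \quad \frac{d}{dx}\left(\tfrac{1}{2}x\right) = \tfrac{1}{2},$$

so only choice 3 recovers $$f(x) = x$$. For the second function, differentiating the stated $$F(x)$$ with the product rule and collecting terms shows the $$\cos$$ contributions cancel, while the $$\sin$$ contribution carries the factor $$\frac{k^2P^2 + 4\pi^2}{k^2P^2 + 4\pi^2} = 1$$, leaving exactly $$e^{-kx}\sin(\tfrac{2\pi}{P}x)$$.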
proofpile-shard-0030-96 | {
"provenance": "003.jsonl.gz:97"
} | # NCERT Class 11-Math՚s: Exemplar Chapter – 16 Probability Part 8 (For CBSE, ICSE, IAS, NET, NRA 2023)
Question 11:
The accompanying Venn diagram shows three events, A, B, and C, and also the probabilities of the various intersections. Determine
(a)
(b)
(c)
(d)
(e)
(f) The probability that exactly one of the three occurs.
(a)
(b)
(c)
(d)
(e)
(f)
Question 12:
One urn contains two black balls (labelled B1 and B2) and one white ball. A second urn contains one black ball and two white balls (labelled W1 and W2). Suppose the following experiment is performed. One of the two urns is chosen at random. Next a ball is randomly chosen from the urn. Then a second ball is chosen at random from the same urn without replacing the first ball.
(a) Write the sample space showing all possible outcomes
(b) What is the probability that two black balls are chosen?
(c) What is the probability that two balls of opposite colour are chosen?
(a)
(b)
(c)
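A sketch of how the Question 12 values can be worked out from the stated urn compositions (each urn chosen with probability $\tfrac{1}{2}$, draws without replacement): the sample space in (a) consists of the 12 ordered pairs $B_1B_2$, $B_1W$, $B_2B_1$, $B_2W$, $WB_1$, $WB_2$ from the first urn and $BW_1$, $BW_2$, $W_1B$, $W_1W_2$, $W_2B$, $W_2W_1$ from the second, while for (b) and (c)

$$P(\text{two black}) = \tfrac{1}{2}\cdot\tfrac{2}{3}\cdot\tfrac{1}{2} + \tfrac{1}{2}\cdot 0 = \tfrac{1}{6}, \qquad P(\text{opposite colours}) = \tfrac{1}{2}\cdot\tfrac{2}{3} + \tfrac{1}{2}\cdot\tfrac{2}{3} = \tfrac{2}{3},$$

since within each urn the probability of drawing two like-coloured balls is $\tfrac{1}{3}$.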
Question 13:
A bag contains red and white balls. Three balls are drawn at random. Find the probability that
(a) All the three balls are white
(b) All the three balls are red
(c) One ball is red and two balls are white
(a)
(b)
(c)
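Writing $m$ for the number of red balls and $n$ for the number of white balls in the bag (symbols introduced here only for the sketch), the three probabilities of Question 13 take the standard hypergeometric form

$$P(\text{all white}) = \frac{\binom{n}{3}}{\binom{m+n}{3}}, \qquad P(\text{all red}) = \frac{\binom{m}{3}}{\binom{m+n}{3}}, \qquad P(\text{one red, two white}) = \frac{\binom{m}{1}\binom{n}{2}}{\binom{m+n}{3}},$$

into which the given numbers of balls can be substituted.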
Question 14:
If the letters of the word ASSASSINATION are arranged at random, find the probability that
(a) Four S's come consecutively in the word
(b) Two I's and two N's come together
(c) All A's are not coming together
(d) No two A's are coming together.
(a)
(b)
(c)
(d)
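For Question 14, ASSASSINATION has 13 letters with A repeated 3 times, S 4 times, I twice and N twice, so there are $\frac{13!}{3!\,4!\,2!\,2!}$ distinct arrangements. Treating the stated groups of letters as single blocks (and, for part (d), placing the A's in the gaps left by the other ten letters) is one way to organise the count:

$$P(\text{a}) = \frac{10!/(3!\,2!\,2!)}{13!/(3!\,4!\,2!\,2!)} = \frac{10!\,4!}{13!} = \frac{2}{143}, \qquad P(\text{b}) = \frac{\dfrac{4!}{2!\,2!}\cdot\dfrac{10!}{3!\,4!}}{13!/(3!\,4!\,2!\,2!)} = \frac{2}{143},$$

$$P(\text{c}) = 1 - \frac{11!/(4!\,2!\,2!)}{13!/(3!\,4!\,2!\,2!)} = 1 - \frac{1}{26} = \frac{25}{26}, \qquad P(\text{d}) = \frac{\binom{11}{3}\cdot 10!/(4!\,2!\,2!)}{13!/(3!\,4!\,2!\,2!)} = \frac{15}{26}.$$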
Question 15:
A card is drawn from a deck of cards. Find the probability of getting a king or a heart or a red card. |
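For Question 15, with $K$, $H$ and $R$ denoting the events king, heart and red card, inclusion–exclusion over a 52-card deck (noting that every heart is red, exactly one king is a heart, and two kings are red) gives

$$P(K \cup H \cup R) = \frac{4 + 13 + 26 - 1 - 2 - 13 + 1}{52} = \frac{28}{52} = \frac{7}{13}.$$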
proofpile-shard-0030-97 | {
"provenance": "003.jsonl.gz:98"
} | # Math Help - Integer basic proofs
1. ## Integer basic proofs
Hello,
Can anyone help me with any of the following proofs
For any integers a,b,c,d
1. a | 0, 1 | a, a | a.
2. a | 1 if and only if a=+/-1.
3. If a | b and c | d, then ac | bd.
4. If a | b and b | c, then a | c.
5. a | b and b | a if and only if a=+/-b.
6. If a | b and b is not zero, then |a| ≤ |b|.
7. a+b is integer, a.b is integer
Thank you
2. Originally Posted by Dili
Hello,
Can anyone help me with any of the following proofs
For any integers a,b,c,d
1. a | 0, 1 | a, a | a.
2. a | 1 if and only if a=+/-1.
3. If a | b and c | d, then ac | bd.
4. If a | b and b | c, then a | c.
5. a | b and b | a if and only if a=+/-b.
6. If a | b and b is not zero, then |a| ≤ |b|.
7. a+b is integer, a.b is integer
Thank you
Let me do the first ones.
Definition: Let $a,b\in \mathbb{Z}$ then we define $a|b$ iff $b=ac \mbox{ for some }c\in \mathbb{Z}$.
Theorem: Let $a\in \mathbb{Z}$ then $a|0$.
Proof: We need to show $0=ac$ for some $c\in \mathbb{Z}$. So choose $c=0$.Q.E.D.
Theorem: Let $a\in \mathbb{Z}$ then $1|a$.
Proof: We need to show $a=1c$ for some $c\in \mathbb{Z}$. So choose $c=a$.Q.E.D.
Theorem: Let $a\in \mathbb{Z}$ then $a|a$.
Proof: We need to show $a=ac$ for some $c\in \mathbb{Z}$. So choose $c=1$.Q.E.D.
Theorem: Let $a\in \mathbb{Z}$ and $a|1$ then $a=\pm 1$.
Proof: We are given that $1=ac$ for some $c\in \mathbb{Z}$, so $c\neq 0$. If $|a|\geq 2$ then $|ac|=|a||c|\geq 2>1$, which is impossible, and $a=0$ is impossible as well. So the only possible case is $|a|=1$.Q.E.D.
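The same one-line pattern covers items 3 and 4 as well; a sketch in the same notation:

Theorem: Let $a,b,c,d\in \mathbb{Z}$ with $a|b$ and $c|d$, then $ac|bd$.
Proof: We have $b=ae$ and $d=cf$ for some $e,f\in \mathbb{Z}$, so $bd=(ac)(ef)$ with $ef\in \mathbb{Z}$.Q.E.D.

Theorem: Let $a,b,c\in \mathbb{Z}$ with $a|b$ and $b|c$, then $a|c$.
Proof: We have $b=ae$ and $c=bf$ for some $e,f\in \mathbb{Z}$, so $c=(ae)f=a(ef)$ with $ef\in \mathbb{Z}$.Q.E.D.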
3. Thank you.
I shall welcome any other proofs for the others, especially the 7th one
ie. a+b and a.b are integers
4. Originally Posted by Dili
ie. a+b and a.b are integers
That has nothing to do with number theory. That is more of a Set Theory question. So I have no idea what you want with that one.
5. For the seventh one, isn't that the very definition of a field? Really I think it's a closure axiom, so you don't need to prove it.
6. Originally Posted by tukeywilliams
For the seventh one, isn't that the very definition of a field?
The integers are not a field.
Really I think it's a closure axiom, so you don't need to prove it.
That is not how this axiom thing works, i.e. you cannot simply say it is an axiom. Before you state something is closed, you need to actually show it is closed. You cannot just say that is an axiom, that makes no sense.
-------
This question (7) should just be avoided. It has nothing to do with number theory. |
proofpile-shard-0030-98 | {
"provenance": "003.jsonl.gz:99"
} | +0
Suppose the equation 5x²+x-3=0 has roots r and s. Find the value of r²-s².
Jan 20, 2022
#1
Find the value of r²-s².
Hello Guest!
$$5x²+x-3=0\\ x^2+0.2x-0.6=0\\ x=-0.1\pm \sqrt{0.01+0.6}\\ r^2=0.463795\\ s^2=0.776205\\ \color{blue}r^2-s^2=-0.31241$$
!
Jan 20, 2022 |
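Since the statement does not fix which root is $$r$$ and which is $$s$$, it may be worth noting the exact value that Vieta's formulas give (using $$r+s=-\tfrac{1}{5}$$ and $$rs=-\tfrac{3}{5}$$):

$$r^2-s^2=(r+s)(r-s)=\left(-\tfrac{1}{5}\right)\left(\pm\sqrt{(r+s)^2-4rs}\right)=\mp\frac{\sqrt{61}}{25}\approx\mp 0.31241,$$

which agrees with the decimal answer above for the labelling chosen there.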
proofpile-shard-0030-99 | {
"provenance": "003.jsonl.gz:100"
} | Volume 282 - 38th International Conference on High Energy Physics (ICHEP2016) - Poster Session
ATLAS SM VH(bb) Run-2 Search
A. Buzatu* on behalf of the ATLAS collaboration
*corresponding author
Pre-published on: February 06, 2017
Published on: April 19, 2017
Abstract
The Higgs boson discovered at the LHC in 2012 has been observed coupling directly to $W$ and $Z$ bosons and to $\tau$ leptons, and indirectly to top quarks. In order to probe whether it is indeed the particle predicted by the Standard Model, direct couplings of the Higgs boson to quarks must also be measured. The Higgs boson decays most often to a pair of bottom quarks (with a branching ratio of 58%). When the Higgs boson is produced alone in gluon-gluon fusion, the signal in this decay mode is overwhelmed by the regular multi-jet background. By requiring the Higgs boson to be produced in association with a vector boson $V$ ($W$ or $Z$), which is further required to decay leptonically, data events can be selected using charged-lepton or missing transverse energy triggers. The Tevatron experiments presented combined results showing evidence for the $VH$ process at a significance level of about 3 standard deviations, while the combined LHC results from Run-1 data show 2.6 standard deviation evidence for the $H \to b\bar{b}$ decay mode. In this poster, the ATLAS $VH$ search using Run-2 data is summarised.
DOI: https://doi.org/10.22323/1.282.0898