Exploration and Geology Techniques

Overview of Exploration

Richard L. Brown

In this chapter a number of authors describe the kinds of geological thought and exploration techniques applicable to the original identification and subsequent mensuration, in terms of tonnage and grade, of mineral deposits judged by the geologist to be suitable for surface mining. Many of these techniques are also applicable to grass roots or systematic reconnaissance style exploration. It is appropriate that there be discussion of reconnaissance techniques in this handbook since that activity in established mining districts often continues long after initial production.
Exploration geology seems to have separated, as a discipline, from mining geology, not because one group has a greater or lesser need than the other to know and understand all these techniques, but because each group has different objectives. The mining geologist seeks new veins or other extensions to ore bodies and expects to find these on a regular basis, whereas the exploration geologist must find new districts and knows that he will be lucky if he makes one or two such discoveries during a career. The mining geologist assists in the day-to-day problems of production and responds to the discipline imposed by daily and monthly goals. The discipline attending the discovery of a new mineral deposit in a new district is of a different sort. At any rate, the separate disciplines of exploration geology and of mining geology merge at that point in the history of a mineral deposit, after discovery, when it is being explored and determinations are being made of tonnage and grade—that is to say, the period in the history of an ore deposit when the data necessary for a feasibility study are being prepared. Because so many of the techniques discussed in the following pages apply both to general reconnaissance-type exploration and to the business of drilling off an ore body, the authors have been rather general in their treatment of the various forms of exploration geology that apply to the commodities they discuss. In this introductory section, exploration techniques common to most commodities are addressed with the purpose of providing an overview of the duties of the exploration geologist as he takes a prospect from discovery to the final economic estimates.
A review of the technology directed at the search and discovery of economic mineral deposits illustrates once more that there is “nothing new under the sun.” There has been a continuum from medieval times to the present day of man’s knowledge of mineral deposits and of ways of finding them. A quick trip across Europe shows that the Phoenicians, Greeks, and Romans were all adept at reading gossans, at panning heavy minerals from stream sediments, and at a variety of other exploration techniques still used by today’s geologists. It is also obvious that these ancients had more than a rudimentary grasp of many principles of economic geology.
The search for ore begins with the development of ideas as to where the search should be conducted. Application of the most modern geophysics, geochemistry, remote sensing, and other techniques cannot be made until the geologist has decided where the search should begin. The first things a geologist must decide are what the ore body he hopes to find looks like, what minerals are contained therein, and how it was formed. He must, in short, develop an empirical model before he leaves his office and goes to the field.
Most of the ideas that geologists use now and will use during the foreseeable future, a few examples of which are described below, had been published by 1974. In March 1965, the Canadian Institute of Mining and Metallurgy held a symposium on volcanogenic deposits. The papers given at that symposium, published by the CIM later that year, form the bible used by most geologists as they plan their exploration for volcanogenic deposits. John Guilbert and David Lowell published their paper on “Mineral Zoning in Porphyry Copper Deposits” in Economic Geology during 1970. The so-called Red Sea book (Hot Brines and Recent Heavy Metal Deposits from the Red Sea, edited by Degens and Ross) was a 1969 publication. The term plate tectonics was firmly in place by 1970, and the Journal of Geophysical Research published its compendium of papers related to that subject in 1973. Kambalda in Western Australia was discovered in 1968, and the recognition that some nickel-copper deposits were derived from ultramafic volcanic rocks was made in print by a number of authors in the very early 1970s. Our knowledge of the so-called Mississippi Valley deposits lags far behind some of the other ore types mentioned above, but the sum of our knowledge of these deposits is pretty much contained in the August 1971 issue of Economic Geology.
The search for volcanogenic ores, many of which are mined from surface, has widened possibly further than any other type of exploration, and it may be well to describe the model which governs much of that exploration. In brief, the geologists who participated in the Canadian symposium in 1965 had noted that the synvolcanic and syngenetic ores described by German and Japanese workers corresponded closely with the results of their own mapping and observations of the Precambrian deposits in northern Ontario and Quebec. As a result of this mapping, they were able to demonstrate that many of the Canadian deposits had been formed on ocean floors, apparently from brines derived from highly siliceous rhyolite domes. In addition, iron-rich silicate deposits often spread far from the volcanic dome and were deposited on the sea floor over wide regions. Thus the three elements of the volcanogenic model were: (1) the volcanic dome containing imbricate stringer zones; (2) the polymetallic sulfide deposits, formed on the ocean floor; and (3) widespread cherty pyrite beds. The complete system was usually covered by more recent volcanic rocks, often andesite or basalt. Hence the subsequent direction of substantial exploration funds toward the andesite-rhyolite contacts in many shield areas throughout the world.
It would be difficult to pinpoint the decade during which geologists routinely began to map the distribution of clay alteration products around porphyry copper sulfide systems. A quick glance through bibliographies shows that a number of papers on the subject were published in the 1930s. A number of company geologists were mapping such patterns routinely during the early 1950s. Papers authored by such people as S.C. Creasey, Richard and Courtright, and Paul Kerr demonstrated widespread interest in the subject during that decade. During the 1960s, Guilbert and Lowell, collectively and individually, published the results of their observations of alteration patterns in the southwest United States and in some other areas as well, and their 1970 paper cited earlier is now regarded by most North American geologists as the standard text on the subject. Very little porphyry copper exploration, whether near established mines or in new districts, is presently conducted which does not respond to the Guilbert and Lowell model.
Recently, it has become clear that a predictable distribution of minerals containing fluorine, barium, and other elements occurs around the previously known stacked intrusive complexes which host the molybdenum-porphyry systems. The discovery of molybdenum at Mt. Emmons in Colorado and at Pine Grove, Utah, can be attributed to recognition of this mineralogical distribution. Other important exploration programs, generated by recognition of similar features in other areas in the western United States, are in progress.
However, no such conclusive models have been developed for the Mississippi Valley deposits. While there is an excellent body of literature which describes most of the deposits in the six or seven type localities scattered around the United States and Canada, there is no single body of observations or of theory which is accepted by the majority of the workers in the field and which can be described as a common denominator underlying exploration activity. There is a need for additional work directed at these Mississippi Valley-type deposits.
It would appear that geologists engaged in the exploration of these deposits are paying increased attention to the study of paleosurfaces and paleoecological environments which are dominated by carbonate-rich rocks. Of course, the internal characteristics such as collapse breccias, limestone-dolomite interfaces, and recrystallized dolomitic rocks are recognized and mapped, and trigger intense exploration when they are seen. Possibly there is some consensus that the margins of carbonate platforms are good places to look for these deposits, and in southeastern Missouri, criteria implicit in both the old and new lead belts are carefully adhered to.
Also, there is no commonly accepted rationale governing exploration for replacement-type polymetallic sulfide occurrences hosted by carbonate rocks. Possibly researchers and explorationists have not been interested in this group of hydrothermal ore deposits because they are relatively rare, and the metal content, dominated as it usually is by lead and zinc, is apt to be relatively low and unremunerative. The Mexican deposits such as Plomosas, Santa Eulalia, Naica, Charcas, Providencia-Concepción del Oro, Taxco, San Martin, La Encantada, Fresnillo, and Velardena are not well known, and the results of significant research, if any has in fact been conducted, are proprietary and locked up in mining company reports. Probably the most utilized lead in the search for these deposits is directed in the vicinity of veins and veinlet systems which can be classified either as feeder-type mineralization of the replacement bodies or as leakage from them.
The search for nickel-copper deposits during the 1950s and 1960s in general contemplated a Sudbury-type model in which it was supposed that a sulfide magma body had been injected from depth to a near-surface position by any of a number of proposed mechanisms. The Sudbury, Ontario and the Thompson Lake, Manitoba districts both lie in the join between contiguous provinces of the Canadian Shield, and it has been widely assumed or hoped that additional deposits can be found along these sutures and supposed zones of weakness. Substantial exploration time and dollars have been expended in the search for deposits in these zones, and such exploration still continues, although on a much more limited scale. However, as noted earlier, subsequent to the discovery of nickel in the Kambalda district in Western Australia, it was recognized that some nickel-copper deposits are associated with ultramafic volcanic flows. The Travis and Woodall paper, published in the proceedings of the 12th Commonwealth Mining Congress, and A.J. Naldrett’s paper entitled “Nickel Sulfides, Classification and Genesis,” published by the CIM in 1973, each described this association. Nickel exploration has been at a low ebb during the last decade, due to unremunerative prices received by producers for that metal, but such nickel exploration work as does proceed is directed at both magmatic and volcanically derived sulfides.
During the past 15 years, a fair consensus has been achieved regarding at least the morphology, if not the genesis, of uranium deposits hosted by sandstones and by quartz pebble conglomerates. Exploration designed to test sulfide-rich as compared to oxide-rich portions of appropriate sandstone and quartz pebble conglomerate units has developed. However, no such consensus has been reached in respect to the vein-type deposits of northern Saskatchewan or of northern Australia. These deposits appear to be characterized in both locations by high-grade pitchblende veins hosted in crystalline or in metamorphic rocks, covered by Proterozoic sediments in which carnotite or carnotite-type mineralization is bedded. While there is very little agreement amongst geologists as to how these deposits are formed, most organizations involved in the search for these deposits appear first (mainly by means of airborne electromagnetic and radiometric surveys) to search for signs of the bedded material in the overlying sandstones, and then to attempt to search for the veins.
Geologists who concern themselves with exploration, evaluation, and production of the various industrial minerals, as well as those involved in coal, oil shale, tar sands, and other similar materials, are obviously as interested in the genesis and geological environments of these deposits as are the hard mineral geologists. However, in most cases the deposits are huge and relatively easy to find and therefore the difficulty and cost of original discovery have not been as great as they have been in the case of the commodities discussed previously. The major challenge in the case of the industrial minerals, and in the case of the hydrocarbons, has been identifying major volumes of material which conform to various engineering and chemical standards. Therefore these geologists think not so much in terms of origin and empirical models, as they progress in their exploration work, as they do of quality control and engineering parameters.
The importance of these models, of course, is that they provide terms of reference and criteria to the geologist as he decides, for example, whether or not a given district or prospect warrants more expenditure. If the geologist maps a considerable number of features which conform to his model for a given deposit type, he may well decide to recommend drilling or some other form of physical exploration. If, on the other hand, his data shows that few of the elements of his model are present, he may decide that additional expenditure is not necessary or wise. Similarly, the geologist might use his model to tell him in which direction, either laterally or vertically, additional drilling should be planned. If he knows that the features he is mapping are usually found vertically above another feature of economic importance, he may decide to recommend deeper drill holes. The models, above all, give the geologists terms of reference and continuity of information which extend beyond the bore of the drill hole he is considering, and beyond perhaps, the geometry of the ore shoot being investigated.
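The weighing of model elements described above can be sketched as a simple checklist score. The feature names and weights below are invented for illustration; they are assumptions, not taken from any published model.

```python
# Hypothetical sketch: scoring a prospect against an empirical deposit model.
# Feature names and weights are illustrative assumptions only.

VOLCANOGENIC_MODEL = {
    "rhyolite_dome": 3,            # felsic volcanic center
    "stringer_zone": 3,            # sub-seafloor feeder mineralization
    "massive_sulfide_lens": 4,     # polymetallic sulfides on the paleo-seafloor
    "cherty_pyrite_beds": 2,       # distal iron-rich exhalite
    "andesite_rhyolite_contact": 2,
}

def score_prospect(observed_features, model=VOLCANOGENIC_MODEL):
    """Return the fraction of the model's total weight seen at the prospect."""
    total = sum(model.values())
    seen = sum(w for f, w in model.items() if f in observed_features)
    return seen / total

# A prospect showing three of the five model elements:
frac = score_prospect({"rhyolite_dome", "cherty_pyrite_beds",
                       "andesite_rhyolite_contact"})
# frac = (3 + 2 + 2) / 14 = 0.5
```

A high fraction argues for recommending drilling; a low one argues that further expenditure is unwarranted, exactly the decision logic described above.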
The discovery by Newmont Mining Company of the Carlin gold deposit a decade or more ago generated a substantial amount of precious metals exploration in the basin and range province of the western United States. The primary exploration technique involved in this search has been the collection of samples, both geochemical and rock samples, which are assayed for gold and a variety of other elements, such as mercury, thought by the geologists involved to be useful “indicator” or “pathfinder” elements. This search often requires the collection of samples from wide areas, without much geological discrimination. Recently, emphasis has been placed on hot spring environments, the so-called jasperoid environment, interest in which has been generated by recent discoveries at Alligator Ridge, Nevada, and on the caldera environment, similar to the one at McDermitt, Nevada, where mercury mineralization has been mined for some time.
In summary, it should be reiterated that as mineral exploration develops and grows more sophisticated, increasing care will be given to the development of the empirical model. The model is simply a generalized amalgam of features of known deposits, the enclosing host rocks, and all the various alteration patterns which are usually attendant to the mineralization. Once the model is produced and agreed upon by everyone involved, the next exploration step is to decide which geological province might provide all the various factors and features in the model. The third step is to identify, through literature searches and through inspection of old exploration records and other geological material, where within the district chosen the various features called for by the model might be found.
Prospectors as well as modern explorationists have always had models in mind. In former years the prospectors looked for signs of direct mineralization in outcrop, and proceeded then to test these outcrops by drilling or other means. The modern explorationist still hopes to find mineralization in outcrop, and on occasion will continue to do so for many years hence. However, increasingly his work will consist of testing models, once he has found areas in the field which conform in most respects to the model.
Mining company managements, increasingly, will have to get used to the idea of drilling concepts or models rather than mineralized outcrops. One management which has already adopted this idea is that of Western Mining, the Australian company. D.W. Haynes, its copper consultant, has explained in his paper entitled “Mining Technology in Mineral Resource Exploration” (published in Proceedings, Third Invitation Symposium on Mineral Resources in Australia, held in Adelaide in October, 1979 by the Australian Academy of Technological Sciences) how Western Mining geologists put together source rock theory with known information regarding the sedimentary basin on the Stuart Shelf to find the Olympic Dam deposit. The Western Mining geologists, Haynes explains, were looking for areas in the sedimentary basin in the state of South Australia where sediments similar to those hosting the Zambian Copper Belt might be found in proximity to basaltic rocks. In addition, a preconceived tectonic model was apparently postulated, and a lineament analysis, aided by data derived from Landsat images, was produced. Once the complete model had been settled upon, it was aggressively explored, and the important Olympic Dam copper-uranium discovery was made. Haynes points out that the mineral deposit which was discovered was not precisely the same type as was anticipated, and this perhaps provides some food for thought. The Olympic Dam discovery is obviously not the first successful application of modeling. It is, however, one of the better documented cases of successful exploration generally designed to test a model.
Geological reasoning has improved and must improve even more if an adequate rate of discovery of mineral deposits is to be maintained. There has been parallel improvement of various geophysical, geochemical, and remote sensing techniques. Geophysics has progressed from Thomas Edison’s dip needle, successfully used at Sudbury, Ontario, early in this century, to satellite-mounted magnetometers. Geochemistry has progressed from the practice of early Scandinavians, who during the Middle Ages chased mineralized boulders up streams of glacial debris to their source, to determinations of 25 or more metals in soil samples. Remote sensing, in its strict sense, has developed from the days prior to the Second World War, when Canadian geologists made interpretations from oblique aerial photographs, to today’s interpretations made from enhanced remotely sensed images from satellites.
Interpretation of leached outcrop and of gossan, which was started by Augustus Locke and Rowland Blanchard in the 1920s and carried on, among others, by Kenyon Richard and Harold Courtright in the 1940s and 1950s, appears to be a dying art, simply because surface mapping of large porphyry copper systems is not now an everyday activity. However, during the 1960s, enhanced evaluation of potential drill targets was made possible by the interpretation of clay alteration patterns combined with that of leached cappings. The gossans in Western Australia are far different from those of the western United States, but an awareness of the technique, and a fair ability to apply it to the gossans of that country, nevertheless developed. Later the Australian expertise was transferred to South Africa, and a number of discoveries were made there as well.
Prototypes of much of the geophysical equipment in use today were in the field prior to the Second World War; for example, as previously referred to, Thomas Edison’s dip needle, with which he discovered or at least almost discovered continuations of the nickel-copper ore at Falconbridge in the Sudbury district. Hans Lundberg, using an equipotential method which verged on electromagnetics, made a great discovery at Buchans, Newfoundland, in 1926. Technology developed for military purposes during the war was put to work immediately thereafter, principally in the development of electromagnetics and of airborne electromagnetics. Geiger counters, rarities before the war, became commonplace shortly thereafter. In the mid-fifties, relatively trouble-free ground electromagnetic systems were used routinely in the field, and torsion spring magnetometers also were developed; these instruments dramatically increased the rate at which readings could be taken. Airborne electromagnetics and magnetics also became routine during the early 1950s. The combined use of these two techniques resulted in a very impressive string of discoveries, mainly in Canada, which continued at least until 1975. In recent years the rate of discoveries by AEM and AM has declined, for a variety of reasons. However, one can wonder if the application of a single technique will ever again result in a list of discoveries such as Thompson Lake, Manitoba; Heath Steele, New Brunswick; Mattagami and Joutel, Quebec; Timmins, Ontario; Sturgeon Lake, Ontario; and Crandon, Wisconsin. This list, while incomplete, represents an extraordinary record of discovery. Also, in the mid-fifties, Dr. Arthur Brandt completed development of the induced polarization method. The Geiger counter largely gave way in the 1960s to the scintillometer. Drill hole logging by regular radiometric methods became commonplace in the search for roll-front uranium deposits in sandstones, particularly in the western United States.
The advent of various microelectronic devices has given geophysicists the capability of gathering enormous amounts of data. For example, modern airborne electromagnetic equipment provides as many as six audio frequency channels, two very low frequency (VLF) EM channels, four gamma-ray spectrometer channels, and a magnetometer channel. All 13 channels are recorded on magnetic tape every half second.
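The data volume implied by that channel count and sampling rate is easy to estimate; the sample width assumed below is an illustrative assumption, not a specification from the text.

```python
# Rough data-volume estimate for the airborne system described above.
# The 4-byte sample width is an assumption for illustration.

channels = 6 + 2 + 4 + 1    # audio EM + VLF EM + spectrometer + magnetometer
readings_per_second = 2     # one reading every half second
bytes_per_sample = 4        # assumed sample width

samples_per_hour = channels * readings_per_second * 3600
bytes_per_hour = samples_per_hour * bytes_per_sample
# 13 channels -> 93,600 samples per flight hour
```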
The proton magnetometer can now achieve a sensitivity of about one gamma. High sensitivity magnetometers are used for identification of extremely subtle features in areas of very low magnetic relief. Airborne spectrometer surveys utilizing 49,161 cm3 (3,000 cu in.) crystals are now routine. These crystals yield much higher count rates than was previously the case, and earth-generated radiation can now be measured and identified separately from atmospheric radiation.
Of course, all the additional information collected by the newer equipment has provided enormous challenges for the interpreter of the data. Sulfide sources are as easily confused with nonsulfide sources today as they were previously.
In spite of improvement of the equipment, it has not yet been possible to make significant improvements in the depth penetration of electromagnetic equipment. During the past five years, transient electromagnetic and audio-magnetotelluric systems, the latter utilizing either distant thunderstorms or nearby controlled sources, have come into use. These systems reputedly can achieve depth penetration of several hundred meters. Presumably additional research as to the application of these systems to mineral exploration problems will continue.
Additional research and improvement of in-hole electromagnetic and induced polarization systems will continue. Most current interest appears to be directed at fixed source time domain units, fixed source continuous wave multifrequency units, and single frequency moving transmitter-receiver systems.
The broad range of portable gamma-ray detectors employed in airborne radiometric surveys has also been adapted to in-hole surveys. It is anticipated that improvements will continue, from simple nondiscriminating scintillation counters to multichannel differential spectrometers.
In summary, geophysicists have been able to make remarkable improvements in the portability and accuracy of their equipment. They have not been able to make substantial improvements in the depth penetration of the equipment because increased depth penetration means increased volumes of rock energized and therefore increased numbers of nonsulfide features which can cause responses and consequently signal noise with concurrent difficulties in interpretation. Because as many mineral deposits remain to be found at depths beyond the reach of present geophysical equipment as have been found in the near surface, we can confidently expect that additional research and development of equipment capable of seeing deeper into the earth will continue.
The basic principles of geochemical prospecting have been known for thousands of years. They were in fact successfully applied by early prospectors who traced visual indications of ore dispersion patterns in rocks, soils, and stream sediments back to bedrock sources. However, it was not until the 1930s that emission spectrographic analytical methods began to permit trace element measurements. The Russians and Scandinavians began to use geochemical exploration techniques, as we know them, prior to the Second World War. Subsequent to the war, an explosion of geochemical exploration activity occurred, permitted by the development of inexpensive rapid colorimetric analytical techniques by the United States Geological Survey. Research activity spread to the United Kingdom and thence to other countries in western Europe during the 1950s. Students of Hawkes, Webb, Bloom, and Warren took the technique into almost every part of the world and many discoveries were made. As surveys were completed during the 1950s and 1960s, the various ways by which elements can become dispersed throughout the secondary environment became fairly well appreciated. During the 1970s startling advances, again made possible by the advent of microelectronics, were made in the analytical end of geochemical exploration. Atomic absorption instruments and techniques were refined, and matrix corrections were introduced as routine procedure for certain elements. X-ray fluorescence methods were greatly improved. Plasma spectrometry permitted significantly lower detection limits for a number of elements. This improvement in analytical quality, together with reduced unit costs, has resulted in a marked decrease in the dollar cost of exploration. Computer data handling capability has kept pace with analytical advances. Computerized data plotting as well as univariate and multivariate statistical procedures are widely accessible.
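As a minimal illustration of the univariate statistical procedures mentioned, one common convention (assumed here for illustration, not prescribed by the text) flags as anomalous any sample exceeding the background mean plus two standard deviations. The soil values below are invented.

```python
# Sketch of a simple univariate anomaly screen for soil geochemistry.
# Sample values are invented; the mean + 2 sigma threshold is one common
# convention, assumed here for illustration.

import statistics

def anomaly_threshold(values, k=2.0):
    """Background mean plus k population standard deviations."""
    return statistics.mean(values) + k * statistics.pstdev(values)

soil_cu_ppm = [18, 22, 20, 25, 19, 21, 23, 150]    # one obvious anomaly
threshold = anomaly_threshold(soil_cu_ppm[:-1])     # background from first 7
anomalies = [v for v in soil_cu_ppm if v > threshold]
```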
Many have pointed out that the improvement in equipment has not been matched by comparable improvement in the understanding of fundamental geochemical processes. Routine surveys have been laid out and interpreted without regard to the solid body of information which has been collected. Hopefully, future practice will catch up with the theory already in place.
The geologists also have improved hardware at their disposal, and increased availability of this equipment should result in an increased rate of discovery. The microprobe is probably the most important of this new equipment. The ability to detect variations in metal content in individual minerals collected from various parts of districts and of mineral deposits will greatly increase the geologist’s ability to predict projections and to site exploratory drilling and other exploration activity. Increased use of the fluid inclusion stage will result in increased knowledge of fluid inclusions and of hydrothermal fluid temperature, pressure, and composition, which is now unknown in respect of many mineral deposits. Collection of this fundamental knowledge will result in better definition of mineral zoning patterns and will, as a result, guide exploration. Fluid inclusion research will become a routine part of modern mineral exploration.
The microprobe will make it possible to determine phase compositions of sulfide-silicate vein and wall rock assemblages. These studies also will assist in determinations of deposit zoning and increase reliable predictability of exploration parameters. Sulfur isotope studies will also become routine and will assist in the classification of ores and the placing of these ores in appropriate models.
Application of nearly all the techniques listed in the previous pages will result in the production of enormous amounts of data. At present, geologists use computers to store data and make various calculations as to tonnage, metal content, and economic viability of mineral deposits, but they have not found a way to utilize the deductive capabilities of computers to find deposits. Possibly they never will, as the human brain is still the best computer of all. However, it is obvious that increased research as to the application of computer techniques to exploration will continue.
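The tonnage and metal-content calculations referred to above are straightforward arithmetic; a minimal sketch follows, with all figures invented for illustration.

```python
# Minimal sketch of the tonnage and contained-metal arithmetic routinely
# performed by computer. All figures are invented for illustration.

def tonnage(volume_m3, specific_gravity):
    """Tonnes = volume (m3) x bulk density (t/m3)."""
    return volume_m3 * specific_gravity

def contained_metal(tonnes, grade_pct):
    """Tonnes of metal contained at the stated grade."""
    return tonnes * grade_pct / 100.0

# A block 500 m x 300 m x 100 m of mineralized rock at SG 2.7, grading 0.8% Cu:
t = tonnage(500 * 300 * 100, 2.7)    # about 40,500,000 tonnes of ore
cu = contained_metal(t, 0.8)         # about 324,000 tonnes of copper
```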
In due course geologists will be able to make much better use of images produced by sensors placed in satellites than they now do. Much of the science and technology required to produce images which will identify various rock types now exists. These sensors have been flown from time to time in U-2-type aircraft, and experimental surveys, such as the famous ones over Saindak, Pakistan, and Goldfield, Nevada, as well as the various case histories flown by a GEOSAT-NASA joint experiment, provide impressive data. It is clear that the potential impact on mineral exploration of remotely sensed data will be significant.
The ultimate exploration tool is still the diamond drill. We have seen substantial improvement in the reliability, portability and, most important, the percentage of core recovery achievable by drilling equipment in the past few years. The retrievable core barrel has been a great cost saver. However, costs of drilling in the past decade have increased drastically, and the mining business needs, and needs soon, additional improvement. Probably the next routine improvement will be the retrievable bit. If some way can be found to change drill bits without removing an entire string of drill rods, important savings will be achieved. It is certainly to be hoped that the drilling industry will continue its search for ways in which it can keep the costs of diamond drilling under better control than is now the case.
Both the exploration geologist and the mine geologist must at all times have a reasonably accurate perception of the economics of the project. It goes without saying that this perception of economics should apply at all stages of a project and will be applicable initially when a grassroots or reconnaissance program is devised. If a discovery is made in a new district, even during the very early stages of the assessment of the prospect, the geologist must make a back-of-the-envelope calculation designed to demonstrate that under prevailing and under forecast economic conditions, presuming that all assumptions as to tonnage and grade materialize, the mineral deposit he is modeling will yield a suitable return on investment. During the early stages of such a project, there are few hard facts and many assumptions. As first order drilling, designed to determine the outlines of the deposit, continues, these assumptions are replaced by facts, and estimates become more reliable.
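A back-of-the-envelope screen of the kind described might look like the following undiscounted payback sketch. Every parameter value here is an assumption for illustration only, not a figure from the text.

```python
# Hypothetical back-of-the-envelope economic screen. All inputs are
# illustrative assumptions; a real study would use discounted cash flow.

def simple_payback_years(capital, annual_tonnes, grade_pct, recovery,
                         metal_price_per_t, operating_cost_per_t):
    """Years to recover capital from undiscounted operating margin."""
    metal_per_t_ore = grade_pct / 100.0 * recovery
    revenue_per_t = metal_per_t_ore * metal_price_per_t
    margin_per_t = revenue_per_t - operating_cost_per_t
    if margin_per_t <= 0:
        return float("inf")    # deposit cannot pay for itself
    return capital / (annual_tonnes * margin_per_t)

years = simple_payback_years(
    capital=300e6,             # assumed preproduction capital, $
    annual_tonnes=10e6,        # assumed ore milled per year
    grade_pct=0.8,             # assumed copper grade
    recovery=0.90,             # assumed mill recovery
    metal_price_per_t=2000.0,  # assumed $ per tonne of copper
    operating_cost_per_t=10.0, # assumed $ per tonne of ore
)
# margin = 0.008 * 0.9 * 2000 - 10 = 4.4 $/t, so payback is roughly 6.8 years
```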
A second occasion for an economic review of the project might reasonably occur when plans are made for the expensive closely spaced drilling required to determine final tonnage and grade figures. This estimate, of course, will utilize many assumptions, but substantial data will have been provided by the drilling completed. A crude estimate of tonnage and grade will be available; metallurgical data (supplied by bench tests on material from drill cores) will be on hand; and assumptions concerning mining costs can be made, if only by comparison with other similar operations. Sufficiently accurate assumptions concerning the amount and cost of infrastructure can be made, and the cost of any necessary access roads and other similar items can be reasonably estimated. Presumably this estimate will be made by a team of construction and mining engineers, assisted by the geologist, who should be on hand to provide data and interpretation of data concerning the nature and characteristics of the ore deposit, including, among other things, continuity of grade, mineralogical zoning, and varying characteristics of host rock type.
There are, of course, various levels of economic estimates, leading from the first preliminary calculations to the full-scale detailed estimates of the cost of a given project. In Fig. 2.1.1, various levels of economic estimates are defined. For reasons which will be explained later, the exploration geologist should probably be substantially involved during the early planning of a project, at least in those phases which involve mine design and planning. In the later design and planning stages, which involve mainly detailed estimates of construction, the geologist will be much less involved, but should in any case remain available to the design and planning team until the mine opens.
Figure 2.1.1.
If the results of the feasibility study described in the preceding paragraphs should indicate that a deposit will be financially attractive (should assumptions concerning both tonnage and grade be later confirmed), the next obvious step is to cost out the kind of detailed drilling program necessary to generate accurate ore reserve estimates. The most important consideration in this regard is drill hole spacing. This is a critical matter, as the cost of the drilling can be (even in, say, the case of only a medium-sized porphyry copper deposit) on the order of $10 million or even substantially more. However, as the preproduction expenditure in the case of such a deposit would run into hundreds of millions of dollars, it can easily be recognized that the diamond drilling and sampling of drill results is a very poor place to try to save money. On the other hand, the geologist does have a responsibility to recommend a program which will adequately sample, but not overdrill, the prospect. We would all like to think that it should be possible, through the application of some statistical formula or other, to establish optimum drill spacings precisely. Sadly, this is not the case. In many districts long practice may have established the proper drill hole interval, but many of the mineral deposits being discovered during the 1980s are in new districts, and some involve types of mineral deposits for which no precedents are available. Obviously the first matter to be considered in reaching such a decision is the variation of metal content from hole to hole in the drilling already completed. If there is small variation, then obviously the drill spacing can be much larger than in the case where there is large variation of metal content.
Another matter to which the geologist must give his most careful consideration is sample size, or the length of core which will be included in each sample to be sent for assay. Assay costs in the case of even a medium-sized deposit will be astonishingly large, and if, for example, the decision is reached to sample every 3 m (10 ft) run of core rather than each 1.5 m (5 ft) run, these very large numbers can be halved. Here again, the optimum choice will be the least expensive alternative which will completely achieve the required results.
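The halving effect described above is simple to quantify. In the sketch below, the total core meterage and the per-assay price are invented for illustration; only the 1.5 m versus 3 m sample lengths come from the text.

```python
# The effect of sample length on the assay bill.  The core meterage and
# the per-assay price are assumptions, not figures from the text.
def assay_budget(total_core_m, sample_interval_m, cost_per_assay):
    """Return (number of samples, total assay cost)."""
    n = total_core_m / sample_interval_m
    return n, n * cost_per_assay

TOTAL_CORE_M = 30_000      # say, 100 holes of 300 m each
COST_PER_ASSAY = 25.0      # assumed $/sample

n_short, cost_short = assay_budget(TOTAL_CORE_M, 1.5, COST_PER_ASSAY)
n_long, cost_long = assay_budget(TOTAL_CORE_M, 3.0, COST_PER_ASSAY)

# Doubling the sample length halves both the sample count and the bill.
print(cost_short, cost_long)   # 500000.0 250000.0
```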
Other decisions, such as staffing, choice of assay office facilities, core storage, provision for constant, high-caliber geological input, and interpretation of data on a current basis are all highly important matters to be costed into the final total budget. There is temptation to scrimp on all of the above-listed matters, but inadequate attention to any of them can in the long run be very costly.
Aside from making the proper arrangements as to drilling intervals, sampling intervals, standards of assaying and the like, there are many other matters which need to be considered. Large volumes of core will be coming in from the drill sites each day. Good facilities will be needed for logging, photographing, and storage of the core; splitting or sawing, and preparing the samples for assay is no small job. Literally hundreds of assays will be returned and of course these assays will need to be plotted and posted on the appropriate maps and sections on a current basis.
As a result of the necessity to perform all the tasks listed on a current basis, there is also need to provide for housing and eating facilities for as many as 30 or 40 people. Supervision must be provided not only for the geological end of the job but also for the logistical end. It is important to reiterate that the final authorization for expenditure should be constructed from costs carefully computed from realistic assumptions, and the recommended procedures should be completed as inexpensively as possible. However, reductions in costs that result in the production of poor samples and poor interpretation can lead, at the very least, to severe cost overruns of the finally completed exploration job. And worst of all, if, as occasionally has happened, it is discovered after the mine has been opened that the ore “was not there,” the entire preproduction cost has been wasted.
Obviously, the end result of the entire exercise is the production of an ore reserve. The geologist in charge of the project will ultimately have to certify the reserves, since the success or failure of the entire mining project is, in the last analysis, dependent upon them. Obviously there are, in the case of any modern mining organization, a number of checks, and the exploration geologist will not stand alone in this exercise. However, some one individual eventually is required to sign off on the ore reserve. There are two kinds of numbers which will be generated, and the two should not be confused. The first is the mineral inventory. The mineral inventory is simply a listing of the pounds of metal contained within a given volumetric limit. This inventory may be stated without reference to the cost of extraction. The second number, the ore reserve estimate, is best described by reflecting that ore is often defined as “naturally occurring material which can be removed from the ground at a financial profit.” This estimate must be made in connection with determinations of mining plans, the geometry of the ore itself and of internal waste, and obviously in reference to forecast metal prices.
Of course, the ore reserve estimate is the prime objective of detailed drilling. Secondary objectives include provision of material for metallurgical testing, rock mechanics information, and of course, the best possible determination of the geology. The determination of tonnage is a fairly routine arithmetic calculation. Practice varies from organization to organization, but it is common to make a measurement of the specific gravity of each run of core. If the metric system is in use, the volume of material expressed in cubic meters is simply multiplied by the specific gravity and the tonnage thereby determined. If the English system is in use, a cubage factor must be determined, and the volume of rock, as expressed in cubic feet, divided by this factor.
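The tonnage arithmetic just described can be sketched in a few lines. The block dimensions, specific gravity, and cubage factor below are illustrative numbers, not values from the text.

```python
def tonnage_metric(volume_m3, specific_gravity):
    # tonnes = cubic metres x specific gravity (density relative to water)
    return volume_m3 * specific_gravity

def tonnage_english(volume_ft3, cubage_factor):
    # tons = cubic feet / cubic feet per ton (the "cubage factor")
    return volume_ft3 / cubage_factor

# A 100 x 100 x 10 m panel of rock with an assumed SG of 2.7:
print(round(tonnage_metric(100 * 100 * 10, 2.7)))   # 270000 tonnes
# English units: an assumed cubage factor of 12.5 ft3/ton means 12.5
# cubic feet of this rock weighs one short ton.
print(round(tonnage_english(1_000_000, 12.5)))      # 80000 tons
```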
Determination of grade is a much more complicated matter. Corporate policy may dictate the method to be used, regardless of the type of mineral deposit under study. In the vast majority of operating mines in the United States, a simple polygon system is employed. Under this method of ore reserve calculation, the volume of ore closest to a sample point is assigned the grade of that sample. Efforts are made to design polygons so that the boundary between adjoining sample points is more or less equidistant between them. The polygon technique has served many mines well over many years, but it sometimes fails to adequately predict trends within ore bodies which may, in fact, greatly influence grade. If such a condition should be suspected, it might be well to consider some form of moving average method similar to that devised by Dr. D.G. Krige, the famous South African mining engineer and geostatistician. Under this technique, now known as kriging, it is possible to assign a value of metal content in an area where there is no sample point by considering the values of all the existing adjoining sample points. Sometimes, because all adjacent sample points are considered in calculating the grade of a block, some blocks are assigned higher values than are indicated by a sample point within the block. In extreme circumstances, ore can be extrapolated through a block in which there is a blank drill hole. Some engineers have extreme difficulty accepting a mathematical or statistical estimate which will draw ore grade contours through a blank drill hole. Regardless of their discomfort, Krige's method or trend surface analysis may be the only way to accurately determine grade.
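The contrast between the two estimation styles can be illustrated with a toy. The drill-hole coordinates and grades below are invented, and simple inverse-distance weighting stands in for the statistically derived weights that kriging itself would use; only the nearest-sample (polygon) versus all-samples (moving average) distinction comes from the text.

```python
import math

# Hypothetical drill-hole collars (x, y) and grades; purely illustrative.
holes = [((0, 0), 1.2), ((100, 0), 0.4), ((0, 100), 0.9), ((100, 100), 0.1)]

def polygon_grade(point, holes):
    """Polygon method: the block takes the grade of its nearest hole."""
    return min(holes, key=lambda h: math.dist(point, h[0]))[1]

def moving_average_grade(point, holes, power=2):
    """Inverse-distance moving average: every hole contributes, weighted
    by 1/distance^power.  (Kriging replaces these weights with
    statistically derived ones, but the structure is the same.)"""
    weights = [1.0 / math.dist(point, xy) ** power for xy, _ in holes]
    return sum(w * g for w, (_, g) in zip(weights, holes)) / sum(weights)

block_centre = (40, 40)
print(polygon_grade(block_centre, holes))                   # 1.2, nearest hole only
print(round(moving_average_grade(block_centre, holes), 3))  # a blend of all four
```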
As previously noted, the geologist must remember that, in addition to the ore reserve data he must produce, he must also provide data which will aid in such matters as pit design and predictions of wall stability.
It is apparent from the foregoing that the exploration and mining geologist, to be really good at his job, must be expert in a wide range of fields. He must know geology and must be current in new geological understanding. He must be a good pragmatic prospector, must understand basic finance and economics, and must be good at logistics in order to organize and complete drilling and other kinds of exploration projects under his direction. In the following sections various authors describe in greater detail those factors which are important in the geological and exploration work concerned with deposits containing various important minerals and commodities.

source : http://books.smenet.org/Surf_Min_2ndEd/sm-ch02-sc01-ss00-bod.cfm#1

Thursday, 16 October 2008

Open Pit Optimization

INTRODUCTION

Computer hardware, and to a lesser extent software, has for the last 20 years consistently advanced at a rate which has exceeded all expectations. As a result, calculations which were difficult or impossible to do only a few years ago can now easily be completed on a computer small enough to fit on a desk and costing only a few months’ salary. What is more, the calculations can be done by users with very little knowledge of computers.

Pit optimization is a field which has benefited greatly from this process in recent years, and we can now go far beyond simple optimization of a pit outline. Thorough sensitivity work, which has often only received lip service in the past, can now be carried out routinely on every ore body that is examined. Management can be offered the real possibility of trading profit for reduced corporate risk in an explicit manner.

Pit optimization was touched upon briefly in the previous section, but we will now go into it in much more detail and describe what can be done at the time of writing (early 1990). There will undoubtedly be further developments.

THE MEANING OF PIT OPTIMIZATION

The first thing to realize is that any feasible pit outline has a dollar value which can, in theory, be calculated.

By feasible, here, we mean that no wall slope is steeper than the rock can support after allowing for the insertion of haul roads and safety berms. That is, we are talking about overall pit slopes.

To calculate the dollar value we must decide on a mining sequence and then conceptually mine out the pit, progressively accumulating the revenues and costs as we go. If we wish to allow for the time value of money—that is the fact that a dollar we receive today is more valuable than one that we (might) receive next year—then we must discount the revenues and costs by a factor which increases with time.
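The conceptual mine-out with discounting can be written down directly. The yearly cash flows and the 10% rate below are invented for illustration; the structure follows the accumulate-and-discount procedure just described.

```python
# Conceptually mining out a pit outline: accumulate revenues and costs
# year by year, discounting for the time value of money.  The cash
# flows and the discount rate are assumptions.
def pit_value(cash_flows, rate):
    """Net present value: cash_flows[t] is revenue minus cost in year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Heavy stripping first, ore revenue later ($ millions per year):
flows = [-5.0, -2.0, 4.0, 6.0, 5.0]
print(sum(flows))                        # 8.0 undiscounted
print(round(pit_value(flows, 0.10), 3))  # about 4.411: late dollars count less
```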

The second thing to realize is that in doing this calculation we have, in effect, allocated a value to every cubic meter or to every block of rock. What is more, we have allocated these values without taking any account of the mining which has gone before, except that the value may depend on the position of the block and the effect that its position has on haulage distances.

Current computer optimization techniques attempt to find the feasible pit outline which has the maximum total dollar value. The good ones guarantee that there is no single block or combination of blocks which can be added to or subtracted from the outline to produce an increase in total outline value. That is, they guarantee the absolute mathematical maximum. They also exclude any block combinations which have a zero value.

Once we have fixed the block values and the slopes, we have fixed the optimal outline, and it is important to make the point that there is only one optimal outline. If we assume that there are two outlines of the same value, then it is easy to show that the two taken together would produce an outline of higher value. Consequently the assumption of the existence of two different optimal outlines of equal value is false.

If the block values increase then, in general, the optimal pit gets bigger. If the slopes increase then, in general, the optimal pit gets deeper.

Of course, we have to know the pit outline in order to calculate the values of the blocks, particularly if the time value of money is important. Conversely, we have to know the block values in order to find the optimal outline. We therefore have a chicken and egg situation, and we will return to this.

A SIMPLE EXAMPLE

Let us assume that we have a flat topography and a vertical rectangular ore body of constant grade as is shown in Fig. 5.3.1. Let us further assume that the ore body is sufficiently long in strike for end effects to be ignored. Under these circumstances, we only have to concern ourselves with a section.

Figure 5.3.1.

In this simplified case there are eight possible pit outlines that we can consider, and the tonnages for these outlines are given in Table 5.3.1.

If we assume that ore is worth $2.00 per tonne after all mining and processing costs have been paid, and that waste costs $1.00 per tonne to remove, then we obtain the values shown in Table 5.3.2 for the possible pit outlines.

When plotted against pit tonnage, these values produce the graph in Fig. 5.3.2. With these very simple assumptions the outline with the highest value is number five.

Figure 5.3.2.
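The calculation behind the curve is easy to reproduce. Only the $2.00/t ore value and the $1.00/t waste cost come from the text; Tables 5.3.1 and 5.3.2 are not reproduced here, so the shell tonnages below are invented to give the same qualitative shape, with the value peaking at outline five.

```python
# Reworking the section's example with hypothetical shell tonnages.
ORE_VALUE, WASTE_COST = 2.00, 1.00   # $/t, as given in the text

# (ore, waste) tonnes (thousands) for eight progressively larger
# outlines; each deeper outline adds ore but strips ever more waste.
outlines = [(10, 2), (20, 8), (30, 18), (40, 32), (50, 50),
            (60, 75), (70, 105), (80, 140)]

values = [ore * ORE_VALUE - waste * WASTE_COST for ore, waste in outlines]
print(values)                        # rises, peaks, then falls away
best = values.index(max(values)) + 1
print(best)                          # outline 5 has the highest value
```

Note that outlines four and six come out close to the peak, echoing the flat-topped curve discussed below.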

There are other things that we can learn from this curve.

Firstly, outlines four and six have values which are close to that of outline five, and this is not just an artefact of this particular ore body. For any continuous ore body, as the pit is expanded towards optimality, the last shell which is added will have only a small positive value. If it had a large one, there would probably be another positive shell to follow. This means that in this case, and in the vast majority of real ore bodies, the curve of value against tonnage is smooth and surprisingly flat at the peak. It is common to find that a 10% range of pit tonnage covers only a 1% range of pit value. The trick is to find the peak, and good optimizers guarantee to do this.

Secondly, consider Fig. 5.3.3. If we are working without an optimizer and doing a detailed design for a realistically complex ore body, then we might be working away from the peak at ‘A,’ where changes in pit tonnage can have a significant effect on the value of the pit. In fact, generations of mining engineers have learned that a series of small adjustments, involving a great deal of work, can significantly affect the profitability of the mine. Contrast this with starting from an optimized outline at ‘B.’ From this point, providing that ore and waste are kept in step with each other, it is difficult to go wrong. Certainly there is no need to experiment with small adjustments. Since, with modern software, we can plot this graph for real ore bodies, we can actually find out how much freedom of movement we have before we start the detailed design. In other words, designs based on optimized outlines are very much easier to do.

Figure 5.3.3.

THE EFFECTS OF SCHEDULING ON THE OPTIMAL OUTLINE

When we schedule a pit, we plan the sequence in which various parts of it will be mined and the time interval in which each is to be mined. This affects the value of the mine because it determines when various items of revenue and expenditure will occur. This is important because the dollar we have today is more valuable to us than the dollar that we are going to receive or spend in a year’s time. There are various reasons for this:

  • Delayed revenue may increase our need to borrow funds and pay interest, thus reducing the effective revenue;
  • Delayed revenue may not eventuate—one of the risk factors;
  • Delayed expenditure may reduce our need to borrow funds and pay interest, thus reducing the effective expenditure;
  • Something unexpected may go wrong with the operation—another risk factor; etc.

The standard way to allow for this is to discount next year’s dollar by a certain percentage and to apply that idea cumulatively into the future. Thus we discount future revenues and costs by a particular discount rate and reduce them all to a net present value.

There are two discount rates. The notional discount rate is applied to actual revenues and costs which are likely to occur. That is, revenues and costs which follow the inflation rate. Thus the notional rate (typically 20%) includes an allowance for inflation. It is correct to use this, provided that we inflate our revenues and costs for future years. However, we are then in the position of guessing at the future inflation rate and then guessing at a figure to correct for it! It is easier to work out revenues and costs in today’s dollars and then to use the real discount rate (typically 10%), which does not allow for inflation.
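The text's typical figures (20% notional, 10% real) can be tied together numerically. The compounding identity linking the two rates through inflation is a standard assumption of this sketch, not something spelled out in the text: (1 + notional) = (1 + real) x (1 + inflation).

```python
# Relating the notional and real discount rates through inflation.
def real_rate(notional, inflation):
    # (1 + notional) = (1 + real) * (1 + inflation), solved for "real"
    return (1 + notional) / (1 + inflation) - 1

# With inflation near 9.1%, a 20% notional rate is a 10% real rate.
print(round(real_rate(0.20, 0.0909), 3))
```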

In what we will call worst case mining, each bench is mined completely before the next bench is started. Waste at the top of the outer shells is mined early, and the cost is discounted less than the revenue from the corresponding ore which is mined much later. This can make the outer shells uneconomic. The optimal pit for worst case mining is thus generally smaller than is indicated by simple optimization using today’s costs and revenues. This can easily be seen by referring to Fig. 5.3.1.

In what we will call best case mining, each shell is mined in turn and thus the related ore and waste is mined in approximately the same time period. In this case, the optimal pit is usually close to the one obtained by simple optimization. Unfortunately, if we try to mine each shell separately, mining costs usually increase and cancel out some of the gains.

In small pits, worst case mining may be the only possibility. The larger the pit, the more opportunity there is for creative sequencing, and the closer it is possible to get to best case mining.

PRODUCTION OF A DETAILED DESIGN FROM AN OPTIMAL OUTLINE

The precise method used in creating a detailed pit design depends on the tools which are available. It may be done entirely by hand, or with varying degrees of computer assistance.

Whatever the method, the aim is to produce a detailed design which deviates as little as possible from the outline provided by optimization. Where deviation is unavoidable, we try to balance extra tonnage in one place with reduced tonnage in another. The resultant design should in most cases contain ore and waste tonnages very similar to those contained by the optimal outline. If it is not possible to achieve this, then it may be that the slopes were not set correctly for the optimization. For example, insufficient allowance may have been made for the effect of haul roads.

While all reasonable steps should be made to follow the optimal outline, the shape of the graph shown in Fig. 5.3.2 should be borne in mind. Provided that waste is not included without the ore which it uncovers, small deviations from the outline have little or no effect on the pit value. A useful concept is to say that the spirit of the outline should be followed rather than the detail. Certainly the square edges of the blocks on the outer surface of the outline are irrelevant. As a starting point, a smooth line should be drawn through them as is shown in Fig. 5.3.4. Remember that the block edges are artefacts, they do not represent geological or grade boundaries.

Figure 5.3.4.

The achievement of the necessary minimum mining widths at the bottom of the pit is often cited as a problem with pit optimization. This problem is more apparent than real in that, for large disseminated or near-horizontal ore bodies, the necessary adjustments at the bottom of the pit are usually easy, and, for steeply dipping reef structures, it may be possible to put extra constraints into the optimization so as to ensure the necessary width. In the remaining cases, some loss of pit value will be involved in adjusting the bottom of the pit, but it should never exceed 1 or 2%.

THE AVAILABLE OPTIMIZATION METHODS

All currently available methods of optimization attempt to find the optimal outline in terms of a block model. That is, they try to find the list of blocks which has the maximum total value while still obeying the slope constraints.

The sheer scale of this problem is seldom appreciated.

Trial and Error

Consider a trivial model with only one section and 10 benches of 10 blocks. If we take a very simple-minded approach, each of the 100 blocks can either be mined or not, so there are 2^100, or about 10^30, alternatives, many of them not feasible. Even if a computer could assess a million alternatives a second, it would still take three million times the current age of the universe to find the best one!

If the allowable slope is one block up or down at each column change, and we use this information to ensure that we try only feasible alternatives, the number of alternatives is reduced to 10 × 3^9, or about 200,000. A computer could easily assess this number of alternatives. However, if we extend the model to 10 sections, the number of alternatives rises to 10 × 2^99, or about 10^30 again, and we still have only 1,000 blocks, which is insufficient for serious work.

Put simply, trial and error is useless.
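The arithmetic behind these counts is easy to check:

```python
# Checking the trial-and-error estimates for the 10 x 10 section model.
blocks = 100                      # one section: 10 benches x 10 columns
naive = 2 ** blocks               # every block independently mined or not
print(naive)                      # about 1.27e30

# Feasible profiles only: 10 starting depths for the first column, then
# up, down, or level at each of the 9 column changes.
one_section = 10 * 3 ** 9
print(one_section)                # 196,830 -- the "about 200,000"

# Ten sections, still only 1,000 blocks: on the order of 10 * 2^99.
ten_sections = 10 * 2 ** 99
print(ten_sections > 10 ** 30)    # True -- back to an impossible count
```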

Floating Cone

The floating cone method has been popular because it is easy to program and easy to understand. It works by searching through the block model for ore blocks and then assessing the value of the inverted cones which have to be mined to expose them. If the value of a cone is positive, it is mined out and all the blocks it includes are changed to air blocks. The search then continues.

Unfortunately, this simple-minded approach rarely finds the optimal pit because of two distinct problems; one causes it to omit profitable ore from the pit and the other causes it to include non-profitable ore.

The first occurs because it cannot try all possible combinations of ore blocks, as that would be a trial and error process, and we have seen that that is computationally unreasonable. Most pits are viable in part at least because numbers of ore blocks combine to pay for the stripping of waste above them, when no individual block or even close group of blocks can do so. The floating cone method cannot detect this co-operation between different parts of the ore body if neither part is viable in its own right.

The second occurs for slightly more technical reasons. In Fig. 5.3.5 there are three small ore bodies and their corresponding waste volumes, with their values and costs shown. A floating cone program will examine A and will find that the corresponding cone has a total value of (40 - 20 - 30) = -10, and so is not worth mining. It will then examine B, will find a cone of value (200 - 80 - 30) = +90, and will convert it to air, leaving the values shown in Fig. 5.3.6.

Figure 5.3.5.

Figure 5.3.6.

If a floating cone program is to work correctly, whenever it converts a cone to air, it should start searching again at the top of the model. However, this is computationally very expensive so that most programs continue their search downwards and would consider C next.

At this time the cone for C has a total value of (40 - 50 + 40 - 20) = +10, so that the program mines it. This should not happen, because some of the value of ore body A is being used to help pay for the mining of waste (the -50 region) which is below it. The true optimal pit in this case includes A and B, but not C.
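The cone arithmetic of Figs. 5.3.5 and 5.3.6 can be laid out explicitly. The four component values are those given in the text; the last line infers C's stand-alone cone from the same figures.

```python
# Reproducing the floating cone trap from the worked example.
cone_A = 40 - 20 - 30        # -10: A's cone is not worth mining
cone_B = 200 - 80 - 30       # +90: B's cone is mined and becomes air

# Continuing the search downwards, the program reaches C.  Its cone now
# sweeps up ore body A's +40 as well as the -50 waste region below A:
cone_C_after_B = 40 - 50 + 40 - 20   # +10, so the program mines it ...

# ... wrongly: without A's value propping it up, C's cone is negative.
cone_C_without_A = -50 + 40 - 20     # -30

print(cone_A, cone_B, cone_C_after_B, cone_C_without_A)
```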

Apart from being easy to understand and program, the one advantage that the floating cone method has over other methods is that, if instead of using just one block the program uses a disk of blocks as its starting point, then this can ensure a particular minimum mining width at the bottom of the pit.

Two-Dimensional Lerchs-Grossmann Method

In 1965 Lerchs and Grossmann gave two different methods for open pit optimization in the same paper. One works on a single section at a time. It only handles slopes which are one block up or down and one across, so that the block proportions have to be chosen so as to create the required slopes. This method is easy to program and is reliable in what it does, but, since sections are optimized independently, there is no guarantee that successive sections can be joined up in a feasible manner. Consequently a good deal of manual adjustment is usually required to produce a detailed design. The end result is erratic and unlikely to be truly optimal.

Two later variants of this method exist. One (Johnson and Sharp, 1971) uses the two-dimensional method both along sections and across them, in an attempt to join them up. The other (Koenigsberg, 1982) uses a similar idea but works in both directions at once. Both are restricted to slopes which are defined by the block proportions, and neither honors even these slopes at 45° to section. This last point is best illustrated by running the programs on a model which contains only one (very valuable) ore block. The resulting pit is diamond shaped rather than circular, with slopes correct in the E-W and N-S directions, but much too steep in between.

Three-Dimensional Lerchs-Grossmann and Network Flow

The second method given by Lerchs and Grossmann (1965) was based on a graph theory method, and Johnson (1968) published a network flow method of optimizing a pit. Both guarantee to find the optimum in three dimensions regardless of block proportions. Both, naturally, give the same result.

Both are difficult to program for a production environment where there are large numbers of blocks. Nevertheless this has been achieved and programs are now available which can run on any computer from a PC upwards. Most of these use the Lerchs-Grossmann method.

Because these programs guarantee to find the sub-set of blocks with the absolute maximum value consistent with the slope constraints, the alterations to the pit outline caused by small slope or block value changes are reliable indicators of the effect of such changes. This has opened up the field of real sensitivity analysis, where the effects of slope, price and cost changes can be measured accurately. With other methods, only the crudest sensitivity work is possible.

This has led to the development of programs which automate some aspects of sensitivity analysis to the point where graphs of net present value against, say, total pit tonnage, can easily be plotted. Further mention of this will be made later.
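The guarantee these methods give can be demonstrated on a toy model. The sketch below is not the Lerchs-Grossmann code itself: it uses the equivalent network-flow formulation (the route Johnson's method takes), reducing the pit problem to a minimum cut and solving it with a plain Edmonds-Karp max-flow. The one-section block model, its values, and the one-block-up-or-down slope rule are all invented for illustration.

```python
from collections import deque

values = [                     # 3 benches x 5 columns, values assumed
    [-1, -1, -1, -1, -1],      # surface bench: waste
    [-2,  4, -2,  4, -2],      # two ore blocks
    [-3, -3, 10, -3, -3],      # one rich ore block at depth
]

def max_flow(cap, s, t):
    """Edmonds-Karp on a dict-of-dicts of residual capacities."""
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:           # BFS for a shortest path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return
        v, push = t, float("inf")              # find the bottleneck ...
        while parent[v] is not None:
            push = min(push, cap[parent[v]][v])
            v = parent[v]
        v = t                                  # ... and push along the path
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, 0) + push
            v = u

def optimal_pit(values):
    rows, cols = len(values), len(values[0])
    S, T, INF = "s", "t", float("inf")
    cap = {S: {}, T: {}}
    for r in range(rows):
        for c in range(cols):
            cap[(r, c)] = {}
    for r in range(rows):
        for c in range(cols):
            v = values[r][c]
            if v > 0:
                cap[S][(r, c)] = v             # positive blocks fed by source
            elif v < 0:
                cap[(r, c)][T] = -v            # negative blocks drain to sink
            if r > 0:                          # must first mine the 3 blocks above
                for dc in (-1, 0, 1):
                    if 0 <= c + dc < cols:
                        cap[(r, c)][(r - 1, c + dc)] = INF
    max_flow(cap, S, T)
    # The optimal pit is the source side of the minimum cut: everything
    # still reachable from the source in the residual graph.
    seen, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return {b for b in seen if b not in (S, T)}

pit = optimal_pit(values)
print(len(pit), sum(values[r][c] for r, c in pit))   # 9 blocks, value 11
```

In this example the pit takes the whole surface bench, the three middle blocks of the second bench, and the rich block below them; no single positive block pays for its own cone, yet the combination does, which is exactly the co-operation a floating cone misses.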

CALCULATING BLOCK VALUES

The correct calculation of block values is essential for any optimization. If the block values are wrong, the optimized pit outline will also be wrong.

For optimization purposes, there are two basic rules which must be followed when calculating the value of a block.

The First Rule

Calculate the block value on the assumption that it HAS been uncovered and that it WILL be mined.

No allowance for assumed stripping ratios should be made, because stripping is precisely what pit optimization works out. If a stripping ratio is assumed when calculating the block values, the result of the optimization is being prejudged.

Similarly, take no notice of any pre-conceived breakeven cutoff. The use of a breakeven cutoff can be helpful in manual pit design; it is inappropriate for optimized pit design. A consequence of this is that a block model in which only rock containing grades above a breakeven cutoff is designated as ore, is also inappropriate for pit optimization.

The only relevant cutoff in this context is that grade at which the revenue from recovered product will just pay for the cost of processing and any extra mining cost which is only applicable to ore.
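A minimal valuation sketch following the first rule might look as below. Every price and cost is an invented assumption; only the rule itself (value the block as if uncovered and certain to be mined, with the processing-pays cutoff) comes from the text.

```python
# First-rule block valuation: no stripping ratio, no breakeven cutoff.
MINING_COST = 1.0       # $/t, paid for every block, ore or waste (assumed)
PROCESSING_COST = 8.0   # $/t, ore only (assumed)
EXTRA_ORE_COST = 0.5    # $/t, ore-only mining extras, e.g. grade control
PRICE = 2200.0          # $/t of recovered product (assumed)
RECOVERY = 0.90         # fraction of contained metal recovered (assumed)

def block_value(tonnes, grade):
    """grade is a fraction, e.g. 0.005 = 0.5% metal."""
    revenue_per_t = grade * RECOVERY * PRICE
    # The only relevant cutoff: recovered product must pay for
    # processing plus the ore-only mining extras.
    if revenue_per_t > PROCESSING_COST + EXTRA_ORE_COST:
        return tonnes * (revenue_per_t - PROCESSING_COST
                         - EXTRA_ORE_COST - MINING_COST)
    return tonnes * -MINING_COST    # treated as waste

print(round(block_value(10_000, 0.0060)))  # pays its way as ore
print(round(block_value(10_000, 0.0030)))  # below cutoff: mining cost only
```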

Second Rule

Include any on-going cost which would stop if mining were stopped.

This is because, when the optimization program is adding a block to the pit outline, it is effectively extending the life of the mine. It must therefore pay for all the costs involved in extending the life of the mine.

Incremental costs such as fuel costs, wages, etc. must obviously be included in the cost of mining or processing, whichever is involved.

Overhead costs WHICH WILL STOP IF MINING STOPS must also be included. If the mine throughput is to be limited by the overall mining capacity, then these overheads should be included in the mining costs. If the throughput is to be limited by the processing capacity, then these overheads should be included in the processing cost, because only the addition of an ore block extends the life of the mine.

Nonrecoverable upfront costs, such as the cost of building access roads, should not be included in the costs used in optimization. Although these may be paid for with a loan which is to be repaid over a number of years, these repayments will be required whether mining continues or not. If the value of the optimized pit is less than the nonrecoverable upfront costs, then the mine should not be proceeded with.
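The overhead allocation in the second rule reduces to one division. The figures below are invented; the point, from the text, is that in a plant-limited mine only an added ore block extends the life, so life-dependent overheads belong in the processing cost.

```python
# Second rule, sketched: overheads that stop when mining stops are
# carried by whichever capacity limits the life of the mine.
ANNUAL_OVERHEAD = 4_000_000   # $/yr of site costs that vanish at closure
PLANT_CAPACITY = 2_000_000    # t of ore per year, the limiting throughput

# Plant-limited: spread the overhead over ore tonnes and add it to the
# processing cost used in block valuation.
overhead_per_ore_tonne = ANNUAL_OVERHEAD / PLANT_CAPACITY
print(overhead_per_ore_tonne)  # $/t of ore added to the processing cost
```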

BLOCK SIZES

There are four block sizes which are relevant in this work.

For Outlining the Ore Body

The size of the block that is needed for outlining the ore body depends on the shape and size of the ore body and on the particular computer modeling package that is being used. It may be quite small, which can lead to a model consisting of millions of blocks.

For Calculating Block Values

The value of blocks should be calculated with a block size which is similar to the selective mining size. That is, a parcel of rock should not be so small that it could not be mined separately, nor so large that grades are artificially smoothed. This block is sometimes bigger than that needed for outlining the ore body, requiring blocks to be combined and their grades averaged.

For Designing a Pit

There is now considerable experience in pit design using optimization techniques and, assuming that the pit occupies most of the width and length of the model and that the outline is not too convoluted, then a full model of 100,000 to 200,000 blocks is usually more than sufficient for pit design purposes. This leads to a block size which may be bigger than that for calculating values.

If it is necessary to re-block the value model, then it should be done by adding component block values and NOT by averaging grades.
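A small example shows why adding values and averaging grades are not the same once a cutoff is involved. The value function, cutoff, and grades below are all invented.

```python
# Why re-blocking must add component values, NOT average grades.
CUTOFF = 0.5          # % metal (assumed)

def value_per_tonne(grade):
    # above cutoff: $20 per grade-percent less $3/t processing;
    # below cutoff: $2/t waste handling (all figures assumed)
    return grade * 20.0 - 3.0 if grade >= CUTOFF else -2.0

sub_grades = [0.8, 0.0, 0.0, 0.0]   # one ore sub-block, three waste

# Correct: value each sub-block, then add the values.
right = sum(value_per_tonne(g) for g in sub_grades)    # 13 - 6 = 7

# Wrong: average the grades, then value the combined block once.
avg = sum(sub_grades) / len(sub_grades)                # 0.2, below cutoff
wrong = len(sub_grades) * value_per_tonne(avg)         # -8

print(right, wrong)   # averaging the grades throws the ore away
```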

For Sensitivity Work

If we want to do a series of optimizations using, say, different product prices so as to plot a graph of pit value against price, a model of 20,000 to 50,000 blocks will give just the same shape of graph with a very small shift of absolute value. Thus, most optimizations for sensitivity work can be done very quickly and this approach generally leads to a much more thorough sensitivity analysis.

Again, re-blocking should be done by adding values and not by averaging grades.

SENSITIVITY WORK

Although an optimized block outline and the corresponding detailed design are not the same, they do have a close relationship and, provided a good optimizer is used, are very similar in value. Consequently, when comparing two designs, the difference in value between the two optimal block outlines will be very similar to the difference in value between the two detailed designs. This means that sensitivity work can be carried out without doing any detailed designs at all.

Also, because a good optimizer produces a result which is objective and single-valued, it is quite reasonable to take note of small value differences due to, say, changing the slopes by a few degrees. This is not true when designs are done by hand, because an engineer will probably produce different designs on different days, without any change of slope.

During sensitivity work, we explore the economic and slope sensitivity of the mine. We sort out the general scale of mining and hence the operating costs. We decide approximately where the haul roads are to go and adjust the slopes in these regions to the average slope.

This requires a large number of quick optimization runs. However, it is probably the most valuable part of the whole design exercise because it inevitably leads to a much better understanding of the ore body and its economics. Graphs can be prepared which show how various characteristics of the mine, such as value or tonnage, are related to product price, costs, etc.

Probably the most significant graph is the one shown in Fig. 5.3.7. This relates net present value (NPV) to total pit tonnage for a given throughput and product price.

Figure 5.3.7.

First, a set of optimal outlines is prepared, where each is optimal for a different product price. For some fixed product price, each of the outlines is then scheduled as though it were to be the limiting pit. If an automated practical scheduling scheme is available, it should be used. In producing Fig. 5.3.7, two limiting schedules have been used. Best case scheduling involves mining with many small pushbacks or cutbacks. Although in no sense a practical schedule, it indicates the highest possible NPV. Worst case scheduling involves completing the mining of each bench before starting the next. This is usually practical, but produces the lowest possible NPV.

The NPV for any practical mining schedule must lie somewhere between the two lower curves, with smaller pits tending towards the bottom curve and larger pits providing opportunities to get nearer to the middle curve.
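
The effect of schedule timing on NPV can be illustrated with a toy discounted-cash-flow calculation. The cash flows below are invented for illustration; both schedules move the same material and earn the same undiscounted total, but the best-case style schedule reaches positive cash flow earlier.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (first entry = year 1)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical $M cash flows for the same total pit tonnage.
best_case  = [30, 20, -10, -10]   # many small pushbacks: ore revenue comes early
worst_case = [-10, -10, 20, 30]   # bench-by-bench: stripping paid for up front
```

At a 10% discount rate the best-case schedule is worth more even though the totals are identical, which is why the two limiting schedules bound the NPV of any practical schedule.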

This graph, which can be plotted for different product prices, is the single most useful presentation known to the writer. It is meaningful to engineers, accountants, and management alike and can usefully be discussed in committee. It allows profit and corporate risk, in the form of mine life (pit tonnage), to be related and traded off explicitly. Once a pit size has been chosen, it is easy to use the corresponding pit outline as a starting point for the detailed design.

This graph can be prepared by using any good optimizer and by doing a lot of work. However, software now exists which will produce the data for it automatically and quickly.

CONCLUSION

We have seen how good pit optimizers can be used not only to help design ultimate pit outlines, but also to carry out sensitivity analysis to an extent which is not possible without them.

Pit optimization is a tool which, used properly, can greatly speed and ease the process of pit design and can significantly increase the value of most pits. It can also be used to reduce the corporate risk involved in mining.

REFERENCE LIST

Johnson, T.B., 1968, “Optimum Open Pit Mine Scheduling,” Ph.D. Diss. University of California, Berkeley, CA, 120 pp.

Johnson, T.B., and Sharpe, R.W., 1971, “Three Dimensional Dynamic Programming Method for Optimal Ultimate Pit Design,” Report of Investigation 7553, US Bureau of Mines.

Koenigsberg, E., 1982, “The Optimum Contours of an Open Pit Mine: An Application of Dynamic Programming,” Proceedings, 17th APCOM Symposium, AIME, New York, pp. 274–287.

Lerchs, H., and Grossmann, I.F., 1965, “Optimum Design of Open Pit Mines,” CIM Bulletin, Canadian Institute of Mining and Metallurgy, Vol. 58, January.


source : http://books.smenet.org/Surf_Min_2ndEd/sm-ch05-sc03-ss00-bod.cfm

Ultimate Pit Definition

INTRODUCTION

There are probably as many ways of designing an ultimate open pit as there are engineers doing the design work. The methods differ according to the size of the deposit, the quantity and quality of the data, the availability of computer assistance, and the assumptions of the engineer.

As the first step for long or short-range planning, the limits of the open pit must be set. The limits define the amount of ore minable, the metal content, and the associated amount of waste to be moved during the life of the operation. The size, geometry, and location of the ultimate pit are important in planning tailings areas, waste dumps, access roads, concentrating plants, and all other surface facilities. Knowledge gained from designing the ultimate pit also aids in guiding future exploration work.

In designing the ultimate pit, the engineer will assign values to the physical and economic parameters discussed in the previous section. The ultimate pit limit will represent the maximum boundary of all material meeting these criteria. The material contained in the pit will meet two objectives.

  1. A block will not be mined unless it can pay all costs for its mining, processing, and marketing and for stripping the waste above the block.
  2. For conservation of resources, any block meeting the first objective will be included in the pit.

The result of these objectives is the design that will maximize the total profit of the pit based on the physical and economic parameters used. As these parameters change in the future, the pit design may also change. Because the values of the parameters are not uniquely known at the time of design, the engineer may wish to design the pit for a range of values to determine the most important factors and their effect on the ultimate pit limit.

MANUAL DESIGN

The manual method of designing pits involves considerable time and judgment on the part of the engineer. The usual method of manual design starts with the three types of vertical sections shown in Fig. 5.2.1:

Figure 5.2.1.

  1. Cross sections spaced at regular intervals parallel to each other and normal to the long axis of the ore body. These will provide most of the pit definition and may number from 10 to perhaps 30, depending on the size and shape of the deposit and on the information available.
  2. A longitudinal section along the long axis of the ore body to help define the pit limits at the ends of the ore body.
  3. Radial sections to help define the pit limits at the ends of the ore body.

Each section should show ore grades, surface topography, geology (if needed to set the pit limits), structural controls (if needed to set the pit limits), and any other information that will limit the pit (e.g., ownership boundaries).

The stripping ratio is used to set the pit limits on each section. The pit limits are placed on each section independently using the proper pit slope angle.

The pit limits are placed on the section at a point where the grade of ore can pay for mining the waste above it. When a line for the pit limit has been drawn on the section, the grade of the ore along the line is calculated and the lengths of the ore and waste are measured. The ratio of waste to ore is calculated and compared to the breakeven stripping ratio for the grade of ore along the pit limit. If the calculated stripping ratio is less than the allowable stripping ratio, the pit limit is expanded. If the calculated stripping ratio is greater, the pit limit is contracted. This process continues on the section until the pit limit is set at a point where the calculated and breakeven stripping ratios are equal.
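
The expand-or-contract search above is, in effect, a root-finding problem, and can be sketched as a bisection. Here `section(x)` is a hypothetical, user-supplied function that measures the ore and waste lengths along a trial limit line at position x, and the waste:ore ratio is assumed to grow monotonically as the limit expands.

```python
def locate_pit_limit(section, breakeven, x_in, x_out, tol=1e-3):
    """Bisect the pit-limit position on one section until the measured
    waste:ore ratio matches the breakeven stripping ratio.

    section(x) returns (ore_length, waste_length) for a trial limit
    line at position x; x_in is a position safely inside the pit,
    x_out one safely outside it.  (Hypothetical interface.)
    """
    while x_out - x_in > tol:
        x = (x_in + x_out) / 2
        ore, waste = section(x)
        if waste / ore < breakeven:
            x_in = x       # stripping still pays: expand the pit
        else:
            x_out = x      # stripping too high: contract the pit
    return (x_in + x_out) / 2
```

For a toy section where one unit of ore carries x units of waste at limit position x, a breakeven ratio of 1.3:1 places the limit at x ≈ 1.3.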

In Fig. 5.2.2, the grade on the right side of the pit was estimated to be 0.6% Cu. At a price of $2.25 per kg of copper, the breakeven stripping ratio from Fig. 5.2.3 is 1.3:1. The line for the pit limit was found using the required pit slope and located at the point that gave a waste:ore ratio of 1.3:1 at the limit.

Figure 5.2.2.

Figure 5.2.3.

On the left side of the section, the pit limit for the 0.7% Cu grade was similarly determined using a breakeven stripping ratio of 1.7:1. If the grade of the ore changed as the pit limit line was moved, the breakeven stripping ratio to use would also change.

The pit limits are established on the longitudinal section in the same manner with the same stripping ratio curves. The pit limits for the radial section are handled with a different stripping ratio curve, however. As shown in Fig. 5.2.4, the cross sections and the longitudinal section represent a slice along the pit wall with the base the same length as the surface intercept. The radial section represents a narrow portion of the pit at the base and a much wider portion at the surface intercept. The allowable stripping ratios must be adjusted downward for the radial sections before the pit limit can be set.

Figure 5.2.4.

The next step in the manual design is to transfer the pit limits from each section to a single plan map of the deposit. The elevation and location of the pit bottom and the surface intercepts from each section are transferred. If a pit slope change occurred on a section, its position is also transferred.

The resultant plan map will show a very irregular pattern of the elevation and outline of the pit bottom and of the surface intercepts. The bottom must be manually smoothed to conform to the section information.

Starting with the smoothed pit bottom, the engineer will develop the outline for each bench at the point midway between the bench toe and crest. The engineer manually expands the pit from the bottom with the following criteria:

  1. The breakeven stripping ratios for adjacent sections may need to be averaged.
  2. The allowable pit slopes must be obeyed. If the road system is designed at the same time, the interramp angle is used. If the preliminary design does not show the roads, the outline for the bench midpoints will be based on the flatter overall pit slope that allows for roads.
  3. Possible unstable patterns in the pit should be avoided. These would include any bulges into the pit.
  4. Simple geometric patterns on each bench make the designing easier.

When the pit plan has been developed, the results should be reviewed to determine if the breakeven stripping ratios have been satisfied. The pit can be divided into sectors on the pit plan and each sector checked for the waste:ore ratio. Two ways the stripping ratios for each sector can be checked are:

  1. The pit limits from the pit plan maps can be transferred back to the sections and the stripping ratio can then be calculated from the sections.
  2. The bench outlines can be transferred to each individual bench map. The ore and waste lengths are measured along the bench outline for each sector. The results for each bench are combined to calculate the stripping ratio for that sector. The ore grade for the sector is the weighted average (by length) of the grade of the ore along the pit limit for each bench.

The total reserves for the pit and the average stripping ratio are determined by accumulating the values from each bench. On each bench the ore tonnes above the breakeven cutoff grade are measured and the average grade of the ore is calculated. The tonnes of waste are also measured. The ratio of the total tonnes of waste to the total tonnes of ore over all benches gives the average stripping ratio for the pit.
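
The bench-by-bench accumulation can be sketched as follows (the bench figures in the test are invented; the grade is tonnage-weighted, and the stripping ratio is the ratio of total waste to total ore):

```python
def pit_totals(benches):
    """Accumulate bench results into pit totals.

    benches: iterable of (ore_tonnes, ore_grade, waste_tonnes), one
    entry per bench.  Returns (ore, waste, stripping_ratio, grade),
    where grade is the tonnage-weighted average ore grade.
    """
    ore = sum(o for o, g, w in benches)
    waste = sum(w for o, g, w in benches)
    grade = sum(o * g for o, g, w in benches) / ore
    return ore, waste, waste / ore, grade
```
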

COMPUTER METHODS

As should be appreciated, the manual design of a pit gets the planning engineer closely involved with the design and increases the engineer’s knowledge of the deposit. The procedure is cumbersome, though, and is difficult to use on large or complex deposits. Because of the lengthiness of the procedure, the number of alternatives that can be examined is limited. As more information is gathered or if any of the design parameters change, the entire process may have to be repeated. Another drawback to the method of manual design is that the pit may be well designed on each section, but, when the sections are joined and the pit is smoothed, the result may not yield the best overall pit.

The growth of computer usage has allowed engineers to handle greater amounts of data and to examine more pit alternatives than with manual methods. The computer has proved to be an excellent tool for storing, retrieving, processing, and displaying data from mining projects. Computer applications have been developed to take much of the burden of pit design from the engineer.

The computer efforts can be divided into two groupings:

  1. Computer-assisted methods. The calculations are done by the computer under the direct guidance of the engineer. The computer does not do the entire design but performs the bulk of the calculation work with the engineer controlling the process. Examples would be the two-dimensional Lerchs-Grossman technique and the three-dimensional design using an incremental pit expansion method.
  2. Automated methods. These are capable of designing the ultimate pit for a given set of economic and physical constraints without intervention by the engineer. One category of automated methods contains the mathematically optimal techniques using linear programming, dynamic programming, or network flows. A second category has the heuristic methods, such as the floating cone method, which produces an acceptable pit but not necessarily an optimal one. As the cost of computer processing decreases, better automated methods will be forthcoming.

Another characteristic differentiating the types of computerized methods is the use of either a whole or partial block for mining. In a whole block method, each block is mined either as a unit or left intact; in a partial block method, a portion of each block can be mined. Each type has certain advantages:

  1. Accuracy—With the use of partial blocks, the tonnage of small volumes can be calculated quite accurately. The overall tonnage of the pit may be accurate using a whole block method, but the accuracy is less for smaller volumes.
  2. Physical constraints—The desired pit slopes and pit boundaries are approximated by the mined blocks. The use of whole blocks may result in pit walls that are unacceptable in terms of operations and slope stability. Some whole block techniques may assume the block size is a function of the pit slope and some may not allow the slope to vary in the pit. Smoothing is usually required for an ultimate pit designed using whole blocks.
  3. Cost—When properly used, whole block methods have generally proven to be less costly in terms of computer costs than partial block methods. As a result, several pit configurations can be quickly analyzed with a whole block method to give a good basis for a more detailed partial block analysis.

Lerchs-Grossman Method

The two-dimensional Lerchs-Grossman method will design on a vertical section the pit outline giving the maximum net profit. The method is appealing because it eliminates the trial-and-error process of manually designing the pit on each section. The method is also convenient for computer processing.

Like the manual method, the Lerchs-Grossman method designs the pit on vertical sections. The results must still be transferred to a pit plan map and manually smoothed and checked. Even though the pit is optimal on each section, the ultimate pit resulting from the smoothing is probably not optimal.

The example in Fig. 5.2.5 represents a vertical section through a block model of the deposit. Each square represents the net value of a block if it were independently mined and processed. Blocks with a positive net value have been shaded in the figure. The block size has been set in the example so that the pit profile will move up or down only one block at most as it moves sideways.

Figure 5.2.5.

Step 1

Add the values down each column of blocks and enter these numbers into the corresponding blocks in Fig. 5.2.6. This is the upper value in each block of Fig. 5.2.6 and represents the cumulative value of the material from each block to the surface.

Figure 5.2.6.

Step 2

Start with the top block in the left column and work down each column. Put an arrow in the block pointing to the highest value in:

  1. the block one to the left and one above,
  2. the block one to the left,
  3. the block one to the left and one below.

Calculate the bottom value for the block by adding the top value to the bottom value of the block the arrow points to. The bottom value in each block represents the total net value of the material in the block, the blocks in the column, and the blocks in the pit profile to the left of the block. Blocks marked with an X cannot be mined unless more columns are added.

Step 3

Scan the top row for the maximum total value. This is the total net return of the optimal pit. For the example, the optimal pit would have a value of $13. Trace the arrows back to get the outline of the pit. Figure 5.2.7 shows the pit outlined on the section. Note that even though the block on row 6 at column 6 has the highest net value in the deposit, it is not in the pit. To mine it would lower the value of the pit.

Figure 5.2.7.
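
Steps 1 through 3 can be sketched as a short dynamic program. This is a simplified reading of the procedure, not the handbook's exact arrow-table layout: the wall is tracked as a mining depth per column rather than by arrows, slopes are fixed at one block up or down per column, and the wall is required to return to the surface at or before the right edge of the model.

```python
def optimal_section_pit(values):
    """2-D Lerchs-Grossman-style dynamic program for one vertical section.

    values[i][j] is the net value of the block at depth i (0 = surface)
    in column j.  Returns (best_value, depths), where depths[j] is the
    number of blocks mined in column j (0 = column left unmined).
    """
    rows, cols = len(values), len(values[0])
    NEG = float("-inf")

    # Step 1: cum[d][j] = total value of mining column j down d blocks.
    cum = [[0.0] * cols for _ in range(rows + 1)]
    for j in range(cols):
        for d in range(1, rows + 1):
            cum[d][j] = cum[d - 1][j] + values[d - 1][j]

    # Step 2: sweep columns left to right.  best[d] is the value of the
    # best feasible wall profile that mines the current column down d
    # blocks; the wall may move at most one block up or down per column.
    best = [0.0] + [NEG] * rows        # virtual surface left of the model
    back = []
    for j in range(cols):
        new, choice = [NEG] * (rows + 1), [0] * (rows + 1)
        for d in range(rows + 1):
            p = max((q for q in (d - 1, d, d + 1) if 0 <= q <= rows),
                    key=lambda q: best[q])
            new[d], choice[d] = best[p] + cum[d][j], p
        back.append(choice)
        best = new

    # Step 3: the wall must daylight, so the pit ends in the top row
    # (depth 1) or with the last column unmined (depth 0); trace back.
    d = 0 if best[0] >= best[1] else 1
    value, depths = best[d], [0] * cols
    for j in range(cols - 1, -1, -1):
        depths[j] = d
        d = back[j][d]
    return value, depths
```

On a small section with one rich block at depth, the program mines the three overlying waste blocks plus one block on each flank, exactly the stepped profile the manual trace produces.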

INCREMENTAL PIT EXPANSION

The incremental pit expansion technique is a trial-and-error process guided by the engineer. Although this method will not necessarily produce an optimal pit, in the hands of a skillful engineer it is a very powerful tool. Either whole or partial blocks can be used.

The engineer will digitize the outline of a new pit bottom or an expansion to a pit wall. The computer projects this shape upwards in conformance with the pit slopes to be used. The resulting expansion should be graphically shown to the engineer for confirmation that the increment is as expected.

If the expansion is agreeable to the engineer, a tabulation is done for the material in the increment. The shape of the expansion at the midpoint of each bench is used with the block values for the bench to calculate the grade, tonnes of ore, tonnes of waste, revenues, and costs of the increment. If the increment meets the criteria of the engineer, it is kept in the pit and another outline is digitized. In this manner, the size of the pit gradually grows as the engineer outlines each increment and decides if it meets the design criteria.

To be most effective, the design should progress from the upper benches downward and from the higher grade areas outward on each bench. This is to ensure that only those blocks that can pay for themselves will be included in the pit.

FLOATING CONE METHOD

The most popular automated method has been the floating cone method. The concept is similar to the incremental pit expansion but the manual intervention can be minimized or eliminated.

Instead of a digitized bottom, one block or a group of blocks forms the base of the expansion. If the grade of the base is above the mining cutoff grade, the expansion is projected upward to the top level of the model as in Fig. 5.2.8. The resulting cone is formed using the appropriate pit slope angles.

Figure 5.2.8.

All blocks that are encompassed by the cone (and are not considered previously mined) are tabulated for the costs of mining and processing and for the revenues derived from the ore. If the total revenues are greater than the total costs for the blocks in the cone, the cone has a positive net value and is economic to mine; the surface topography is then altered to reflect the simulated mining of the cone. If the cone value is not positive, the topography is left unchanged.

A second block is then examined, as shown in Fig. 5.2.9. Assuming the first cone had a positive value and was included in the pit, only the blocks in the shaded portion need be tabulated.

Figure 5.2.9.

Each block in the deposit is examined in turn as a base block of a cone. For a large model, this can be a costly process. The resulting pit is also dependent on the pattern in which the next base block is chosen. For example, a base block on an upper level may not have been economic when initially examined. If part of the waste covering it is stripped by mining a cone from a lower level, the block should again be checked before another block from a lower level is used as a base block. This is necessary to make each cone pay for itself.

Because of this potential problem, an engineer can intervene in the process. The engineer can define a smaller volume in which all base blocks will be checked by the computer. From the results of the cones in this smaller volume, the engineer can specify another volume to check. With this added control, the selection sequence of base blocks is less of a problem.
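
The cone test can be sketched in two dimensions as follows. This is a minimal illustration, not the handbook's method in full: slopes are fixed at 1:1, a positive net block value stands in for "above the mining cutoff grade", cones are truncated at the model edge, and the sequencing problem discussed above is handled crudely by repeating whole passes until no further cone pays.

```python
def floating_cone(values):
    """Simple 2-D floating cone with 1:1 slopes.

    values[i][j] is the net value of the block at depth i (0 = surface)
    in column j.  Each positive-value block is tried as a cone base;
    the cone is mined only if the unmined blocks it contains have a
    positive total value.  Passes repeat until nothing changes, which
    re-checks base blocks whose overburden was stripped by an earlier
    cone.  Returns a boolean mined/unmined grid.
    """
    rows, cols = len(values), len(values[0])
    mined = [[False] * cols for _ in range(rows)]

    def cone(i, j):
        # unmined blocks inside the 1:1 cone above base block (i, j),
        # truncated at the model edges
        return [(r, c) for r in range(i + 1)
                for c in range(max(0, j - (i - r)), min(cols, j + (i - r) + 1))
                if not mined[r][c]]

    changed = True
    while changed:
        changed = False
        for i in range(rows):
            for j in range(cols):
                if values[i][j] > 0 and not mined[i][j]:
                    blocks = cone(i, j)
                    if sum(values[r][c] for r, c in blocks) > 0:
                        for r, c in blocks:
                            mined[r][c] = True
                        changed = True
    return mined
```
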

REFERENCES

Barnes, M.P., 1980, Computer-Assisted Mineral Appraisal and Feasibility, AIME, New York.

Kim, Y.C., 1978, “Ultimate Pit Limit Design Methodologies Using Computer Models—The State of the Art,” Mining Engineering, Vol. 30, No. 10, pp. 1454–1459.

Koskiniemi, B.C., 1977, “Hand Methods in Open-Pit Mine Planning and Design,” Open Pit Mine Planning and Design, J.T. Crawford and W.A. Hustrulid, eds., AIME, New York, pp. 187–194.

Lerchs, H., and Grossmann, I.F., 1965, “Optimum Design of Open-Pit Mines,” Transactions, Canadian Institute of Mining and Metallurgy, Vol. 68, pp. 17–24.

Miller, V.J., and Hoe, H.L., 1982, “Mineralization Modeling and Ore Reserve Estimation,” Engineering and Mining Journal, Vol. 183, No. 6, pp. 66–74.

Soderberg, A., and Rausch, D.O., 1968, “Pit Planning and Layout,” Surface Mining, E.P. Pfleider, ed., AIME, New York, pp. 141–165.

Pana, M.T., and Davey, R.K., 1973, “Pit Planning and Design,” SME Mining Engineering Handbook, A.B. Cummins and I.A. Given, eds., AIME, New York, pp. 17.1–17.19.

Pana, M.T., and Davey, R.K., 1973a, “Open-Pit Mine Design,” SME Mining Engineering Handbook, A.B. Cummins and I.A. Given, eds., AIME, New York, pp. 30.7–30.19.

Taylor, H.K., 1972, “General Background Theory of Cutoff Grades,” Transactions (Section A: Mining Industry), Institution of Mining and Metallurgy, Vol. 81, pp. A160–A179.


source : http://books.smenet.org/Surf_Min_2ndEd/sm-ch05-sc02-ss00-bod.cfm

Wednesday, 15 October 2008

COAL

DEFINITION

Coal is a physically and chemically complex substance that has been defined in different ways over the years. Currently, the most widely accepted definition is that adopted by the American Society for Testing and Materials (ASTM), which is as follows:

“Coal is a readily combustible rock containing more than 50 percent by weight and more than 70 percent by volume of carbonaceous material including inherent moisture, formed from compaction and induration of variously altered plant remains similar to those in peat. Differences in the kinds of plant materials (type), in degree of metamorphism (rank), and in the range of impurity (grade) are characteristic of coal and are used in classification (ASTM, 1970, p.70).”

CLASSIFICATION

Because of the complexity of its physical and chemical properties and its varied uses, the classification of coal is not a simple task. Many classification schemes for coal have been proposed over the years using a variety of parameters as criteria. Of the various approaches to classification, rank is one of the more important. Rank is a measure of a coal’s thermal maturity, that is, its position in the coalification series. Coalification refers to the progressive transformation of peat through lignite, subbituminous, bituminous, and anthracite. The standard rank system used in North America is the ASTM system (Table 2.6.1). It is based primarily on fixed carbon, volatile matter, and calorific value and utilizes familiar rank terms such as lignite, bituminous, and anthracite. In the ASTM system, these terms have specific meaning with regard to the aforementioned parameters, but may conflict with meanings given to the same terms in another country’s classification system.
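
As a rough illustration of how the ASTM rank criteria in Table 2.6.1 fit together, the sketch below paraphrases the ASTM D388 class limits (fixed carbon on a dry, mineral-matter-free basis; calorific value on a moist, mineral-matter-free basis): high-rank coals are separated on fixed carbon, lower-rank coals on calorific value. Agglomerating-character distinctions are omitted, so boundary cases will not classify exactly as the standard requires.

```python
def astm_rank(fixed_carbon_pct, btu_per_lb):
    """Simplified ASTM D388-style rank lookup (thresholds paraphrased;
    agglomerating-character rules omitted for brevity)."""
    fc, btu = fixed_carbon_pct, btu_per_lb
    # high-rank coals: classified by fixed carbon
    if fc >= 98: return "meta-anthracite"
    if fc >= 92: return "anthracite"
    if fc >= 86: return "semianthracite"
    if fc >= 78: return "low volatile bituminous"
    if fc >= 69: return "medium volatile bituminous"
    # lower-rank coals: classified by calorific value
    if btu >= 14000: return "high volatile A bituminous"
    if btu >= 13000: return "high volatile B bituminous"
    if btu >= 11500: return "high volatile C bituminous"
    if btu >= 10500: return "subbituminous A"
    if btu >= 9500:  return "subbituminous B"
    if btu >= 8300:  return "subbituminous C"
    if btu >= 6300:  return "lignite A"
    return "lignite B"
```
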

Coals are also classified by type into two broad categories: (1) sapropelic or nonbanded coal, and (2) humic or banded coal. Nonbanded coals exhibit little or no apparent stratification, are frequently granular in texture, tend toward homogeneity, and are allochthonous in origin. Examples of nonbanded coals are boghead coal, composed primarily of algal remains, and cannel coal, which consists largely of spores.

Banded coals, by contrast, are composed of a series of layers which are parallel to the bedding and which can be distinguished on the basis of macroscopic characteristics such as luster, hardness, etc. These bands are known as lithotypes and are composed, in turn, of macerals, which are the microscopically identifiable components of coal. Macerals are defined on the basis of color, morphology, association, and fluorescence. Maceral analysis plays an important role in the coal evaluation process and yields valuable information concerning the nature of the paleoenvironment in which the coal was formed, the degree of thermal maturity of the coal (rank), and its suitability for particular uses.

Of the two types, banded coals are by far the more abundant and constitute the majority of the world’s coal resources.

ORIGIN OF COAL

Coal is formed by the accumulation and preservation of organic material (primarily from plants) in swamp, marsh, or bog environments. This plant material is altered into peat by complex biochemical processes that are still poorly understood. Peat accumulates very slowly relative to the human lifespan. Accumulation rates in Florida and the Mississippi Delta are from 0.5 to 1 mm/a, whereas in Borneo rates of up to 4 mm/a have been recorded. Generally, accumulation rates are higher in tropical climates than in temperate to cool climates although the higher growth rates are partially offset by slower decomposition rates in cooler climates. Peat can accumulate whenever accumulation rates are higher than the rate of decomposition. Most ancient coals probably originated in temperate to tropical climates (Bustin, et al., 1983).

As geological conditions change in peat-forming areas, the peat deposits may become buried by subsequent influxes of sediment. Sedimentation may continue for very long periods of time and, when coupled with the subsidence of the depositional basin, can result in the burial of the peat deposit under thousands of meters (feet) of sediment. Heat and pressure generated by the weight of the sedimentary column, in addition to biochemical and geochemical processes, cause the coal to increase in rank. The level of coalification attained is primarily a product of temperature and length of time of heating. Because in most stratigraphic sequences temperature increases uniformly with depth, the more deeply buried coals are generally of higher rank. Generally, as rank increases, porosity, volume, volatile constituents, and water decrease, while fixed carbon, density, heating value, and reflectance increase.

The main byproducts of coalification are methane, carbon dioxide, and water. Water is lost early in the coalification process and the ratio of methane to carbon dioxide increases with rank (Bustin, 1983). Large volumes of methane may be generated during coalification and can be produced and marketed either in conjunction with underground mining or as an independent venture. In fact, in the Black Warrior Basin of Alabama, 25% of natural gas production on an annual basis currently comes from degasification of deeply buried coal beds.

DEPOSITIONAL ENVIRONMENTS

The thickness, lateral distribution, composition, and quality of a coal bed are determined to a great extent by the depositional environment. Moreover, Horne, et al. (1978) found that the aforementioned characteristics were determined by the depositional environments that preceded, were coeval with, and immediately followed deposition of the peat. The preceding environment shapes the topography on which the peat is deposited and therefore affects the thickness and lateral extent of the deposit. Contemporaneous environments affect seam continuity and composition, whereas later environments may affect the peat by partial or complete removal of the deposit by erosion or, if brackish or marine waters are introduced, by alteration of peat chemistry and therefore of coal quality.

Coal-forming environments can be divided into two broad categories: (1) paralic, which refers to coastal or near-coastal marine settings, and (2) limnic, which refers to coals formed inland, usually in intermontane regions and under freshwater conditions. Generally, limnic coals are characterized by thick beds of limited lateral extent. Although some of the coals in the western United States are limnic in origin, most North American coal deposits appear to have formed in paralic environments.

Paralic environments can occur in back barrier, deltaic, or coastal and interdeltaic settings (Bustin, 1983). Back barrier coals develop landward of barrier islands, frequently in abandoned lagoonal basins that are formed between the barrier islands and the mainland. Back barrier coals are typically rather thin, laterally discontinuous deposits that are elongate parallel with depositional strike and that are usually high in sulfur and ash.

Coastal plain coals develop on low, relatively flat, subsiding coasts that have a high water table and little influx of sediment. Some of the more persistent coals in the Appalachians of the eastern United States may have been deposited in coastal plain settings. Modern coastal plain swamps that are active sites of peat accumulation include the Everglades of Florida and the Okefenokee Swamp of Georgia (Bustin, 1983).

Many ancient coals are interpreted to have formed in deltaic systems and thus depositional environments associated with deltas have been the subject of intensive investigation.

The following comments on coal-forming environments in deltaic systems are drawn from Horne, et al. (1978). Depositional modeling can be used to predict large-scale trends in coal deposits on a regional scale and is therefore useful in the initial phases of coal exploration. Further, small-scale variations in coal thickness, quality, and lateral continuity frequently can be predicted, providing data that can be extremely valuable in mine planning and development.

The following illustration (Fig. 2.6.1) was derived from a detailed database developed from the coal-bearing Carboniferous-age rocks of eastern Kentucky and southwestern Virginia and from similar environments in contemporary coastal areas. Figure 2.6.1 illustrates the typical shape and lateral extent of coal deposits which form in the different environments within the deltaic setting.

Figure 2.6.1.

Coals that form in lower delta plain environments are typically elongate parallel with depositional dip because the only environments suitable for peat accumulation are adjacent to relatively narrow levees on either side of distributary channels. Interdistributary bays occur between the distributary channels and are sites of accumulation of fine-grained bay-fill detrital sediments. Sites of peat accumulation on the lower delta plain are generally restricted to the elongate, relatively narrow areas between the levees and the interdistributary bays. Lower delta plain coals are usually relatively thin and contain splits caused by crevasse splays that breach the poorly developed levees along the distributary channels.

Upper delta plain-fluvial coals also tend to be elongate in the direction of depositional dip although they are not as continuous in that direction as the lower delta plain coals. Deposits typically formed as pod-shaped bodies on flood plains adjacent to coexisting meandering channels and exhibit significant thickness variations over short distances. Also, as in the case with lower delta plain coals, numerous splits can occur near the levees bordering active channels because of splays. Post-deposition shifting of channels can also complicate the sedimentary sequence by eroding the coal deposit and creating “washouts.”

In some locales, a transitional zone exists between the lower and upper delta plain environments that exhibits characteristics of both lower and upper delta plain sequences. In the transition zone between the lower and upper delta plains, many of the large interdistributary bays (flood basins) that occur between distributary channels have filled with sediment and provide broad basins in which large coal swamps can develop. These broad, relatively uninterrupted basins provide a favorable environment for the formation of coal deposits that are typically more laterally extensive than those of the lower and upper delta plain proper. Coals formed in this transitional zone share some characteristics with upper and lower delta plain coals such as splits that develop near levees and post-depositional washouts. Most of the more economically important coal beds in the Appalachian coal region are interpreted to have developed in this transitional zone between the lower and upper delta plains.

From the foregoing brief discussions, it is apparent that, in the initial phases of exploration, a knowledge of the depositional environments that control the shape and configuration of the coal body will enable explorationists to design a drilling program for maximum effectiveness and efficiency in defining the coal deposit. At the lease-tract or mine plan levels of exploration, more detailed drilling and evaluation may be desirable to predict areas of thick and/or high-quality coal.

Depositional environments also partially determine the sulfur content of coal deposits. Sulfur occurs in the form of iron sulfide (predominantly pyrite) in several ways in coal. A finely disseminated form sometimes referred to as framboidal pyrite is the most reactive form of pyrite and the most difficult to remove. It is so finely disseminated throughout the coal that it cannot be removed effectively in float-sink washability tests. Research suggests that framboidal pyrite originates from sulfur produced by microorganisms found in marine to brackish waters, but not in fresh water. It has been shown (Ferm, 1976; Caruccio, et al., 1977) that framboidal pyrite is most strongly associated with coals overlain by roof rocks deposited in marine to brackish-water environments. Exceptions occur when a blanket of sediment (such as a crevasse splay) is introduced early enough to shield the peat deposit from later marine to brackish-water transgressions. It follows that coals which formed in back barrier to lower delta plain environments are more likely to be overlain by sediments deposited by marine to brackish water and hence will be more likely to contain higher amounts of framboidal pyrite.

Coals that formed in transitional lower delta plain environments are subject to a mix of fresh and brackish to marine water influences and hence are highly variable in their sulfur content. Generally, however, transitional lower delta plain coals are considered to be lower in framboidal pyrite than coals deposited in lower delta plain and back barrier settings. This trend is thought to continue for coals formed higher in the delta plain in fluvial-upper delta plain settings where marine influence is uncommon. These coals are generally considered to be lower in finely disseminated pyritic sulfur than coals formed in other delta plain depositional settings. An understanding of the depositional setting in which a coal bed formed can therefore be used to predict the amount and type of sulfur present and to guide the exploration for low-sulfur coals in areas where sulfur contents are usually high.

Investigations by Caruccio, et al. (1977) and Horne, et al. (1976), serve as examples to illustrate the potential usefulness to mine developers of understanding the depositional history of a coal bed. Using a data base of 450 core holes in a 518-km2 (200-sq-mile) area located in the Appalachian coal region of the eastern United States, the investigators interpreted the target coal bed to have been deposited in a lower delta plain setting. Typically, coals interpreted as lower delta plain coals, where overlain by brackish to marine rocks, have sulfur contents of greater than 2% with 75% or more of the sulfur occurring in the form of framboidal pyrite (Caruccio, et al., 1977). Where deposits interpreted as freshwater splays were emplaced over the peat surface prior to the deposition of the marine rocks, the peat apparently was shielded from the sulfur-reducing bacteria, causing the sulfur content in the peat to remain low (Horne, et al., 1976).

Figures 2.6.2 and 2.6.3 summarize the investigative results of Horne (1978). Figure 2.6.2 is an interpretation of the depositional environments after deposition of a coal bed. The data suggest that the levees of a distributary channel in the southwestern part of the area were breached and splay deposits encroached to the north and east over the coal and into the marine-influenced interdistributary bay. Figure 2.6.3 shows the distribution of disseminated sulfur in a target bed. A comparison of Figs. 2.6.2 and 2.6.3 illustrates the expected association between areas where the coal is overlain by marine beds (the eastern part of Fig. 2.6.2) and higher sulfur concentrations. In the western and southern parts of the diagrams where the wedge of nonmarine splay deposits covered the coal, sulfur contents are correspondingly lower.

Figure 2.6.2.

Figure 2.6.3.

The relationships shown in these diagrams between disseminated sulfur content and specific depositional environments suggest that exploration drilling programs at the lease-tract level should be devised to gain an understanding of the depositional setting of a coal deposit and to define such depositional features as might cause significant variation in the physical or chemical characteristics of the coal.

STRUCTURAL FEATURES AND THEIR EFFECTS ON COAL DEPOSITS

Structural features are those features produced by post-depositional deformation or displacement of the rocks. Such features can form concurrently with, or shortly after, deposition of the sediment, such as slumps or differential compaction of soft sediments having different densities. These soft-sediment structures can and do sometimes affect the continuity of coal deposits. More commonly, however, it is structural features such as folds and faults that formed later in the history of the rocks as a result of tectonic forces that determine the present attitudes of the coal beds.

Inclined or Folded Strata

Rock sequences deform plastically under conditions of high temperature and confining pressure and hence may be tilted or folded into a series of subparallel to parallel upwarps and downwarps termed anticlines and synclines, respectively (Fig. 2.6.4). Folding may be so intense as to lift the strata to the vertical or even to an overturned position. All uplifted strata are more susceptible to subaerial erosion with areas of maximum uplift having the greatest degree of susceptibility. Therefore anticlinal crests are often severely denuded, creating a breached structure and interrupting the areal continuity of any coal beds. The tilting and folding of strata containing coal beds therefore complicates efforts toward correlation of beds from area to area and also imposes constraints on mining operations in areas of intense structural deformation. In most cases the overburden increases more rapidly away from the outcrop in downward pitching coal beds than in flat-lying deposits, reducing the amount of coal that is economically recoverable.

Figure 2.6.4.

Faults

A fault is a fracture or fracture zone along which displacement has occurred on one side of the fracture relative to the other. Faults are important considerations in coal exploration and mining and, depending on local conditions, can render an otherwise attractive area unsuitable for mining.

There are several types of faults defined by the direction of relative motion across the fault plane. The two types of faults most commonly encountered in coal exploration are normal faults and reverse faults. Normal faults occur where the block above the fault plane (termed the hanging wall) moves down relative to the lower block (the footwall) (Fig. 2.6.5). The effect of drilling through a normal fault is that of an apparent shortening of the rock section by the elimination of strata from the column of rock penetrated by the drill. This can be illustrated by visualizing vertical boreholes that penetrate the fault plane on the front panel of Fig. 2.6.5. The point where the wellbore enters the footwall is stratigraphically lower than the corresponding point on the hanging wall by an amount equal to the vertical displacement of the fault (AB).

Figure 2.6.5.

In the case of a reverse fault, the hanging wall moves up relative to the footwall, and repeated sections of strata are encountered (Fig. 2.6.6). Vertical displacement is represented by line AB and, once again, each borehole allows a different interpretation of the nature and position of the target coal bed. Indeed, if the middle borehole was the only source of data, an observer might conclude that two coal beds were present if the intervening strata were not carefully evaluated. These examples emphasize the need for a carefully planned drilling program, especially in areas where existing data indicate the presence of faulting. Where faults are known to occur, the drilling program must be designed to yield sufficient data to allow adequate mapping of the type and extent of faulting present as well as the amounts of displacement so that the effects on the coal beds can be accurately determined.

Figure 2.6.6.

Joints and Cleats

Joints are fractures in a rock mass across which no displacement has occurred. Joints are commonly planar, occur in groups of subparallel to parallel fractures called sets, and may extend, both vertically and laterally, for distances from as little as a few centimeters (inches) up to many tens of meters (feet) or more. Where jointing is prevalent, it can be a factor in mine planning because it represents existing planes of weakness in the overburden along which the rock will preferentially break during mining. Surface mine highwalls are therefore sometimes planned to parallel the orientation of dominant joint trends and hence take advantage of these natural fracture systems to facilitate blasting and overburden removal.

Cleats are naturally occurring fractures in coal beds (primarily in bituminous coals) that are morphologically analogous to jointing in rocks. Cleats typically occur in two mutually perpendicular sets. Fractures of the dominant set are called face cleats. Face cleats are penetrative, closely spaced fractures that serve as primary conduits for fluids such as methane gas, which is a byproduct of coalification, and ground water. Butt cleats form the complementary, less dominant cleat set and are typically irregular, nonpenetrative fractures that stop against a face cleat, occur over a broader range of orientations, and serve, to a lesser extent, as conduits for fluids. Because of their permeability, cleats in general, and especially face cleats, are often sites of mineralization and deposits of minerals such as pyrite, calcite, and others.

Cleat orientations can be important in mine planning for much the same reasons as joints, that is, they represent natural planes of directional weakness which can facilitate the cutting and loading of an exposed coal bed in a surface mine. Although probably of lesser importance generally than jointing in rocks, cleat orientations have determined, in certain cases, mine layout and the direction of mining.

Clastic and Igneous Intrusions

Perhaps of lesser importance in most locales than the previously discussed structural features is the intrusion of either clastic (sedimentary) material or igneous masses into a sequence of coal-bearing rocks. These intrusions may parallel bedding planes or cut across bedding. In the former case, the features are called sills, in the latter, dikes. These structures can range in thickness from a fraction of a millimeter (inch) up to many tens of meters (feet) and, in certain mining locales, can present significant problems. In the case of a clastic intrusion, the intruded material is waste material and must be separated and removed from the coal but does not alter the physical characteristics of the coal. In an igneous intrusion, the coal in the immediate vicinity of the intrusion is thermally altered. The alteration can result in an increase in rank or even the coking of immediately adjacent coal. An added problem with igneous intrusions is that the igneous rocks are much harder than coal and the associated sedimentary rocks, thereby increasing the difficulty of mining in these areas.

EVALUATION OF COAL DEPOSITS

Determination of the Amount of Coal in Place

Once the decision has been made to proceed with a detailed evaluation of the coal deposits on a particular tract with the purpose of opening a surface mine, a data base must be generated at a level of detail sufficient to characterize the coal and overburden. A number of geological or geophysical techniques can be used to provide data. In areas where a blanket of unconsolidated material was deposited on the erosional upper surface of the underlying bedrock, seismic refraction, seismic reflection, or, in some cases, gravity surveys can reveal the configuration of the bedrock surface. Also, faults with vertical displacements no smaller than 6.1 m (20 ft), or under ideal conditions 4.6 m (15 ft), can be identified using seismic techniques (Daly, et al., 1976). In the event that igneous intrusives are present, gravity or magnetic techniques can be used to assist in definition of the igneous-sedimentary boundary.

All of the foregoing techniques can, under certain conditions, supply useful data to the coal explorationist but, as general exploration tools, they lack the resolving power for widespread exploration in the coal industry. The carefully planned drilling program remains the primary exploration technique in the coal industry and provides the bulk of the raw data from which coal and overburden characterization maps are made and upon which mining decisions are based.

At the lease tract level, drilling is used primarily to define areas of thick coal and to determine coal quality. These data are then used to calculate measured reserves. Drill-hole density necessary to prove reserves varies with the complexity of the geology and the degree of consistency in coal bed thickness. In areas of structural complexity or where coal bed thickness is highly variable, drill-hole spacing may be as close as one hole every 1.6 ha (4 acres) (Reilly, 1968). Conversely, in geologically undisturbed areas where coal bed thickness is relatively constant, drill holes are sometimes spaced 0.4 km (0.25 mile) or more apart. Local variations in coal-quality parameters (such as sulfur content) constitute another reason to increase drilling density if those parameters are critical in determining the marketability of the coal. Accuracy of the reserve estimate should be within 20% and the drilling program should be geared to produce figures at this level of accuracy (Wier, 1976).

In planning a surface mine, coring of all the exploratory holes probably will not be required. A sufficient number of holes should be cored to allow the geologist to determine the depositional environment of the coal and thereby to make decisions for location of additional test holes and for mine planning. Data from the cored holes will also be useful in determining the type of blasthole equipment and bits as well as types of mining equipment that will be most appropriate. Otherwise, exploratory holes can be drilled by the less expensive air-rotary method. When a coal sample is needed in a particular area, a rotary hole is drilled to establish the elevation of the coal. A second hole is then put down immediately adjacent to the first, still using the rotary bit but stopping the hole just above the position of the coal bed as established by the previous hole. The core barrel and bit are then substituted for the rotary bit and the coal bed is cored.

Systematic sample or core descriptions and recording of coal thicknesses and depths are necessary to insure reliable integration of the data onto the various interpretive maps which are important tools in the evaluation process.

All exploratory holes should have geophysical logs run soon after the drilling is completed. Effective geophysical logging can reduce the number of drill holes required to evaluate a property by maximizing the data obtained from each hole. Geophysical logs serve as a check on written logs and provide a precise record of coal depth and thickness. Geophysical logs of cored holes also provide a means of identifying lithologies in intervals where the core is lost by comparing the logging tools’ response to different lithologies in other cored intervals. The basic borehole geophysical suite (calibrated density, gamma, resistivity) can provide the following data: (1) coal thickness and depth; (2) lithologic data; (3) depositional data—nature of contacts and vertical stratigraphic sequences; (4) hydrologic data—aquifers, lost circulation zones, water levels; (5) identification of structural features and stratigraphic sequences; (6) recognition and correlation of specific coal intervals by their individual log signatures; and (7) recognition of subtle mineralogic changes, such as alteration, that are difficult to discern from cuttings (modified from Crowder, 1986).

More sophisticated geophysical surveys can provide many more types of data such as the coal quality parameters of ash, carbon, volatile matter, heat content, moisture, mineral matter, and rank, at greater cost. Crowder (1986) estimates the cost of a basic geophysical logging suite at 10 to 20% of a total rotary drilling budget with more sophisticated logging suites increasing the cost to as much as 50%. A partial list of geophysical tools and their application to coal exploration is given in Table 2.6.2.

Once the data are assembled, the very basic task of correctly identifying and correlating the coal beds throughout the area of interest must be performed. Extra care must be taken in areas where multiple coal beds of varying thickness are present in close stratigraphic proximity to each other. Incorrect identification of the beds can result in a misleading evaluation of the property, which, in turn, can cause severe problems in the mining, preparation, or marketing aspects of the operation. Geologists commonly use physical characteristics of the overburden, physical and chemical characteristics of the coal bed, distinctive stratigraphic markers or sequences, signatures on geophysical logs, and any other pertinent data to assist in the correct identification and correlation of the coal beds. In more complex areas, additional data may have to be obtained in certain parts of the tract by the drilling of more closely spaced holes before correlations can be made with confidence.

When the coal bed stratigraphy has been worked out, it is useful to construct a series of maps using data obtained from field investigations, drilling, and laboratory work that will depict coal and overburden thickness, overburden to coal ratio, and significant analytical parameters. These maps may be prepared as a series of registered mylar overlays to allow simultaneous viewing of different combinations of data or they may be constructed separately. A structure contour map with the target coal bed as the datum horizon is also an extremely useful method to depict structural or stratigraphic features that affect the topography of the coal bed. Alternatively, the available data can be entered into a computer data base and appropriate software programs can be utilized to portray stratigraphic, mining, or economic conditions at desired scales.

Coal and overburden thickness maps (termed isopachs) are constructed by plotting the appropriate thickness values on a base map and constructing contour lines (isopachs) representing regularly increasing or decreasing thickness intervals using the plotted values as guides for positioning the contour lines. An example of a coal isopach map is given in Fig. 2.6.7. Similar isopach maps can be constructed showing overburden thickness. Isopleth maps can be constructed using the same principle as the isopach maps, but substituting various coal quality parameters for the thickness data (Fig. 2.6.8). This type of data presentation would have application where relatively small variations in coal quality would significantly impact the marketability of the coal.
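The gridding step that underlies an isopach map can be sketched with a simple inverse-distance-weighted interpolator. This is only one common estimation scheme (kriging and other methods are also used), and the drill-hole coordinates and thicknesses below are hypothetical:

```python
import math

def idw_thickness(holes, x, y, power=2.0):
    """Inverse-distance-weighted estimate of coal thickness at (x, y)
    from drill holes given as (x, y, thickness) tuples."""
    num = den = 0.0
    for hx, hy, t in holes:
        d = math.hypot(x - hx, y - hy)
        if d < 1e-9:              # the point coincides with a drill hole
            return t
        w = 1.0 / d ** power      # nearer holes get more weight
        num += w * t
        den += w
    return num / den

# A contouring routine would evaluate this on a regular grid and
# trace isopach lines through cells of equal estimated thickness.
```

A point midway between two holes of equal distance receives the average of their thicknesses, which is the behavior one would expect of any sensible contouring scheme.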

Figure 2.6.7.

Figure 2.6.8.

Using the same coal and overburden thickness data, lines representing overburden to coal ratios or “mining ratios” (expressed as 5:1, 10:1, etc.) can be drawn. A ratio of 5:1 means that the line represents the limit at which 0.9 t (1 ton) of coal can be extracted by removing not more than 3.8 m3 (5 cu yd) of overburden. Strictly speaking, this is not a true ratio because it has dimensions of overburden volume per unit weight of extracted coal. The inclusion of these units gives rise to a difference between ratios computed in English units and ratios computed using metric units.

Ratios of thickness values alone cannot be used to deduce mine economics. These values must be converted to cubic meters (cubic yards) of overburden per tonne (ton) of coal to provide data to the analyst in units that can be more readily equated to mining costs. The conversion from feet (or meters) to cubic yards (or cubic meters) per ton (or metric ton) is outlined as follows from Wier (1976):

Ratio (cu yd per ton) = (1613.3 × OB) / (1359.7 × C × SG)

or

Ratio (m3 per tonne) = OB / (C × SG)

where OB is the thickness of overburden in feet (meters), C is the thickness of the coal in feet (meters), and SG is the specific gravity of coal. Because of the differences in the units of the English and metric systems, ratios calculated in the metric system are about 0.84 (0.842778) that of the English system. If specific gravity is not known, but the rank of the coal is known, the following specific gravity values are commonly used (Averitt, 1975):
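As a numerical check on this conversion, here is a minimal sketch in Python. It assumes 1613.3 cu yd of overburden per acre per foot of thickness (43,560 ft2 per acre divided by 27 ft3 per cu yd) and 1359.7 tons of coal per acre-foot at unit specific gravity; the bed dimensions are hypothetical:

```python
def ratio_english(ob_ft, c_ft, sg):
    # cu yd of overburden per short ton of coal:
    # 1613.3 cu yd of overburden per acre per ft of thickness,
    # 1359.7 tons of coal per acre per ft at unit specific gravity
    return (1613.3 * ob_ft) / (1359.7 * c_ft * sg)

def ratio_metric(ob_m, c_m, sg):
    # m3 of overburden per tonne of coal (1 m3 of water = 1 tonne)
    return ob_m / (c_m * sg)

# 100 ft of overburden on a 5-ft bituminous bed (SG 1.32)
english = ratio_english(100.0, 5.0, 1.32)
metric = ratio_metric(100 * 0.3048, 5 * 0.3048, 1.32)
```

The metric figure comes out at about 0.843 of the English one, matching the 0.842778 factor quoted in the text.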

Rank            Specific gravity
Anthracite      1.47
Bituminous      1.32
Subbituminous   1.30
Lignite         1.29

It should be noted, however, that it is important to use correct specific gravity values whenever possible: if the coal contains much mineral matter, which has a higher specific gravity than coal, an average value will skew the results by decreasing the ratio and increasing calculated tonnages per unit area by a possibly significant amount.

Once the overburden lines have been established, the area can then be divided into mining units according to the mining plan and reserves can be calculated. The terms “measured,” “indicated,” and “inferred” are commonly used to define reserve categories, with measured reserves having the highest level of reliability and inferred reserves the lowest. The distance between points of measurement distinguishes the different reliability categories. Wood, et al. (1983) use 0.8, 2.4, and 9.7 km (0.5, 1.5, and 6 miles) as the maximum distance between points of measurement for measured, indicated, and inferred coal, respectively. In mining, however, typical exploration drill-hole spacing is sufficiently close to classify all reserves as measured.
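The distance cutoffs of Wood, et al. (1983) can be applied mechanically; the sketch below classifies a reserve block by the distance to its nearest point of measurement (the function name and unclassified fallback are illustrative only):

```python
def reserve_category(nearest_hole_km):
    # Wood, et al. (1983) maximum distances to a point of measurement
    if nearest_hole_km <= 0.8:    # within 0.5 mile
        return "measured"
    if nearest_hole_km <= 2.4:    # within 1.5 miles
        return "indicated"
    if nearest_hole_km <= 9.7:    # within 6 miles
        return "inferred"
    return "unclassified"
```

On a typical surface-mine drilling grid every block falls in the first branch, which is why all reserves are usually carried as measured.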

The first reserve determination is for total coal in place. The basic calculations in metric and English units, and using specific gravity for a given coal as determined in the laboratory, are as follows:

Tonnes of coal in place = C × A × SG

where SG is specific gravity, C is thickness of coal in meters, and A is area in square meters.

In English units:

Tons of coal in place = 1359.7 × C × A × SG

where C, A, and SG are the same terms as in the metric equation (with C in feet and A in acres), and 1359.7 is a constant required to establish the correct tonnage factor to use with the English units.
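A worked sketch of both in-place calculations, with hypothetical bed dimensions:

```python
def tonnes_in_place(thickness_m, area_m2, sg):
    # metric: 1 m3 of water weighs 1 tonne, so tonnes = C x A x SG
    return thickness_m * area_m2 * sg

def tons_in_place(thickness_ft, area_acres, sg):
    # English: 1359.7 tons per acre per ft at unit specific gravity
    # (43,560 ft2/acre x 62.4 lb/ft3 / 2000 lb/ton, rounded)
    return 1359.7 * thickness_ft * area_acres * sg

# a 5-ft bituminous bed (SG 1.32) under 100 acres
in_place_tons = tons_in_place(5.0, 100.0, 1.32)
```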

If average specific gravity values are used, the following values are frequently used as tonnage factors in a simplified form of the equation.

Rank            Tons of coal per acre per ft of thickness
Anthracite      2000
Bituminous      1800
Subbituminous   1770
Lignite         1750

In the case of bituminous coal the calculation would then be as follows:

Tons of coal in place = 1800 × C × A

where C is thickness of coal in feet and A is area in acres.

Resource figures calculated from the preceding equations are estimates only and are not precise, but, with a thorough exploration drilling program, should be accurate to ± 10% (Wier, 1976). This figure must then be adjusted downward to account for losses incurred during excavating, handling, processing, and transporting the coal. Anticipating the magnitude of these losses is usually accomplished by drawing on previous mining experience in the area, and in the case of processing or preparation loss, by reviewing washability studies. In some cases, cumulative loss can approach 50% of the total in situ reserve.
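Adjusting the in-place figure for losses can be sketched as below. Chaining the loss factors multiplicatively is one simple convention (not prescribed by the text), and the percentages are purely illustrative:

```python
def recoverable(in_place_tons, loss_fractions):
    # apply mining, handling, preparation, and transport losses in
    # sequence; each loss is a fraction of the tonnage remaining
    for loss in loss_fractions:
        in_place_tons *= (1.0 - loss)
    return in_place_tons

# e.g., a 20% mining loss followed by a 25% preparation (washing)
# loss on 1,000,000 tons in place
net = recoverable(1_000_000, [0.20, 0.25])
```

With these example factors the cumulative loss is 40%, illustrating how quickly combined losses can approach the 50% figure cited above.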

Coal Quality

The determination of coal quality is an integral part of the coal exploration process. It is as important as any of the other factors that are used to determine the mining potential of a given tract. Unlike many of the other factors, however, certain coal quality parameters can be changed to meet user specifications by coal cleaning technology and/or by blending with other coals. The following discussion touches on the more commonly used types of analytical data from the viewpoint of the information they convey to the explorationist about the suitability of his product for certain end uses.

Analytical procedures for testing coal have been continuously refined and updated for the past several decades by ASTM (American Society for Testing & Materials). Many major consumers and large coal-mining companies maintain well-equipped laboratories to analyze coal. The consumer does this to check the quality of the coal he purchases, whereas the producer must stay abreast of the quality of the product, especially as the mining operations advance into new territory. Analytical work is sometimes contracted out to independent laboratories either to run the primary analyses or to serve as quality control checks on the mining company’s or consumer’s results.

The analyses most frequently performed on coal include proximate analysis, calorific value, and sulfur. The proximate analysis consists of the determination of moisture, volatile matter, fixed carbon, and ash. Other types of analyses frequently performed on coal include ultimate analyses, the determination of free swelling index, and determination of trace element content.

Moisture: Moisture occurs naturally in coal beds in a number of different forms and has been determined by many methods. The most common method is through some procedural variant of the measurement of weight loss upon heating. These procedures measure the surface moisture and the inherent moisture, which is that moisture contained in the capillary system of the coal. Moisture contained in the molecular structure of the mineral matter present in coal (water of hydration) is not accounted for by this type of determination, nor does it include the moisture liberated by the thermal decomposition of organic matter in the coal. The water of hydration is commonly assigned a value of 8% of the ash value and the water of decomposition is not considered significant in most applications. For a more detailed discussion of analytical techniques used in this and other aspects of coal characterization, see Rees (1966). Total or “as-received” moisture, which is the value frequently given in coal analyses, is a combination of surface and inherent moisture and is used for calculating other parameters to the as-received basis. Total or as-received moisture values are critical because coal contracts are often based on as-received calorific values, usually measured in British thermal units per pound of coal, which are obtained by converting dry calorific values to as-received calorific values using total moisture content. Because of the tonnages involved in most contracts, and the fact that most contain penalty clauses for coal that does not meet specifications, even a small error in the moisture content value used to determine as-received calorific value can result in significant financial losses. Moisture content also plays an important role in handling and processing coal. As little as 0.5% surface moisture can cause coal to stick in a chute. Higher moisture contents also cause a decreased coke yield in coke ovens.

Volatile Matter: Volatile matter measurements do not reflect the actual amount of a given substance present in a coal sample but rather are measures of thermal decomposition products that form during the heating of a coal sample under rigidly specified conditions. Examples of volatile materials driven off in the heating process include water, hydrogen, carbon dioxide, carbon monoxide, hydrogen sulfide, chlorine, tar, ammonia, and a variety of organic compounds.

Volatile matter is a parameter used in some coal classification systems. It is used indirectly in the ASTM system for distinguishing between coals of medium volatile bituminous and higher rank. Volatile matter values provide useful information in matching specific coals to appropriate combustion equipment and are also of importance in selecting processes and conditions for the gasification and liquefaction of coal. As a general rule, the best metallurgical grade coking coals contain between 15 and 31% volatile matter and are ranked low- to medium-volatile bituminous in the ASTM system.

Fixed Carbon: Fixed carbon is the carbon that remains in the sample after determination of the volatile matter. The numerical value of fixed carbon is obtained by subtracting the sum of moisture, ash, and volatile matter from 100.

Fixed carbon values are used on a dry, mineral-matter-free basis as boundaries between coals ranked as medium-volatile bituminous and higher in the ASTM system. Because the amount of fixed carbon and ash is an approximation of the amount of coke produced, the fixed carbon value is used to estimate coke yield. It is also used in calculating the efficiencies of combustion equipment.
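The by-difference definition of fixed carbon translates directly to code; the sketch below assumes all percentages are reported on the same basis:

```python
def fixed_carbon(moisture_pct, ash_pct, volatile_pct):
    # fixed carbon = 100 - (moisture + ash + volatile matter)
    return 100.0 - (moisture_pct + ash_pct + volatile_pct)
```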

Ash: Ash is the noncombustible residue that is left when coal is burned. This ash residue derives from two basic sources within the coal bed including: (1) extraneous detrital particles of shale, clay, etc., and secondary mineral material such as calcite, pyrite, and marcasite; and (2) inorganic elements chemically bound in the organic compounds making up the coal. The detritus and secondary minerals make up the most significant part of the ash content. It should be noted that the terms ash and mineral matter are not synonymous. Ash, as stated previously, is the residue left after burning a quantity of coal in the presence of air. The amount of mineral matter present in a coal can be determined by a point count performed on a specially prepared coal sample using a petrographic microscope. A simpler method for determining mineral matter if the ash and total sulfur values are known is by using the Parr formula as follows:

MM = 1.08A + 0.55S

where MM is mineral matter, A is percentage of ash in the sample, and S is the percentage of total sulfur in the sample. The Parr formula is probably the method most widely used in the United States for determining mineral matter content.
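As a sketch, the Parr estimate can be computed directly from the ash and total sulfur percentages (MM = 1.08A + 0.55S is the standard Parr form; the sample values are hypothetical):

```python
def parr_mineral_matter(ash_pct, sulfur_pct):
    # Parr formula: MM = 1.08A + 0.55S (all values in percent)
    return 1.08 * ash_pct + 0.55 * sulfur_pct
```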

The amount of ash contained in a coal, as well as the ash’s composition, affect the coal’s performance and therefore its success in the marketplace. Even in very clean coals the ash content may be 2 to 3% and ash contents of 10% or more are not uncommon in many productive coal beds. Carbonaceous material in the coal bed that contains excessive ash is frequently termed bone, bone coal, carbonaceous shale, black shale, rash, or any of a number of other locally used terms. The greater the ash content of a coal bed the lower is the heating value per unit weight of the coal, so that, in higher ash coal, more coal is required to produce a given amount of heat and disposal of the ash residue also becomes a problem. The ash content of a coal usually can be lessened by “washing” the coal in coal preparation plants. This usually entails grinding the coal to a specified size, then suspending it in a liquid with a specific gravity intermediate between that of coal and the mineral matter so that the mineral matter tends to sink in the solution and the coal tends to float. By repeating this procedure several times, a significant part of the mineral matter content can be removed from the coal.
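The washing step can be caricatured as a single float-sink pass; the particle list, labels, and medium gravity below are hypothetical:

```python
def float_sink(particles, medium_sg):
    # particles: (label, specific_gravity) pairs; material lighter
    # than the medium floats (clean coal), denser material sinks
    floats = [label for label, sg in particles if sg < medium_sg]
    sinks = [label for label, sg in particles if sg >= medium_sg]
    return floats, sinks

feed = [("coal", 1.30), ("bone", 1.70), ("pyrite", 5.0)]
clean, refuse = float_sink(feed, 1.5)
```

A real preparation plant repeats this kind of separation at several size fractions and medium gravities, but the principle of splitting on specific gravity is the same.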

Ash varies greatly in composition. It may contain varying amounts of silica and alumina derived from detrital minerals; iron oxides from siderite, pyrite, and marcasite; calcium oxides and carbonates from calcite; iron sulfide from pyrite and marcasite; and magnesium, sodium, potassium, phosphorus, and a wide range of trace elements (Tieman, 1973). Ash fusion temperatures, a measure of the temperatures at which coal ash begins to deform, softens, and becomes fluid, are important coal quality parameters that determine how the ash residue from a given coal will react when it is burned. Ash begins to deform at temperatures that range from 950° to 1700°C (1750° to 3100°F). Ash with fusion temperatures at the lower end of this spectrum is desirable in certain types of furnaces where ash is removed from the bottom in a liquid state but is undesirable in static fuel bed furnaces where removal of the residue is a difficult and costly process. Pyrite and marcasite (FeS2), siderite (FeCO3), calcite (CaCO3), and other carbonate minerals are frequently responsible for low fusion temperatures in ash, whereas high silica or alumina contents are associated with higher fusion temperatures.

Calorific Value: In the ASTM system, calorific value is one of the primary rank-defining parameters for bituminous, subbituminous, and lignitic coals. Calorific value is usually reported in British thermal units per pound, or calories per gram, and can be easily converted from one system to the other.
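The conversion between the two units mentioned above is a fixed ratio: 1 cal/g equals exactly 1.8 Btu/lb. A minimal sketch (the 12,600 Btu/lb sample value is an illustrative assumption):

```python
# Convert calorific values between British thermal units per pound
# and calories per gram: 1 cal/g = 1.8 Btu/lb.

def btu_per_lb_to_cal_per_g(btu_lb):
    return btu_lb / 1.8

def cal_per_g_to_btu_per_lb(cal_g):
    return cal_g * 1.8

print(btu_per_lb_to_cal_per_g(12600))  # -> 7000.0 cal/g
```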

Coal used for steam electric generation is sometimes sold at a fixed rate per million British thermal units, with penalties for excess ash or sulfur (Wier, 1976). Because many contracts specify calorific value on an “as-received” basis and most analytical results are reported on a “dry” basis, dry values must be converted to “as-received” values by means of the following formula:

Btu/lb (as-received) = Btu/lb (dry) × (100 − % moisture, as-received) / 100

Because percent moisture appears directly in this conversion, the importance of accurate moisture values cannot be overstated.
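The dry-to-as-received moisture correction can be sketched as follows; the sample values (12,000 Btu/lb dry, 25% as-received moisture) are illustrative assumptions.

```python
# Convert a "dry" basis calorific value to an "as-received" basis by
# scaling for the as-received moisture content (given in percent).

def dry_to_as_received(dry_btu_lb, moisture_pct):
    return dry_btu_lb * (100.0 - moisture_pct) / 100.0

# Example: 12,000 Btu/lb dry with 25% as-received moisture.
print(dry_to_as_received(12000, 25))  # -> 9000.0 Btu/lb as received
```

The same scaling applies in either unit system, which is one reason small errors in the moisture determination propagate directly into the contract value of the coal.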

Sulfur: Sulfur presents numerous problems in coal utilization. In combustion applications it can cause corrosion in the boiler or the buildup of heavy fouling in the boiler tubes. Large amounts of SO2 are also generated upon combustion and may contribute to atmospheric pollution unless removed by limestone-based stack scrubbers. The same potential corrosion and pollution problems also apply to the liquefaction, gasification, and coking processes with the additional concern that unacceptably high levels of sulfur might be passed along through the coke to the iron and steel resulting in an inferior product (Ward, 1984).

Three commonly recognized forms of sulfur in coal are sulfate sulfur, pyritic sulfur, and organic sulfur. Of these, sulfate is the least important; sulfate sulfur contents are frequently on the order of 0.1%, and large sulfate values are sometimes indicative of a weathered sample. Relative amounts of pyritic and organic sulfur vary widely; in some coals the total sulfur content is almost all organic, whereas in other coals it is virtually all pyritic. It is important to distinguish analytically between organic and pyritic sulfur in coal because at least some of the pyritic sulfur can be removed by specific gravity separation methods. Pyritic sulfur occurs in the minerals pyrite and marcasite and, depending on the particle size of these minerals and the size to which the coal is crushed, half or more of this sulfur can be eliminated. The portion of the pyritic sulfur that occurs in finely disseminated form throughout the coal cannot be removed by specific gravity separation methods, and the organic sulfur constituent, being part of the hydrocarbon structure of the coal, cannot be removed by conventional coal cleaning technology at all. Although promising laboratory techniques designed to remove organic sulfur are under investigation, no commercially feasible process is currently available. Until washability tests can provide specific data, Wier (1976) advocates taking the sum of the organic sulfur content and one-half the pyritic sulfur content as a preliminary indicator of the final total sulfur content of the cleaned coal.

Generally, only coals with low sulfur contents are used for steam electric generation. Average sulfur contents of coals received at U.S. power plants of 50 MW or greater generating capacity for the months October through December 1986 ranged from 0.16 to 5.6%; the overall average sulfur content of coal received at these same plants during the same period was 1.36% (U.S. Dept. of Energy, 1987).
Variations in maximum allowable sulfur contents in different areas are due primarily to differences in local regulations and to the presence of stack scrubbers at some facilities. The practice of blending different coals to achieve required specifications allows the use of some higher sulfur coals that would not otherwise be suitable for steam-electric generation.
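Wier’s (1976) rule of thumb above — all of the organic sulfur plus one-half of the pyritic sulfur — can be sketched as follows; the sample analysis values are illustrative assumptions.

```python
# Preliminary estimate of cleaned-coal total sulfur (Wier, 1976):
# the organic sulfur remains, plus roughly half of the pyritic sulfur
# (the finely disseminated portion that washing cannot remove).

def estimated_clean_sulfur(organic_pct, pyritic_pct):
    return organic_pct + 0.5 * pyritic_pct

# Illustrative raw-coal analysis: 0.8% organic, 1.2% pyritic sulfur.
print(round(estimated_clean_sulfur(0.8, 1.2), 2))  # -> 1.4 (% total S)
```

A result like this would then be checked against washability test data and against limits such as the roughly 1.5% ceiling for coking coals noted below.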

Likewise, in coke production, the use of a high sulfur coal results in a decrease in the amount of coke that can be produced from a given amount of coal. Coal that cannot be cleaned to a sulfur content of less than 1.5% is not likely to be used, even as a blend, for coke production.

Free Swelling Index: Another commonly performed analytical procedure for coal is the determination of the free swelling index (FSI). The FSI is considered useful, although not definitive, in evaluating the coking properties of a coal. It is a measure of the volume increase of a coal when it is heated under specified conditions and is reported on a scale from 0 to 9, with higher values considered superior from a coking standpoint. FSI values generally increase with rank up to anthracite, but values within a given rank may vary widely. Generally speaking, coals with FSI values of 2 or less probably are not suitable for coke production, and individual users may require higher minimum FSI values for their specific equipment. Other tests used to predict the coking potential of a given coal include the Audibert-Arnu dilatometer, Gieseler plastometer, and Gray-King coke type, but the FSI is still the most commonly reported procedure of its type.
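The FSI screening guideline above (values of 2 or less probably unsuitable for coking) can be sketched as a simple filter. The cutoff of 2 comes from the text; the seam names and FSI values are illustrative assumptions.

```python
# Screen candidate coals by free swelling index (FSI, scale 0 to 9).
# Per the guideline above, coals with FSI <= 2 are probably not
# suitable for coke production; users may set a stricter minimum.

def coking_candidates(samples, fsi_cutoff=2):
    """Return names of samples whose FSI exceeds the cutoff."""
    return [name for name, fsi in samples if fsi > fsi_cutoff]

samples = [("seam A", 1.5), ("seam B", 6.0), ("seam C", 2.0), ("seam D", 8.5)]
print(coking_candidates(samples))  # -> ['seam B', 'seam D']
```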

Ultimate Analysis: Ultimate analysis determines the percentages of the major constituent elements of coal. Determinations of hydrogen, carbon, nitrogen, oxygen, and total sulfur are reported. Typically, ultimate analyses are not performed on all coal samples but only on a representative number of samples. Data from ultimate analyses are used principally for research purposes and in certain classification systems, although there are commercial and industrial applications of the data. Specifically, ratios of carbon, hydrogen, and oxygen values are used to determine coal rank and as an aid in determining a coal’s suitability for coke manufacture, gasification, or liquefaction. Data on oxygen content also are used in calculating boiler efficiencies.

Nitrogen present in the coal may react to form ammonium compounds when coal is carbonized in the coking process. These compounds can be extracted and marketed as fertilizer or for use in the manufacture of nitric acid. Ammonium compounds are also formed in the gasification and liquefaction processes. Their formation, however, utilizes some of the available hydrogen that would otherwise be used in the formation of the more valuable hydrocarbon end products. Also, during coal combustion, nitrogen forms oxides which become atmospheric pollutants when released. For these reasons, low nitrogen contents are usually preferred in coal (Ward, 1984).

Other elements for which analyses are commonly sought include chlorine and phosphorus. Chlorine contributes to corrosion and fouling problems and possibly to atmospheric pollution. A knowledge of the chlorine content is also essential in determining other parameters, including total sulfur. Phosphorus, which is concentrated primarily in the mineral matter, is undesirable in coking coals because, like sulfur, it can contaminate the steel end product.

Trace Elements: Most coals contain a wide range of trace elements, some of which tend to concentrate in the organic fraction of the coal, while others have an inorganic affinity and are concentrated in the mineral matter. In some cases, trace element suites are distinctive enough to serve as aids in seam correlation or as indicators of the depositional environment. Boron, in particular, is more strongly associated with coals formed under marine influences. Some trace elements may act as catalysts or inhibitors during the complex reactions involved in coal conversion and may be transferred to the end products of those processes. Trace elements may also be released to the environment through combustion or through the weathering of the coal ash. Not all of the elements released into the environment are harmful, but concentrations of toxic elements such as lead, arsenic, cadmium, or mercury might preclude the use of certain coals rich in those elements. Alternatively, other trace elements may be considered potentially marketable byproducts of coal utilization. A list of trace elements and their concentrations in coals from different coal regions in the United States and Australia is given in Table 2.6.3.

Application of Coal Petrology: Coal petrology is the study by direct examination, usually microscopic, of the organic and inorganic components of coal. Petrologic studies form the basis for a broad range of relatively new techniques with technologic applications of importance to those involved in coal exploration. For a thorough treatment of the subject, see Bustin et al. (1985), from which the following comments are condensed.

Coal is a heterogeneous substance that is composed of components analogous to the minerals that are the constituents of inorganically derived rocks. These components are termed macerals and differ widely in physical and chemical properties and in their response to different technological processes. A knowledge of the petrographic composition of a given coal bed will allow the explorationist to predict its behavior in certain applications.

One area of technological application of coal petrology is in the area of coal cleaning by float-sink separation. Microscopic observations of the degree of intergrowth of the organic and inorganic constituents in both the float and sink fractions will indicate whether crushing to a finer size will increase the clean coal yield. Also, observing the type, distribution, and degree of intergrowth of the sulfur will give a preliminary indication of the probable methods of cleaning.

Another area where petrographic techniques have come to play a key role is in the production of coke. Using these techniques, predictions can be made concerning a coal’s fluidity, FSI, and volatile matter content. Extensive research efforts through the years have also shown that coal bed constituents can be classified as reactive or inert for given processes and that this information can be quantified to the point that the coal’s behavior can be predicted with some accuracy. Two salient concepts resulting from these research efforts concerning coke quality prediction are: (1) an optimum mix of reactive to inert components of a given rank of coal will produce the best coke, and (2) the percentages of this optimum mix will vary with rank. The importance of petrographic analysis to the steel industry is best illustrated by the fact that most steel producers now routinely conduct petrographic analyses to monitor blend quality and to evaluate new coals.

Although nearly 90% of the coal consumed in North America is used for combustion with most of that amount used for the generation of electricity, petrology has not played a significant role in the identification of desirable combustion characteristics. The primary reason for this is that factors most significant in defining the suitability of a coal for combustion are either not directly measurable by petrographic techniques or they are more easily determined by other methods. Even so, some useful relationships have been identified through petrologic studies and it is an area of continuing research.

Finally, coal conversion technologies (primarily liquefaction) utilize petrographic data in identifying optimal coals for conversion. Rank and ratio of reactive to inert constituents are primary factors in determining a coal’s suitability for conversion.


Source: http://books.smenet.org/Surf_Min_2ndEd/sm-ch05-sc02-ss00-bod.cfm