Group :: Development/Perl
Package: perl-WordNet-QueryData

WordNet-QueryData-1.49/ChangeLog
--------------------------------

2009-10-27  Jason Rennie

* README: update WNHOME/WNSEARCHDIR doc w/ Debian/Ubuntu tip

2009-03-20 Jason Rennie

* release 1.48
* fix handling of WNSEARCHDIR

2009-03-14 Danny Brian

* added the ability for new() to take a named param list
* added a new() param "noload" to not preload index files, but to
instead use Search::Dict lookups thereafter
* added _getIndexFH() and _getDataFH() to consolidate opening and
caching of filehandles
* added _dataLookup() to consolidate reads from data files
* added _indexLookup() to consolidate reads from index files
* added _indexOffsetLookup() to consolidate offset reads from index
files
* added _parseIndexLine() to consolidate the parsing of index file lines
* moved path data to new(), so that everything reads off of $self->{dir}
* removed the cntlinst path special-casing
* all file opens are deferred until necessary; for noload this means as
long as possible, for caching it means during the constructor (see
_get*FH() functions)
* documented "noload" option
* loop tests again for "noload"
* cleaned up formatting

2008-01-08 Jason Rennie

* release 1.47
* QueryData.pm: documentation fix

2007-05-06 Jason Rennie

* release 1.46
* QueryData.pm (version): removed (Invalid as of WN 3.0)
* test.pm: remove tests for pre-3.0 WN; update tests for WN 3.0
* README, QueryData.pm: update documentation of WN version to 3.0
* Makefile.PL: update default WN paths

2006-10-16 Jason Rennie

* release 1.45
* QueryData.pm (_initialize): move errorString and errorVal
initialization here (so that variables are specific to each
instantiation of the module)
* QueryData.pm (getResetError): return error information, reset
error variables
* QueryData.pm (offset): update to use error variables via $self
* test.pm: test error handling for offset function

2006-10-16 Jason Rennie

* release 1.44
* QueryData.pm: add "use vars" for new variables
* QueryData.pm (offset): fix syntax error

2006-10-15 Jason Rennie

* release 1.43
* QueryData.pm (offset): add error handling
* QueryData.pm ($errorString): new package variable
* QueryData.pm ($errorVal): new package variable

2006-10-05 Jason Rennie

* release 1.42

2006-10-05 Hugh S. Myers

* QueryData.pm (offset): return undef if anything is undefined

2005-12-30 Jason Rennie

* release 1.40

2005-12-30 Gregory Marton

* QueryData.pm (getWordPointers): $src is a hex, convert it to
decimal before trying to use it as an array index

2005-09-15 Jason Rennie

* release 1.39
* test.pl: remove old (1.6, 1.7, 1.71) tests
* test.pl: update tests for 2.1
* QueryData.pm (@indexFilePC,@dataFilePC): delete; no longer used (2.1)
* QueryData.pm (@indexFileUnix,@dataFileUnix): remove Unix suffix
* QueryData.pm (loadIndex,openData): update for new variable names
* QueryData.pm: update instances of WordNet 2.0 to 2.1
* QueryData.pm: update beginning of file w/ correct copyright, remove
my e-mail address
* QueryData.pm (%lexnames): new variable; encodes data from old
lexnames file
* QueryData.pm (_initialize): don't load lex names
* QueryData.pm (loadLexnames): removed; no longer useful
* QueryData.pm (lexname): use new %lexnames variable
* QueryData.pm ($lexnamesFile): deleted; no longer used

2005-09-14 Jason Rennie

* QueryData.pm (indexFilePC,dataFilePC): now considered to be
deprecated (since WordNet 2.1 uses unix file names exclusively
now)
* QueryData.pm: delete deprecated variables and functions
* QueryData.pm (%relation_sym): deleted
* QueryData.pm (remove_duplicates): deleted
* QueryData.pm (get_pointers): deleted
* QueryData.pm (get_all_words): deleted
* QueryData.pm (get_words): deleted
* QueryData.pm (query): deleted
* QueryData.pm (valid_forms): deleted
* QueryData.pm (list_all_words): deleted
* README: update to reflect known validForms bug
* README: update copyright dates
* QueryData.pm: update documentation to reflect known validForms bug
* QueryData.pm: update copyright dates

2005-09-14 Ben Haskell

* QueryData.pm: Note that many hypo/hyper-nym relationships have
been replaced by "instance of" and "has instance" relations; the
new querySense arguments are 'inst' and 'hasi'; 'hypes' returns
all hypernym and "instance of" relations; 'hypos' returns all
hyponyms and "has instance" relations

* QueryData.pm: update for WordNet 2.1: add new entries
(inst,hypes,hasi,hypos) in %relNameSym and relSymName

2004-11-10 Beata Klebanov

* QueryData.pm: Identified the following bug:
validForms("hating#v") returns ["hate#v", "hat#v"]; problem is
that only the first matching rule of detachment (with a
corresponding entry in Wordnet) should be used.
validForms/forms/_forms/tokenDetach code needs to be restructured
to reflect this.

2004-11-10 Beata Klebanov

* release 1.38
* QueryData.pm (tokenDetach): change match pattern from \w+ to .+;
old code wasn't matching words with hyphens! (e.g. go-karts)
* test.pl: added "go-karts" test

2004-11-10 Jason Michelizzi

* release 1.37
* QueryData.pm (frequency): new function
* QueryData.pm (documentation): add "frequency" entry
* test.pl: added tests for frequency()

2004-11-10 Jason Rennie

* release 1.36
* QueryData.pm (documentation): revise queryWord vs. querySense
documentation

2004-09-03 Jason Rennie

* README: add pointer to NMake information for Windows

2004-08-25 Jason Rennie

* release 1.35
* QueryData.pm (loadExclusions): if there are two entries for the
same word, append to the list created by earlier entries (thanks
to Jerry D. Hedden for spotting the bug: validForms("involucra")
didn't produce involucre#n due to the later involcra->involcrum
exclusion entry)
* test.pl: add involucra->involucre test

2004-08-24 Jason Rennie

* release 1.34
* QueryData.pm (lexnamesFile): new top-level variable
* QueryData.pm (loadLexnames): new function
* QueryData.pm (lexname): new function
* QueryData.pm: add lexname documentation
* test.pl: add lexname tests
* README: remove e-mail addresses (to reduce spam)
* ChangeLog: remove e-mail addresses (to reduce spam)

2004-07-14 Jason Rennie

* release 1.33
* QueryData.pm: Update "LOCATING THE WORDNET DATABASE"
documentation (kudos to Jason R Michelizzi for pointing out that
this needed updating)

2003-10-08 Jason R Michelizzi

* release 1.31
* QueryData.pm: add new symbols to %relNameSym and %relSymName;
QueryData should work perfectly with WordNet 2.0 now.
Changed documentation to reflect this.
* test.pl: updated tests for 2.0

2003-09-25 Jason Rennie

* test.pl: move "cat#n" test to version-specific area (8 senses in 2.0)

2003-09-17 Jason R Michelizzi

* release 1.30
* QueryData.pm (queryWord): updates to fix handling of type (2) strings
* test.pl: updated queryWord tests

2003-09-08 Jason R Michelizzi

* release 1.29
* QueryData.pm (getWord): debug
* QueryData.pm (queryWord): debug
* test.pl: updated queryWord tests

2003-05-02 Siddharth A Patwardhan

* QueryData.pm (loadIndex): remember WordNet path

2003-04-03 Jason Rennie

* release 1.28

* QueryData.pm ($version): make into a package variable

* QueryData.pm (queryWord): warn if called; certain aspects of
queryWord do not work as they should; see commented out tests in
test.pl for examples of what doesn't work

2002-12-04 Jason Rennie

* release 1.27

* test.pl: add detachment tests; postpone queryWord tests

* QueryData.pm: add new detachment rules; warn that 1.6 and 1.7
are no longer officially supported

2002-08-06 Jason Rennie

* release 1.26
* QueryData.pm (wnPrefixUnix, wnPrefixPC): don't prepend WNHOME to
WNSEARCHDIR

2002-07-30 Jason Rennie

* release 1.25
* QueryData.pm (validForms): return forms for all POS's if no POS
passed
* QueryData.pm: only lowercase when necessary (index.* lookups)
* QueryData.pm (getAllSenses): return word straight from data.*
* QueryData.pm (getSense): return word straight from data.*
* QueryData.pm (querySense): type (1) & (2): use same capitalization
as query
* QueryData.pm (queryWord): type (1): use same capitalization as query
* QueryData.pm (getWordPointers): lower($word), ignore case in
comparison
* QueryData.pm (loadExclusions): don't lowercase words
* QueryData.pm (loadIndex): don't lowercase words
* QueryData.pm (forms): don't lowercase words
* QueryData.pm (delMarker): new function
* QueryData.pm (underscore): new function
* QueryData.pm (tagSenseCnt): new function
* QueryData.pm (loadIndex): store tagsense_cnt information
* Makefile.PL: update default directories to v1.7.1
* QueryData.pm (wnHomeUnix, wnHomePC): update to v1.7.1
* test.pl: update for v1.7.1
* test.pl: add tests for syntactic marker, capitalization, tagSenseCnt
* QueryData.pm (documentation): Use "type" to describe different
levels of query specificity

2002-06-27 Jason Rennie

* release 1.24
* test.pl: add test to check for hex parsing
* QueryData.pm (getWordPointers): bugfix: parse source/target as
hexadecimal (thanks to Peter Turney for spotting the bug)

2002-06-20 Jason Rennie

* release 1.23
* test.pl: add test to check for hex conversion
* QueryData.pm (getSensePointers): convert $st from hexadecimal
* QueryData.pm (getWordPointers): convert $tgt from hexadecimal

2002-06-12 Jason Rennie

* release 1.22
* Makefile.PL: die if WNHOME isn't set and neither the windows
nor the unix default directory exists (so that CPAN tests will
die prematurely if WordNet is not installed---will prevent false
FAIL's)
* QueryData.pm (getWord): bugfix: convert $w_cnt from hexadecimal
(thanks to Peter Turney for spotting the bug)

2002-06-11 Jason Rennie

* release 1.21
* test.pl: make tests reflect changes
* QueryData.pm (_forms): return original word (in addition to morph
exclusion or rules of detachment forms)

2002-06-11 Jason Rennie

* release 1.20
* QueryData.pm (_forms): new function
* QueryData.pm (forms): check morph exclusions for tokens of
collocations; use a recursive organization.

2002-06-11 Jason Rennie

* release 1.19
* test.pl: add tests to cover changes
* QueryData.pm (tokenDetach): new function
* QueryData.pm (forms): if word is in morph exclusion table, return
that entry, otherwise use rules of detachment; don't check morph excl.
table for parts of collocations (may want to change this later)
* QueryData.pm (querySense): given word#pos query, use underscores for
spaces in any returned words
* QueryData.pm (lower): translate ' ' to '_'

2002-04-12 Jason Rennie

* release 1.18
* QueryData.pm (querySense): fix bug in "glos" lookup (no variable
for pattern match, m//)
* test.pl: some tests still used old functions; use new ones

2002-04-07 Jason Rennie

* release 1.17
* QueryData.pm: update documentation
* test.pl: update to use new functions; increment counter to print
test numbers; only run relevant tests (use version)
* QueryData.pm (relSymName, relNameSym): new maps
* QueryData.pm (getSense, getWord, getSensePointers,
getWordPointers, querySense, queryWord): important new query
functions; distinguish between sense relations and word/lemma
relations
* QueryData.pm (removeDuplicates, getAllSenses, validForms,
listAllWords): renamed functions
* QueryData.pm (remove_duplicates, get_all_words, valid_forms,
list_all_words, get_word, query, get_pointers): deprecated
functions
* QueryData.pm (lower): strip syntactic marker (if any)
* QueryData.pm (loadExclusions, loadIndex, openData): correctly
handle WNSEARCHDIR environment variable; use correct directory
separator (PC/Unix)
* QueryData.pm (version): new function

2002-04-05 Jason Rennie

* QueryData.pm (querySense, queryWord): new functions
* QueryData.pm (query): Deprecated. Use querySense instead.

2002-04-03 Jason Rennie

* README: add citation

2002-04-02 Jason Rennie

* QueryData.pm: revise man page
* test.pl: update numbering
* QueryData.pm (load_index): try both Unix and PC file names
* QueryData.pm (open_data): try both Unix and PC file names
* release 1.16

2002-04-01 Jason Rennie

* QueryData.pm (load_exclusions): words may have multiple listed
exclusions in the *.exc files; fix code to read in all of them;
bug reported by Bano
* release 1.15

2002-03-21 Jason Rennie

* QueryData.pm (offset): use 'defined' to check for good query string;
bug ("0#n#1") discovered by Bano
* release 1.14

2001-11-25 Jason Rennie

* QueryData.pm (get_word): eliminate syntactic marker (previously
fixed this in get_all_words); bug discovered by Satanjeev Banerjee
* test.pl: add syntactic marker check (authority#n#4)
* release 1.13

2001-11-22 Jason Rennie

* test.pl ($wnDir): use WNHOME environment variable
* test.pl: update tests for WordNet 1.7; identify which tests work
for 1.6 and which work for 1.7
* README (CUSTOM DIRECTORY): new section
* release 1.12

2001-11-22 Eric Joanis

* QueryData.pm (remove_duplicates): new function
* QueryData.pm (forms): use it

2001-09-12 John Asmuth

* QueryData.pm ($wordnet_dir): use WNHOME environment variable

2001-02-26 Luigi Bianchi

* README: added an installation procedure for Windows

2000-09-12 Jason Rennie

* QueryData.pm (list_all_words): new function
* test.pl: add test for list_all_words

2000-05-04 Jason Rennie

* QueryData.pm (_initialize, query): explicitly set value of input
record separator; restore old value before returning

2000-03-31 Eric Joanis

* QueryData.pm (%relation_sym): '#' is holonym symbol, '%' is
meronym symbol. Previously had this backwards.

2000-03-28 Eric Joanis

* QueryData.pm (%relation_sym): don't escape dollar-sign
* QueryData.pm (get_all_words): fix problems with "new to(p)"
by removing "(p)"

2000-03-22 Jason Rennie

* QueryData.pm (offset): new function
* test.pl: add test for offset

2000-03-21 Jason Rennie

* QueryData.pm (get_pointers, get_all_words): fix minor bug in
data file parsing

2000-03-10 Jason Rennie

* test.pl: Test for new glos code

2000-03-10 John Matiasek

* QueryData.pm: Allow access to glossary definitions (glos)

2000-02-22 Jason Rennie

* QueryData.pm: Rewrite documentation; disallow long relation names
* QueryData.pm (query): use single regex (like valid_forms);
clean up code a bit
* QueryData.pm (level): new function

2000-02-02 Keith J. Miller

* QueryData.pm (forms): make consistent with query syntax;
don't return immediately if we find a morph exclusion entry
* QueryData.pm (valid_forms): new function
* test.pl: make tests consistent with changes, add checks for
new function

1999-10-22 Jason Rennie

* QueryData.pm: update documentation to look a bit nicer

1999-09-15 Jason Rennie

* README: make specific to QueryData
* README: rewrite to correspond to Query
* test.pl: rename from my_test.pl
* my_test.pl: add test 12; test long POS names
* QueryData.pm (get_all_words): fix
* QueryData.pm: allow long relation names; allow long POS names;
check for illegal POS

1999-09-14 Jason Rennie

* my_test.pl: nice test suite for QueryData 1.2.
* QueryData.pm: first draft of direct access to WordNet data
files; 'new'ing is slow; about 15 seconds on my PII/400. Memory
consumption using WordNet 1.6 is appx. 16M. Still need to
integrate forms into query. query requires the word form to be
exactly like that in WordNet (although capitalization may differ)

1999-09-13 Jason Rennie

* my_test.pl: test corpus for QueryData

1999-09-13 Jason Rennie

* QueryData.pm: access data files directly; use a more OO style of
coding; initialization (new) code is pretty much done; forms is
done

WordNet-QueryData-1.49/MANIFEST
-------------------------------

ChangeLog
Makefile.PL
README
test.pl
QueryData.pm
MANIFEST
META.yml Module meta-data (added by MakeMaker)

WordNet-QueryData-1.49/META.yml
-------------------------------

# http://module-build.sourceforge.net/META-spec.html
#XXXXXXX This is a prototype!!! It will change in the future!!! XXXXX#
name: WordNet-QueryData
version: 1.49
version_from:
installdirs: site
requires:

distribution_type: module
generated_by: ExtUtils::MakeMaker version 6.17

WordNet-QueryData-1.49/Makefile.PL
----------------------------------

use ExtUtils::MakeMaker;

# It is bad that the default WordNet directories are in two places,
# here and at the beginning of QueryData.pm ($wnHomeUnix and
# $wnHomePC). These need to be synchronized. I need to import those
# variables from QueryData.pm.

die "*** Please set the WNHOME environment variable to the location of your\n*** WordNet installation. QueryData.pm will not work otherwise.\n*** Alternatively, you can make the installation in the default\n*** location, C:\\Program Files\\WordNet\\3.0 on Windows, or /usr/local/WordNet-3.0 on unix.\n" unless exists $ENV{WNHOME} or exists $ENV{WNSEARCHDIR} or -d "C:\\Program Files\\WordNet\\3.0" or -d "/usr/local/WordNet-3.0";

WriteMakefile(
    'dist'    => { 'COMPRESS' => 'gzip', 'SUFFIX' => '.gz', },
    'NAME'    => 'WordNet::QueryData',
    'VERSION' => '1.49',
);

WordNet-QueryData-1.49/QueryData.pm
-----------------------------------

# -*- perl -*-
#
# Package to interface with WordNet (wn) database

# Run 'perldoc' on this file to produce documentation

# Copyright 1999-2006 Jason Rennie

# This module is free software; you can redistribute it and/or modify
# it under the same terms as Perl itself.

####### manual page & loadIndex ##########

# STANDARDS
# =========
# - upper case to distinguish words in function & variable names
# - use 'warn' to report warning & progress messages
# - begin 'warn' messages with "(fn)" where "fn" is function name
# - all non-trivial function calls should receive $self
# - syntactic markers are ignored

package WordNet::QueryData;

use strict;
use Carp;
use FileHandle;
use Search::Dict;
use File::Spec;
use Exporter;

##############################
# Environment/Initialization #
##############################

BEGIN {
use vars qw($VERSION @ISA @EXPORT @EXPORT_OK);
# List of classes from which we are inheriting methods
@ISA = qw(Exporter);
# Automatically loads these function names to be used without qualification
@EXPORT = qw();
# Allows these functions to be used without qualification
@EXPORT_OK = qw();
$VERSION = "1.49";
}

#############################
# Private Package Variables #
#############################

# Error variables
my $errorString = "";
my $errorVal = 0;

# Mapping of possible part of speech to single letter used by wordnet
my %pos_map = ('noun' => 'n',
'n' => 'n',
'1' => 'n',
'' => 'n',
'verb' => 'v',
'v' => 'v',
'2' => 'v',
'adjective' => 'a',
'adj' => 'a',
'a' => 'a',
# Adj satellite is essentially just an adjective
's' => 'a',
'3' => 'a',
'5' => 'a', # adj satellite
'adverb' => 'r',
'adv' => 'r',
'r' => 'r',
'4' => 'r');
# Mapping of possible part of speech to corresponding number
my %pos_num = ('noun' => '1',
'n' => '1',
'1' => '1',
'' => '1',
'verb' => '2',
'v' => '2',
'2' => '2',
'adjective' => '3',
'adj' => '3',
'a' => '3',
# Adj satellite is essentially just an adjective
's' => '3',
'3' => '3',
'adverb' => '4',
'adv' => '4',
'r' => '4',
'4' => '4');
# Mapping from WordNet symbols to short relation names
my %relNameSym = ('ants' => {'!'=>1},
'hype' => {'@'=>1},
'inst' => {'@i'=>1},
'hypes' => {'@'=>1,'@i'=>1},
'hypo' => {'~'=>1},
'hasi' => {'~i'=>1},
'hypos' => {'~'=>1,'~i'=>1},
'mmem' => {'%m'=>1},
'msub' => {'%s'=>1},
'mprt' => {'%p'=>1},
'mero' => {'%m'=>1, '%s'=>1, '%p'=>1},
'hmem' => {'#m'=>1},
'hsub' => {'#s'=>1},
'hprt' => {'#p'=>1},
'holo' => {'#m'=>1, '#s'=>1, '#p'=>1},
'attr' => {'='=>1},
'enta' => {'*'=>1},
'caus' => {'>'=>1},
'also' => {'^'=>1},
'vgrp' => {'$'=>1},
'sim' => {'&'=>1},
'part' => {'<'=>1},
'pert' => {'\\'=>1},
'deri' => {'+'=>1},
'domn' => {';c'=>1, ';r'=>1, ';u'=>1},
'dmnc' => {';c'=>1},
'dmnr' => {';r'=>1},
'dmnu' => {';u'=>1},
'domt' => {'-c'=>1, '-r'=>1, '-u'=>1},
'dmtc' => {'-c'=>1},
'dmtr' => {'-r'=>1},
'dmtu' => {'-u'=>1});

# Mapping from WordNet symbols to short relation names
my %relSymName = ('!' => 'ants',
'@' => 'hype',
'@i' => 'inst',
'~' => 'hypo',
'~i' => 'hasi',
'%m' => 'mmem',
'%s' => 'msub',
'%p' => 'mprt',
'#m' => 'hmem',
'#s' => 'hsub',
'#p' => 'hprt',
'=' => 'attr',
'*' => 'enta',
'>' => 'caus',
'^' => 'also',
'$' => 'vgrp', # '$' Hack to make font-lock work in emacs
'&' => 'sim',
'<' => 'part',
'\\' => 'pert',
'-u' => 'dmtu',
'-r' => 'dmtr',
'-c' => 'dmtc',
';u' => 'dmnu',
';r' => 'dmnr',
';c' => 'dmnc');

my %lexnames = ('00' => 'adj.all',
'01' => 'adj.pert',
'02' => 'adv.all',
'03' => 'noun.Tops',
'04' => 'noun.act',
'05' => 'noun.animal',
'06' => 'noun.artifact',
'07' => 'noun.attribute',
'08' => 'noun.body',
'09' => 'noun.cognition',
'10' => 'noun.communication',
'11' => 'noun.event',
'12' => 'noun.feeling',
'13' => 'noun.food',
'14' => 'noun.group',
'15' => 'noun.location',
'16' => 'noun.motive',
'17' => 'noun.object',
'18' => 'noun.person',
'19' => 'noun.phenomenon',
'20' => 'noun.plant',
'21' => 'noun.possession',
'22' => 'noun.process',
'23' => 'noun.quantity',
'24' => 'noun.relation',
'25' => 'noun.shape',
'26' => 'noun.state',
'27' => 'noun.substance',
'28' => 'noun.time',
'29' => 'verb.body',
'30' => 'verb.change',
'31' => 'verb.cognition',
'32' => 'verb.communication',
'33' => 'verb.competition',
'34' => 'verb.consumption',
'35' => 'verb.contact',
'36' => 'verb.creation',
'37' => 'verb.emotion',
'38' => 'verb.motion',
'39' => 'verb.perception',
'40' => 'verb.possession',
'41' => 'verb.social',
'42' => 'verb.stative',
'43' => 'verb.weather',
'44' => 'adj.ppl');

# WordNet data file names
my $lexnamesFile = "lexnames";
my @excFile = ("", "noun.exc", "verb.exc", "adj.exc", "adv.exc");
my @indexFile = ("", "index.noun", "index.verb", "index.adj", "index.adv");
my @dataFile = ("", "data.noun", "data.verb", "data.adj", "data.adv");

my $wnHomeUnix = defined($ENV{"WNHOME"}) ? $ENV{"WNHOME"} : "/usr/local/WordNet-3.0";
my $wnHomePC = defined($ENV{"WNHOME"}) ? $ENV{"WNHOME"} : "C:\\Program Files\\WordNet\\3.0";
my $wnPrefixUnix = defined($ENV{"WNSEARCHDIR"}) ? $ENV{"WNSEARCHDIR"} : "$wnHomeUnix/dict";
my $wnPrefixPC = defined($ENV{"WNSEARCHDIR"}) ? $ENV{"WNSEARCHDIR"} : "$wnHomePC\\dict";

END { } # module clean-up code here (global destructor)

###############
# Subroutines #
###############

# report WordNet version
# Invalid way of identifying version as of WordNet 3.0
#sub version { my $self = shift; return $self->{version}; }


sub getResetError#
{
my $self = shift;
my $tmpString = $self->{errorString};
my $tmpVal = $self->{errorVal};
$self->{errorString} = "";
$self->{errorVal} = 0;
return ($tmpString, $tmpVal);
}

# convert to lower case, translate ' ' to '_' and eliminate any
# syntactic marker
sub lower#
{
my $word = shift;
$word =~ tr/A-Z /a-z_/;
$word =~ s/\(.*\)$//;
return $word;
}

# translate ' ' to '_'
sub underscore#
{
$_[0] =~ tr/ /_/;
return $_[0];
}

# Eliminate any syntactic marker
sub delMarker#
{
$_[0] =~ s/\(.*\)$//;
return $_[0];
}

# Perform all initialization for new WordNet class instance
sub _initialize#
{
my $self = shift;
warn "Loading WordNet data...\n" if ($self->{verbose});
# Ensure that input record separator is "\n"
my $old_separator = $/;
$/ = "\n";

# Load morphology exclusion mapping, indexes, open data file handles
unless ($self->{noload}) {
$self->loadExclusions ();
}
$self->loadIndex ();
$self->openData ();

$self->{errorString} = "";
$self->{errorVal} = "";
warn "Done.\n" if ($self->{verbose});

# Return setting of input record separator
$/ = $old_separator;
}

sub new#
{
# First argument is class
my $class = shift;

my $self = {};
bless $self, $class;

# try to preserve old calling syntax, at least for dir
if (scalar @_ == 1) {
$self->{dir} = shift;
}
# but allow an extensible params syntax
else
{
my %params = @_;
$self->{dir} = $params{dir} if $params{dir};
$self->{verbose} = $params{verbose} if $params{verbose};
$self->{noload} = $params{noload} if $params{noload};
}

warn "Dir = ", $self->{dir}, "\n" if ($self->{verbose});
warn "Verbose = ", $self->{verbose}, "\n" if ($self->{verbose});
warn "Noload = ", $self->{noload}, "\n" if ($self->{verbose});

## set $self->{dir} here and avoid the confusion later on, and the {wnpath} stuff.
## also fix up path endings to have trailing slashes if they didn't come that way.
if (-e $wnPrefixUnix) {
$self->{dir} ||= $wnPrefixUnix;
$self->{dir} .= "/" if $self->{dir} !~ m|/$|;
} elsif (-e $wnPrefixPC) {
$self->{dir} ||= $wnPrefixPC;
$self->{dir} .= "\\" if $self->{dir} !~ m|\\$|;
}

$self->_initialize ();
return $self;
}

# Object destructor
sub DESTROY#
{
my $self = shift;

for (my $i=1; $i <= 4; $i++) {
undef $self->{data_fh}->[$i];
}
}

# Load mapping to non-standard canonical form of words (morphological
# exceptions)
sub loadExclusions#
{
my $self = shift;
warn "(loadExclusions)" if ($self->{verbose});

for (my $i=1; $i <= 4; $i++)
{
my $file = $self->{dir} . "$excFile[$i]";
my $fh = new FileHandle($file);
die "Not able to open $file: $!" if (!defined($fh));

while (my $line = <$fh>)
{
my ($exc, @word) = split(/\s+/, $line);
next if (!@word);
$self->{morph_exc}->[$i]->{$exc} ||= [];
push @{$self->{morph_exc}->[$i]->{$exc}}, @word;
}
}
}

sub loadIndex#
{
my $self = shift;
warn "(loadIndex)" if ($self->{verbose});

for (my $i=1; $i <= 4; $i++)
{
my $file = $self->{dir} . "$indexFile[$i]";
${$self->{indexFilePaths}}[$i] = $file;

if (!$self->{noload})
{
my $fh = $self->_getIndexFH($pos_num{$i});
my $line;
while ($line = <$fh>) {
$self->{version} = $1 if ($line =~ m/WordNet (\S+)/);
last if ($line =~ m/^\S/);
}
while (1) {
my ($lemma, $pos, $offsets, $sense_cnt, $p_cnt) = $self->_parseIndexLine($line);
$self->{"index"}->[$pos_num{$pos}]->{$lemma} = $offsets;
$self->{"tagsense_cnt"}->[$pos_num{$pos}]->{$lemma} = $sense_cnt;
$line = <$fh>;
last if (!$line);
}
warn "\n*** Version 1.6 of the WordNet database is no longer being supported as\n*** of QueryData 1.27. It may still work, but consider yourself warned.\n" if ($self->{version} eq "1.6");
warn "\n*** Version 1.7 of the WordNet database is no longer being supported as\n*** of QueryData 1.27. It may still work, but consider yourself warned.\n" if ($self->{version} eq "1.7");
}
}
}

# Open data files and return file handles
sub openData#
{
my $self = shift;
warn "(openData)" if ($self->{verbose});

for (my $i=1; $i <= 4; $i++)
{
my $file = $self->{dir} . "$dataFile[$i]";
${$self->{dataFilePaths}}[$i] = $file;
$self->_getDataFH($i);
}
}

# Remove duplicate values from an array, which must be passed as a
# reference to an array.
sub removeDuplicates
{
my ($self, $aref) = @_;
warn "(removeDupliates) array=", join(" ", @{$aref}), "\n"
if ($self->{verbose});

my $i = 0;
while ( $i < $#$aref ) {
if ( grep {$_ eq ${$aref}[$i]} @{$aref}[$i+1 .. $#$aref] ) {
# element at $i is duplicate--remove it
splice @$aref, $i, 1;
} else {
$i++;
}
}
}

# - transforms ending according to rules of detachment
# (http://www.cogsci.princeton.edu/~wn/doc/man1.7.1/morphy.htm).
# - assumes a single token (no collocations).
# - "#pos#sense" qualification NOT appended to returned words
# - always returns original word
sub tokenDetach#
{
my ($self, $string) = @_;
# The query string (word, pos and sense #)
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
warn "(forms) Sense number ignored\n" if (defined($sense));
die "(tokenDetach) bad part-of-speech: pos=$pos word=$word sense=$sense" if (!defined($pos) or !defined($pos_num{$pos}));
my @detach = ($word); # list of possible forms
if ($pos_num{$pos} == 1)
{
push @detach, $1 if ($word =~ m/^(.+)s$/);
push @detach, $1 if ($word =~ m/^(.+s)es$/);
push @detach, $1 if ($word =~ m/^(.+x)es$/);
push @detach, $1 if ($word =~ m/^(.+z)es$/);
push @detach, $1 if ($word =~ m/^(.+ch)es$/);
push @detach, $1 if ($word =~ m/^(.+sh)es$/);
push @detach, $1."man" if ($word =~ m/^(.+)men$/);
push @detach, $1."y" if ($word =~ m/^(.+)ies$/);
}
elsif ($pos_num{$pos} == 2)
{
push @detach, $1 if ($word =~ m/^(.+)s$/);
push @detach, $1."y" if ($word =~ m/^(.+)ies$/);
push @detach, $1 if ($word =~ m/^(.+e)s$/);
push @detach, $1 if ($word =~ m/^(.+)es$/);
push @detach, $1 if ($word =~ m/^(.+e)d$/);
push @detach, $1 if ($word =~ m/^(.+)ed$/);
push @detach, $1."e" if ($word =~ m/^(.+)ing$/);
push @detach, $1 if ($word =~ m/^(.+)ing$/);
}
elsif ($pos_num{$pos} == 3)
{
push @detach, $1 if ($word =~ m/^(.+)er$/);
push @detach, $1 if ($word =~ m/^(.+)est$/);
push @detach, $1 if ($word =~ m/^(.+e)r$/);
push @detach, $1 if ($word =~ m/^(.+e)st$/);
}
$self->removeDuplicates(\@detach);
return @detach;
}

# sub-function of forms; do not use unless you know what you're doing
sub _forms#
{
# Assume that word is canonicalized, pos is number
my ($self, $word, $pos) = @_;

my $lword = lower($word);
warn "(_forms) WORD=$word POS=$pos\n" if ($self->{verbose});
# if word is in morph exclusion table, return that entry
if ($self->{noload}) {
# for noload, only load exclusions when needed; we do cache these
# though because the list is short (40k) and used on repeated recursive
# calls.
if (! exists $self->{morph_exc}) {
$self->loadExclusions();
}
}
if (defined ($self->{morph_exc}->[$pos]->{$lword})) {
return ($word, @{$self->{morph_exc}->[$pos]->{$lword}});
}

my @token = split (/[ _]/, $word);
# If there is only one token, process via rules of detachment
return tokenDetach ($self, $token[0]."#".$pos) if (@token == 1);
# Otherwise, process each token individually, then string together colloc's
my @forms;
for (my $i=0; $i < @token; $i++) {
push @{$forms[$i]}, _forms ($self, $token[$i], $pos);
}

# Generate all possible token sequences (collocations)
my @rtn;
my @index;
for (my $i=0; $i < @token; $i++) { $index[$i] = 0; }
while (1) {
# String together one sequence of possibilities
my $colloc = $forms[0]->[$index[0]];
for (my $i=1; $i < @token; $i++) {
$colloc .= "_".$forms[$i]->[$index[$i]];
}
push @rtn, $colloc;
# think "adder" (computer architechture)
my $i;
for ($i=0; $i < @token; $i++) {
last if (++$index[$i] < @{$forms[$i]});
$index[$i] = 0;
}
# If we had to reset every index, we're done
last if ($i >= @token);
}
return @rtn;
}

# Generate list of all possible forms of how word may be found in WordNet
sub forms#
{
my ($self, $string) = @_;
# The query string (word, pos and sense #)
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
warn "(forms) Sense number ignored\n" if (defined($sense));
warn "(forms) WORD=$word POS=$pos\n" if ($self->{verbose});
die "(forms) Bad part-of-speech: pos=$pos" if (!defined($pos) or !defined($pos_num{$pos}));
my @rtn = _forms ($self, $word, $pos_num{$pos});
for (my $i=0; $i < @rtn; ++$i) {
$rtn[$i] .= "\#$pos";
}
return @rtn;
}


# $line is line from data file; $ptr is a reference to a hash of
# symbols; returns list of word#pos#sense strings
sub getSensePointers#
{
my ($self, $line, $ptr) = @_;
warn "(getSensePointers) ptr=", keys(%{$ptr}), " line=\"$line\"\n"
if ($self->{verbose});

my (@rtn, $w_cnt);
# $w_cnt is hexadecimal
(undef, undef, undef, $w_cnt, $line) = split (/\s+/, $line, 5);
$w_cnt = hex ($w_cnt);
for (my $i=0; $i < $w_cnt; ++$i) {
(undef, undef, $line) = split(/\s+/, $line, 3);
}
my $p_cnt;
($p_cnt, $line) = split(/\s+/, $line, 2);
for (my $i=0; $i < $p_cnt; ++$i) {
my ($sym, $offset, $pos, $st);
# $st "source/target" is 2-part hexadecimal
($sym, $offset, $pos, $st, $line) = split(/\s+/, $line, 5);
push @rtn, $self->getSense($offset, $pos)
if (hex($st)==0 and defined($ptr->{$sym}));
}
return @rtn;
}

# $line is line from data file; $ptr is a reference to a hash of
# symbols; $word is query word/lemma; returns list of word#pos#sense strings
sub getWordPointers#
{
my ($self, $line, $ptr, $word) = @_;
warn "(getWordPointers) ptr=", keys(%{$ptr}), " word=$word line=\"$line\"\n"
if ($self->{verbose});

my $lword = lower($word);
my (@rtn, $w_cnt);
(undef, undef, undef, $w_cnt, $line) = split (/\s+/, $line, 5);
$w_cnt = hex ($w_cnt);
my @word;
for (my $i=0; $i < $w_cnt; ++$i) {
($word[$i], undef, $line) = split(/\s+/, $line, 3);
}
my $p_cnt;
($p_cnt, $line) = split(/\s+/, $line, 2);
for (my $i=0; $i < $p_cnt; ++$i) {
my ($sym, $offset, $pos, $st);
# $st "source/target" is 2-part hexadecimal
($sym, $offset, $pos, $st, $line) = split(/\s+/, $line, 5);
next if (!$st);
my ($src, $tgt) = ($st =~ m/([0-9a-f]{2})([0-9a-f]{2})/);
push @rtn, $self->getWord($offset, $pos, hex($tgt))
if (defined($ptr->{$sym}) and ($word[hex($src)-1] =~ m/$lword/i));
}
return @rtn;
}

# return list of word#pos#sense for $offset and $pos (synset)
sub getAllSenses#
{
my ($self, $offset, $pos) = @_;
warn "(getAllSenses) offset=$offset pos=$pos\n" if ($self->{verbose});

my @rtn;
my $line = $self->_dataLookup($pos, $offset);
my $w_cnt;
(undef, undef, undef, $w_cnt, $line) = split(/\s+/, $line, 5);
$w_cnt = hex ($w_cnt);
my @words;
for (my $i=0; $i < $w_cnt; ++$i) {
($words[$i], undef, $line) = split(/\s+/, $line, 3);
}
foreach my $word (@words) {
$word = delMarker($word);
my $lword = lower ($word);
my @offArr = $self->_indexOffsetLookup($lword, $pos);
for (my $i=0; $i < @offArr; $i++) {
if ($offArr[$i] == $offset) {
push @rtn, "$word\#$pos\#".($i+1);
last;
}
}
}
return @rtn;
}

# returns word#pos#sense for given offset and pos
sub getSense#
{
my ($self, $offset, $pos) = @_;
warn "(getSense) offset=$offset pos=$pos\n" if ($self->{verbose});

my $line = $self->_dataLookup($pos, $offset);

my ($lexfn,$word);
(undef, $lexfn, undef, undef, $word, $line) = split (/\s+/, $line, 6);
$word = delMarker($word);
my $lword = lower($word);

my @offArr = $self->_indexOffsetLookup($word, $pos);
for (my $i=0; $i < @offArr; $i++) {
return "$word\#$pos\#".($i+1) if ($offArr[$i] == $offset);
}
die "(getSense) Internal error: offset=$offset pos=$pos";
}

sub _getIndexFH {
my $self = shift;
my $pos = shift;
my $fh = $self->{index_fh}->[$pos_num{$pos}] ||=
FileHandle->new ( ${$self->{indexFilePaths}}[$pos_num{$pos}] );
unless ($fh) {
die "Couldn't open index file: " . ${$self->{indexFilePaths}}[$pos_num{$pos}];
}
return $fh;
}

sub _getDataFH {
my $self = shift;
my $pos = shift;
my $fh = $self->{data_fh}->[$pos_num{$pos}] ||=
FileHandle->new ( ${$self->{dataFilePaths}}[$pos_num{$pos}] );
unless ($fh) {
die "Couldn't open data file: " . ${$self->{indexFilePaths}}[$pos_num{$pos}];
}
return $fh;
}

## returns the offset(s) given word, pos, and sense
sub _indexOffsetLookup {
my $self = shift;
my ($word, $pos, $sense) = @_;
my $lword = lower ($word);
# print STDERR "(_indexOffsetLookup) $word $pos $sense\n";
if ($sense) {
my $offset;
if ($self->{noload}) {
my $line = $self->_indexLookup($pos, $lword);
my ($lemma, $pos, $offsets, $sense_cnt, $p_cnt) = $self->_parseIndexLine($line);
$offset = $$offsets[$sense - 1] if ($lemma eq $lword); ## remember that look always succeeds
}
else
{
$offset = (unpack "i*", $self->{"index"}->[$pos_num{$pos}]->{$lword})[$sense-1]
if (exists $self->{"index"}->[$pos_num{$pos}]->{$lword});
}
return $offset;
}
else
{
my @offsets = ();
if ($self->{noload}) {
my $line = $self->_indexLookup($pos, $lword);
my ($lemma, $pos, $offsets, $sense_cnt, $p_cnt) = $self->_parseIndexLine($line);
@offsets = @$offsets if ($lemma eq $lword);
}
else
{
if (defined($self->{"index"}->[$pos_num{$pos}]->{$lword})) {
@offsets = (unpack "i*", $self->{"index"}->[$pos_num{$pos}]->{$lword});
}
}
return @offsets;
}
}

## returns line from index file
sub _indexLookup {
my $self = shift;
my ($pos, $word) = @_;
my $fh = $self->_getIndexFH($pos);
look($fh, $word, 0);
my $line = <$fh>;
return $line;
}

## returns line from data file
sub _dataLookup {
my $self = shift;
my ($pos, $offset) = @_;
my $fh = $self->_getDataFH($pos);
seek($fh, $offset, 0);
my $line = <$fh>;
return $line;
}

# returns word#pos#sense for given offset, pos and number
sub getWord#
{
my ($self, $offset, $pos, $num) = @_;
warn "(getWord) offset=$offset pos=$pos num=$num" if ($self->{verbose});

my $fh = $self->_getDataFH($pos);
seek $fh, $offset, 0;
my $line = <$fh>;
my $w_cnt;
(undef, undef, undef, $w_cnt, $line) = split (/\s+/, $line, 5);
$w_cnt = hex ($w_cnt);
my $word;
for (my $i=0; $i < $w_cnt; ++$i) {
($word, undef, $line) = split(/\s+/, $line, 3);
$word = delMarker($word);
# (mich0212) return "$word\#$pos" if ($i+1 == $num);
last if ($i+1 == $num);
}
my $lword = lower($word);
my @offArr = $self->_indexOffsetLookup($lword, $pos);
for (my $i=0; $i < @offArr; $i++) {
return "$word\#$pos\#".($i+1) if ($offArr[$i] == $offset);
}
die "(getWord) Bad number: offset=$offset pos=$pos num=$num";
}


#sub offset#
#{
# my ($self, $string) = @_;
#
# my ($word, $pos, $sense)
# = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
# warn "(offset) WORD=$word POS=$pos SENSE=$sense\n"
# if ($self->{verbose});
# die "(offset) Bad query string: $string"
# if (!defined($sense)
# or !defined($pos)
# or !defined($word)
# or !defined($pos_num{$pos}));
# my $lword = lower ($word);
# return (unpack "i*", $self->{"index"}->[$pos_num{$pos}]->{$lword})[$sense-1];
#}

# Return the WordNet data file offset for a fully qualified word sense
sub offset#
{
my ($self, $string) = @_;

my ($word, $pos, $sense)
= $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
warn "(offset) WORD=$word POS=$pos SENSE=$sense\n"
if ($self->{verbose});

if (!defined($sense)
or !defined($pos)
or !defined($word)
or !defined($pos_num{$pos})) {
$self->{errorVal} = 1;
$self->{errorString} = "One or more bogus arguments: offset($word,$pos,$sense)";
return;#die "(offset) Bad query string: $string";
}

my $lword = lower($word);
my $res = $self->_indexOffsetLookup($lword, $pos, $sense);

return $res if $res;

$self->{errorVal} = 2;
$self->{errorString} = "Index not initialized properly or `$word' not found in index";
return;
}

# Return the lexname for the type (3) query string
sub lexname#
{
my ($self, $string) = @_;

my $offset = $self->offset($string);
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
warn "(lexname) word=$word pos=$pos sense=$sense offset=$offset\n" if ($self->{verbose});
my $line = $self->_dataLookup($pos, $offset);
my (undef, $lexfn, undef) = split (/\s+/, $line, 3);
return $lexnames{$lexfn};
}

# Return the frequency count for the type (3) query string
# Added by mich0212 (12/1/04)
sub frequency
{
my ($self, $string) = @_;
my ($word, $pos, $sense) = $string =~ /^([^\#]+)\#([^\#]+)\#([^\#]+)$/;

unless (defined $word and defined $pos and defined $sense) {
croak "(frequency) Query string is not a valid type (3) string";
}

warn "(frequency) word=$word pos=$pos sense=$sense\n" if $self->{verbose};

my $cntfile = File::Spec->catfile ( $self->{dir} . 'cntlist.rev');
open CFH, "<$cntfile" or die "Cannot open $cntfile: $!";

# look() seek()s to the right position in the file
my $position = Search::Dict::look (*CFH, "$word\%", 0, 0);
while (<CFH>) {
if (/^$word\%(\d+):[^ ]+ (\d+) (\d+)/) {
next unless $pos_map{$1} eq $pos;
next unless $2 eq $sense;
close CFH;
return $3;
}
else {
last;
}
}
close CFH;
return 0;
}

sub querySense#
{
my $self = shift;
my $string = shift;

warn "(querySense) STRING=$string" if $self->{verbose};

# Ensure that input record separator is "\n"
my $old_separator = $/;
$/ = "\n";
my @rtn;

# get word, pos, and sense from second argument:
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
die "(querySense) Bad query string: $string" if (!defined($word));
my $lword = lower ($word);
die "(querySense) Bad part-of-speech: $string" if (defined($pos) && !$pos_num{$pos});

if (defined($sense)) {
my $rel = shift;
warn "(querySense) WORD=$word POS=$pos SENSE=$sense RELATION=$rel\n" if ($self->{verbose});
die "(querySense) Relation required: $string" if (!defined($rel));
die "(querySense) Bad relation: $rel"
if (!defined($relNameSym{$rel}) and !defined($relSymName{$rel})
and ($rel ne "glos") and ($rel ne "syns"));
$rel = $relSymName{$rel} if (defined($relSymName{$rel}));

my $offset = $self->_indexOffsetLookup($lword, $pos, $sense);
my $line = $self->_dataLookup($pos, $offset);

if (!$line) {
die "Line not found for offset $offset!";
}

if ($rel eq "glos") {
$line =~ m/.*\|\s*(.*)$/;
$rtn[0] = $1;
} elsif ($rel eq "syns") {
@rtn = $self->getAllSenses ($offset, $pos);
} else {
@rtn = $self->getSensePointers($line, $relNameSym{$rel});
}
}
elsif (defined($pos)) {
warn "(querySense) WORD=$word POS=$pos\n" if ($self->{verbose});
my @offsets = $self->_indexOffsetLookup($lword, $pos);
$word = underscore(delMarker($word));
for (my $i=0; $i < @offsets; $i++) {
push @rtn, "$word\#$pos\#".($i+1);
}
}
elsif (defined($word)) {
warn "(querySense) WORD=$word\n" if ($self->{verbose});
$word = underscore(delMarker($word));
for (my $i=1; $i <= 4; $i++) {
my ($offset) = $self->_indexOffsetLookup($lword, $i);
push @rtn, "$word\#".$pos_map{$i} if $offset;
}
}
else
{
warn "(querySense) no results being returned" if $self->{verbose};
}
# Return setting of input record separator
$/ = $old_separator;
return @rtn;
}

sub queryWord#
{
my $self = shift;
my $string = shift;

# (mich0212) warn "queryWord: WARNING: certain aspects of this function are broken. It needs\n a rewrite. Use at your own risk.\n";

# Ensure that input record separator is "\n"
my $old_separator = $/;
$/ = "\n";
my @rtn;

# get word, pos, and sense from second argument:
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
die "(queryWord) Bad query string: $string" if (!defined($word));
my $lword = lower ($word);
die "(queryWord) Bad part-of-speech: $string" if (defined($pos) && !$pos_num{$pos});

if (defined($sense)) {
my $rel = shift;
warn "(queryWord) WORD=$word POS=$pos SENSE=$sense RELATION=$rel\n"
if ($self->{verbose});
die "(queryWord) Relation required: $string" if (!defined($rel));
die "(queryWord) Bad relation: $rel"
if ((!defined($relNameSym{$rel}) and !defined($relSymName{$rel})));
$rel = $relSymName{$rel} if (defined($relSymName{$rel}));

my $offset = $self->_indexOffsetLookup($lword, $pos, $sense);
my $line = $self->_dataLookup($pos, $offset);
push @rtn, $self->getWordPointers($line, $relNameSym{$rel}, $word);
}
elsif (defined($pos))
{
warn "(queryWord) WORD=$word POS=$pos\n" if ($self->{verbose});
my @offsets = $self->_indexOffsetLookup($lword, $pos);
$word = underscore(delMarker($word));
for (my $i=0; $i < @offsets; $i++) {
push @rtn, "$word\#$pos\#".($i+1);
}
}
else
{
print STDERR "(queryWord) WORD=$word\n" if ($self->{verbose});

$word = underscore(delMarker($word));
for (my $i=1; $i <= 4; $i++) {
my $offset = $self->_indexOffsetLookup($lword, $i);
push @rtn, "$word\#".$pos_map{$i} if $offset;
}
}
# Return setting of input record separator
$/ = $old_separator;
return @rtn;
}

# return list of entries in wordnet database (in word#pos form)
sub validForms#
{
my ($self, $string) = @_;
my (@possible_forms, @valid_forms);

# get word, pos, and sense from second argument:
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
warn "(valid_forms) Sense number ignored: $string\n" if (defined $sense);
if (!defined($pos)) {
my @rtn;
push @rtn, $self->validForms($string."#n");
push @rtn, $self->validForms($string."#v");
push @rtn, $self->validForms($string."#a");
push @rtn, $self->validForms($string."#r");
return @rtn;
}

die "(valid_forms) Invalid part-of-speech: $pos" if (!defined($pos_map{$pos}));
@possible_forms = $self->forms ("$word#$pos");
@valid_forms = grep $self->querySense ($_), @possible_forms;

return @valid_forms;
}

sub _parseIndexLine {
my $self = shift;
my $line = shift;
my ($lemma, $pos, $sense_cnt, $p_cnt, $rline) = split(/\s+/, $line, 5);
for (my $i=0; $i < $p_cnt; ++$i) {
(undef, $rline) = split(/\s+/, $rline, 2);
}
my (undef, $tagsense_cnt, @offsets) = split(/\s+/, $rline);
## return offset list packed if caching, otherwise just use an array ref
if ($self->{noload}) {
return ($lemma, $pos, \@offsets, $tagsense_cnt);
}
else
{
return ($lemma, $pos, (pack "i*", @offsets), $tagsense_cnt);
}
}

# List all words in WordNet database of a particular part of speech
sub listAllWords#
{
my ($self, $pos) = @_;
if ($self->{noload}) {
my @words;
my $fh = $self->_getIndexFH($pos);
seek($fh, 0, 0);
for my $line (<$fh>) {
next if ($line =~ m/^\s/);
my ($lemma, @rest) = $self->_parseIndexLine($line);
push @words, $lemma;
}
return @words;
}
else
{
return keys(%{$self->{"index"}->[$pos_num{$pos}]});
}
}

# Return length of (some) path to root, plus one (root is considered
# to be level 1); $word must be word#pos#sense form
sub level#
{
my ($self, $word) = @_;
my $level;

for ($level=0; $word; ++$level)
{
($word) = $self->querySense ($word, "hype");
}
return $level;
}

sub tagSenseCnt
{
my ($self, $string) = @_;
# get word, pos, and sense from second argument:
my ($word, $pos, $sense) = $string =~ /^([^\#]+)(?:\#([^\#]+)(?:\#(\d+))?)?$/;
warn "(tagSenseCnt) Ignorning sense: $string" if (defined($sense));
die "Word and part-of-speech required word=$word pos=$pos" if (!defined($word) or !defined($pos) or !defined($pos_num{$pos}));
my $lword = lower($word);
return $self->_getTagSenseCnt($lword, $pos);
}

sub dataPath {
my $self = shift;
return $self->{dir};
}

sub _getTagSenseCnt {
my $self = shift;
my ($lword, $pos) = @_;
if ($self->{noload}) {
my $line = $self->_indexLookup($pos, $lword);
my ($lemma, $pos, $offsets, $tagsense_cnt) = $self->_parseIndexLine($line);
return $tagsense_cnt if ($lemma eq $lword);
}
else
{
return $self->{"tagsense_cnt"}->[$pos_num{$pos}]->{$lword};
}
}

# module must return true
1;
__END__

#################
# Documentation #
#################

=head1 NAME

WordNet::QueryData - direct perl interface to WordNet database

=head1 SYNOPSIS

  use WordNet::QueryData;

  my $wn = WordNet::QueryData->new( noload => 1);

  print "Synset: ", join(", ", $wn->querySense("cat#n#7", "syns")), "\n";
  print "Hyponyms: ", join(", ", $wn->querySense("cat#n#1", "hypo")), "\n";
  print "Parts of Speech: ", join(", ", $wn->querySense("run")), "\n";
  print "Senses: ", join(", ", $wn->querySense("run#v")), "\n";
  print "Forms: ", join(", ", $wn->validForms("lay down#v")), "\n";
  print "Noun count: ", scalar($wn->listAllWords("noun")), "\n";
  print "Antonyms: ", join(", ", $wn->queryWord("dark#n#1", "ants")), "\n";

=head1 DESCRIPTION

WordNet::QueryData provides a direct interface to the WordNet database
files. It requires the WordNet package
(http://www.cogsci.princeton.edu/~wn/). It allows the user direct
access to the full WordNet semantic lexicon. All parts of speech are
supported. By default the index and morphological exclusion tables are
loaded into memory at initialization, which makes lookups very fast; the
"noload" option instead defers loading and performs on-disk dictionary
lookups (see CACHING VERSUS NOLOAD below).

=head1 USAGE

=head2 LOCATING THE WORDNET DATABASE

To use QueryData, you must tell it where your WordNet database is.
There are two ways you can do this: 1) by setting the appropriate
environment variables, or 2) by passing the location to QueryData when
you invoke the "new" function.

QueryData knows about two environment variables, WNHOME and
WNSEARCHDIR. If WNSEARCHDIR is set, QueryData looks for WordNet data
files there. Otherwise, QueryData looks for WordNet data files in
WNHOME/dict (WNHOME\dict on a PC). If WNHOME is not set, it defaults
to "/usr/local/WordNet-3.0" on Unix and "C:\Program Files\WordNet\3.0"
on a PC. Normally, all you have to do is to set the WNHOME variable
to the location where you unpacked your WordNet distribution. The
database files are normally unpacked to the "dict" subdirectory.
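
For example, a minimal sketch (the install path is an assumption; adjust
it to your system) that sets WNHOME from within a script. Note that the
environment variable is read when QueryData.pm is loaded, so it must be
set before the "use":

  # assumed install location; adjust to your system
  BEGIN { $ENV{WNHOME} = "/usr/local/WordNet-3.0" }
  use WordNet::QueryData;

  my $wn = WordNet::QueryData->new;   # finds $WNHOME/dict automatically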

You can also pass the location of the database files directly to
QueryData. To do this, pass the location to "new":

my $wn = WordNet::QueryData->new("/usr/local/wordnet/dict");

You can instead call the constructor with a hash of params, as in:

  my $wn = WordNet::QueryData->new(
      dir     => "/usr/local/wordnet/dict",
      verbose => 0,
      noload  => 1
  );

When calling "new" in this fashion, two additional arguments are
supported; "verbose" will output debugging information, and "noload"
will cause the object to *not* load the indexes at startup.

=head2 CACHING VERSUS NOLOAD

The "noload" option results in data being retrieved using a
dictionary lookup rather than caching the indexes in RAM.
This method yields an immediate startup time but *slightly* (though
less than you might think) longer lookup time. For the curious, here
are some profile data for each method on a Core Duo Intel Mac (average
seconds over 10,000 iterations):

=head3 Caching versus noload times in seconds

                                      noload => 1    noload => 0
  ------------------------------------------------------------------
  new()                                   0.00001        2.55
  queryWord("descending")                 0.0009         0.0001
  querySense("sunset#n#1", "hype")        0.0007         0.0001
  validForms("lay down#2")                0.0004         0.0001

Obviously the new() comparison is not very useful, because nothing is
happening with the constructor in the case of noload => 1. Similarly,
lookups with caching are basically just hash lookups, and therefore very
fast. The lookup times for noload => 1 illustrate the tradeoff between
caching at new() time and using dictionary lookups.

Because of the lookup speed increase when noload => 0, many users will
find it useful to set noload to 1 during development cycles, and to 0
when RAM is less of a concern than speed. The bottom line is that
noload => 1 saves you over 2 seconds of startup time, and costs you about
0.0005 seconds per lookup.
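
A rough sketch (not part of the distribution) of how such a comparison
can be reproduced with the core Time::HiRes module; absolute numbers
will vary with your machine and WordNet installation:

  use Time::HiRes qw(gettimeofday tv_interval);
  use WordNet::QueryData;

  for my $noload (1, 0) {
      my $t0 = [gettimeofday];
      my $wn = WordNet::QueryData->new( noload => $noload );
      printf "new(noload => %d)  %.5f s\n", $noload, tv_interval($t0);

      $t0 = [gettimeofday];
      $wn->querySense("sunset#n#1", "hype");
      printf "querySense         %.5f s\n", tv_interval($t0);
  }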

=head2 QUERYING THE DATABASE

There are two primary query functions, 'querySense' and 'queryWord'.
querySense accesses semantic (sense to sense) relations; queryWord
accesses lexical (word to word) relations. The majority of relations
are semantic. Some relations, including "also see", antonym,
pertainym, "participle of verb", and derived forms are lexical.
See the following WordNet documentation for additional information:

http://wordnet.princeton.edu/man/wninput.5WN#sect3

Both functions take as their first argument a query string that takes
one of three types:

(1) word (e.g. "dog")
(2) word#pos (e.g. "house#n")
(3) word#pos#sense (e.g. "ghostly#a#1")

Types (1) or (2) passed to querySense or queryWord will return a list
of possible query strings at the next level of specificity. When type
(3) is passed to querySense or queryWord, it requires a second
argument, a relation. Relations generally only work with one function
or the other, though some relations can be either semantic or lexical;
hence they may work for both functions. Below is a list of known
relations, grouped according to the function they're most likely to
work with:

queryWord
---------
also - also see
ants - antonyms
deri - derived forms (nouns and verbs only)
part - participle of verb (adjectives only)
pert - pertainym (pertains to noun) (adjectives only)
vgrp - verb group (verbs only)

querySense
----------
also - also see
glos - word definition
syns - synset words
hype - hypernyms
inst - instance of
hypes - hypernyms and "instance of"
hypo - hyponyms
hasi - has instance
hypos - hyponyms and "has instance"
mmem - member meronyms
msub - substance meronyms
mprt - part meronyms
mero - all meronyms
hmem - member holonyms
hsub - substance holonyms
hprt - part holonyms
holo - all holonyms
attr - attributes (?)
sim - similar to (adjectives only)
enta - entailment (verbs only)
caus - cause (verbs only)
domn - domain - all
dmnc - domain - category
dmnu - domain - usage
dmnr - domain - region
domt - member of domain - all (nouns only)
dmtc - member of domain - category (nouns only)
dmtu - member of domain - usage (nouns only)
dmtr - member of domain - region (nouns only)

When called in this manner, querySense and queryWord will return a
list of related words/senses. Note that as of WordNet 2.1, many
hypernyms have become "instance of" and many hyponyms have become "has
instance."

Note that querySense and queryWord use type (3) query strings in
different ways. A type (3) string passed to querySense specifies a
synset. A type (3) string passed to queryWord specifies a specific
sense of a specific word.
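
For instance (examples taken from the synopsis above), a semantic query
against a synset versus a lexical query against a particular word sense:

  # semantic relation: hyponyms of the synset containing cat#n#1
  my @hyponyms = $wn->querySense("cat#n#1", "hypo");

  # lexical relation: antonyms of the specific word sense dark#n#1
  my @antonyms = $wn->queryWord("dark#n#1", "ants");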

=head2 OTHER FUNCTIONS

"validForms" accepts a type (1) or (2) query string. It returns a
list of all alternate forms (alternate spellings, conjugations,
plural/singular forms, etc.). The type (1) query returns alternates
for all parts of speech (noun, verb, adjective, adverb). WARNING:
Only the first form returned by validForms is certain to be valid
(i.e. recognized by WordNet). Remaining forms may not be valid.

"listAllWords" accepts a part of speech and returns the full list of
words in the WordNet database for that part of speech.

"level" accepts a type (3) query string and returns a distance (not
necessarily the shortest or longest) to the root in the hypernym
directed acyclic graph.

"offset" accepts a type (3) query string and returns the binary offset of
that sense's location in the corresponding data file.

"tagSenseCnt" accepts a type (2) query string and returns the tagsense_cnt
value for that lemma: "number of senses of lemma that are ranked
according to their frequency of occurrence in semantic concordance
texts."

"lexname" accepts a type (3) query string and returns the lexname of
the sense; see WordNet lexnames man page for more information.

"frequency" accepts a type (3) query string and returns the frequency
count of the sense from tagged text; see WordNet cntlist man page
for more information.
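
A brief sketch exercising several of these functions (query strings are
illustrative only; exact results depend on your WordNet version):

  my @alt   = $wn->validForms("lay down#v");   # alternate forms, e.g. "lie_down#v"
  my @nouns = $wn->listAllWords("noun");       # every noun lemma in the database
  my $depth = $wn->level("cat#n#1");           # distance to the hypernym root
  my $off   = $wn->offset("cat#n#1");          # offset into the data.noun file
  my $tsc   = $wn->tagSenseCnt("cat#n");       # tagsense_cnt for the lemma
  my $lex   = $wn->lexname("cat#n#1");         # e.g. "noun.animal"
  my $freq  = $wn->frequency("cat#n#1");       # frequency count from tagged text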

See test.pl for additional example usage.

=head1 NOTES

Requires access to WordNet database files (data.noun/noun.dat,
index.noun/noun.idx, etc.)

=head1 COPYRIGHT

Copyright 2000-2005 Jason Rennie. All rights reserved.

This module is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.

=head1 SEE ALSO

perl(1)

http://wordnet.princeton.edu/

http://people.csail.mit.edu/~jrennie/WordNet/

=cut

WordNet-QueryData-1.49/README
-----------------------------

WordNet::QueryData perl module
------------------------------

WordNet::QueryData provides a direct interface to the WordNet database
files. It requires the WordNet package
(http://wordnet.princeton.edu/). It allows the user direct access to
the full WordNet semantic lexicon. All parts of speech are supported
and access is generally very efficient because the index and morphological
exclusion tables are loaded at initialization. This initialization
step is slow (appx. 10-15 seconds), but queries are very fast
thereafter---thousands of queries can be completed every second.

PREREQUISITES
-------------

- Perl5
- WordNet Database Package version 3.0

DOCUMENTATION
-------------

Make sure to read the included man page ("perldoc QueryData.pm" or
"perldoc WordNet::QueryData" to extract).

The ChangeLog file lists a summary of changes to the code.

See http://groups.google.com/group/wn-perl for information on the mailing list.

WINDOWS INSTALLATION
--------------------

This assumes that perl was installed to the default location (C:\perl).

0) Make sure that you have installed WordNet to C:\Program Files\WordNet\3.0
1) Unpack the WordNet QueryData distribution
2) Create the directory C:\perl\site\lib\WordNet
3) Copy QueryData.pm to C:\perl\site\lib\WordNet
4) Run "perl test.pl" to test the installation

Alternatively, you can install NMake and use the Make installation steps.
See http://johnbokma.com/perl/make-for-windows.html for info on NMake.

MAKE INSTALLATION
-----------------

Installation uses the perl MakeMaker utility ('perldoc
ExtUtils::MakeMaker'). To build and test the distribution do:

perl Makefile.PL
make
make test

If "perl Makefile.PL" breaks or "make test" doesn't work at all ("not ok 1"),
you may not have the WNHOME or WNSEARCHDIR environment variables defined
correctly. Read the QueryData manual page ("perldoc QueryData.pm") to find out
how to tell it where your WordNet database is located (you'll need to edit
test.pl). Note that if you are using Debian/Ubuntu and have the standard
wordnet package installed, you should set WNSEARCHDIR to /usr/share/wordnet.

If any of the tests fail, send e-mail to the wn-perl mailing list (see
DOCUMENTATION).

If the tests run okay, install with (this may need to be run as root):

make install

CUSTOM DIRECTORY
----------------

To install WordNet::QueryData in /foo/bar/baz do:

mkdir /foo/bar/baz/WordNet
cp QueryData.pm /foo/bar/baz/WordNet

Make sure to add /foo/bar/baz to perl's @INC variable (e.g. -I/foo/bar/baz)
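
From within a script, the equivalent of the -I switch is a "use lib"
line before loading the module (the path here is just the example above):

  use lib '/foo/bar/baz';
  use WordNet::QueryData;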

COPYRIGHT
---------

Copyright (C) 1999-2006 Jason Rennie. All rights reserved.

This module is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.

CITATION
--------

If you use this software as a contribution to a published work, please
cite it like this:

@misc{Rennie00
,author = "Jason Rennie"
,title = "WordNet::QueryData: a {P}erl module for accessing the {W}ord{N}et
database"
,howpublished = "http://people.csail.mit.edu/~jrennie/WordNet"
,year = 2000
}

KNOWN BUGS
----------

validForms does not implement WordNet's morphological processing
correctly. Only the first element of the list returned by validForms
is guaranteed to be valid. Later elements may not be valid.
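
A conservative pattern, given the above (sketch only):

  # trust only the first form returned by validForms
  my ($form) = $wn->validForms("lay down#v");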
WordNet-QueryData-1.49/test.pl
#!/usr/bin/perl -w
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl test.pl'

# $Id: test.pl,v 1.40 2007/05/07 01:08:31 jrennie Exp $

my $i = 1;
BEGIN {
$| = 1;
}
END { print "not ok 1\n" unless $loaded; }
use WordNet::QueryData;
$loaded = 1;
print "ok ", $i++, "\n";

# Insert your test code below (better if it prints "ok 13"
# (correspondingly "not ok 13") depending on the success of chunk 13
# of the test code):

# run tests once for index/excl/data loading, and again without
for my $noload (1,0) {

my $wn;
if ($noload == 0) {
print "Loading index files. This may take a while...\n";
# Uses $WNHOME environment variable
$wn = WordNet::QueryData->new( verbose => 0 );
#my $wn = WordNet::QueryData->new("/scratch/jrennie/WordNet-2.1/dict");
}
else
{
$wn = WordNet::QueryData->new( noload => 1 );
}

#my $ver = $wn->version();
#print "Found WordNet database version $ver\n";

#print join("\n",$wn->listAllWords('n'));

($wn->querySense("sunset#n#1", "hype"))[0] eq "hour#n#2"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

scalar $wn->forms ("other sexes#1") == 3
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->forms ("fussing#2") == 3
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->forms ("fastest#3") == 3
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

scalar $wn->querySense ("rabbit") == 2
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

scalar $wn->querySense ("rabbit#n") == 3
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->querySense ("rabbit#n#1", "hypo") == 7
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

# check that underscore is added, syntactic marker is removed
($wn->querySense("infra dig"))[0] eq "infra_dig#a"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense("infra dig#a"))[0] eq "infra_dig#a#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense("infra dig#a#1", "syns"))[0] eq "infra_dig#a#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord("descending"))[0] eq "descending#a"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

($wn->querySense ("lay down#v#1", "syns"))[0] eq "lay_down#v#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->validForms ("lay down#v") == 2
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->validForms ("checked#v") == 1
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

scalar $wn->querySense ("child#n#1", "syns") == 12
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

(([$wn->validForms ("lay down#2")]->[1]) eq "lie_down#2"
and ([$wn->validForms ("ghostliest#3")]->[0]) eq "ghostly#3"
and ([$wn->validForms ("farther#4")]->[1]) eq "far#4")
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

($wn->querySense("authority#n#4", "attr"))[0] eq "certain#a#2"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

($wn->validForms("running"))[1] eq "run#v"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
# test capitalization
($wn->querySense("armageddon#n#1", "syns"))[0] eq "Armageddon#n#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense("World_War_II#n#1", "mero"))[1] eq "Battle_of_Britain#n#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
# test tagSenseCnt function

$wn->tagSenseCnt("academy#n") == 2
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

# test "ies" -> "y" rule of detachment
($wn->validForms("activities#n"))[0] eq "activity#n"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
# test "men" -> "man" rule of detachment
($wn->validForms("women#n"))[0] eq "woman#n"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

($wn->queryWord("dog"))[0] eq "dog#n"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord("dog#v"))[0] eq "dog#v#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord("dog#n"))[0] eq "dog#n#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord("tall#a#1", "ants"))[0] eq "short#a#3"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord("congruity#n#1", "ants"))[0] eq "incongruity#n#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

scalar $wn->querySense("cat#noun#8", "syns") == 6
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->querySense("car#n#1", "mero") == 29
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->querySense("run#verb") == 41
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->forms("axes#1") == 3
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord('shower#v#3', 'deri'))[0] eq 'shower#n#1'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->queryWord('concentrate#v#8', 'deri'))[0] eq 'concentration#n#4'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('curling#n#1', 'domn'))[0] eq 'Scotland#n#1'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('sumo#n#1', 'dmnr'))[0] eq "Japan#n#2"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('bloody#r#1', 'dmnu'))[0] eq 'intensifier#n#1'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('matrix_algebra#n#1', 'domt'))[0] eq "diagonalization#n#1"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('idiom#n#2', 'dmtu'))[0] eq 'euphonious#a#2'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('manchuria#n#1', 'dmtr'))[0] eq 'Chino-Japanese_War#n#1'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->validForms('involucra'))[0] eq 'involucre#n'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
$wn->lexname('manchuria#n#1') eq 'noun.location'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
$wn->lexname('idiom#n#2') eq 'noun.communication'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->validForms("go-karts"))[0] eq "go-kart#n"
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
# frequency() tests
$wn->frequency('thirteenth#a#1') == 1
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
$wn->frequency('night#n#1') == 163
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
$wn->frequency('cnn#n#1') == 0
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

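# error handling: offset() on an unknown word should set an error;
# getResetError() returns the error info and resets the error variables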
$wn->offset("notaword#n#1");
my @foo = $wn->getResetError();
$foo[1] == 2
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

($wn->queryWord('person#n#1', 'deri'))[0] eq 'personhood#n#1'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
($wn->querySense('acropetal#a#1', 'dmnc'))[0] eq 'botany#n#2'
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
scalar $wn->offset("0#n#1") == 13742358
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

scalar $wn->listAllWords("noun") == 117798
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
$wn->offset("child#n#1") == 9917593
? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";
my ($foo) = $wn->querySense ("cat#n#1", "glos");
($foo eq "feline mammal usually having thick soft fur and no ability to roar: domestic cats; wildcats ") ? print "ok ", $i++, "\n" : print "not ok ", $i++, "\n";

}
 